tsv_ | 1 year ago
- The models often create several tests within the same equivalence class, which barely expands test coverage
- They either skip parameterization, creating multiple redundant tests, or go overboard with 5+ parameters that make tests hard to read and maintain
- The models seem focused on "writing a test at any cost", often resorting to excessive mocking or monkey-patching without much thought
- The models don’t leverage existing helper functions or classes in the project, requiring me to upload the whole project context each time or customize GPTs for every individual project
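To illustrate the first two points, here is a minimal sketch of what the alternative looks like: instead of several near-identical tests in the same equivalence class, one parameterized test spanning distinct input classes. The `normalize_username` function and its cases are hypothetical, invented only for this example.

```python
import pytest


def normalize_username(name: str) -> str:
    # Toy function under test (assumed for illustration).
    if not name.strip():
        raise ValueError("empty username")
    return name.strip().lower()


# One parameterized test covering several variants of the "valid input"
# class; each case is labeled so failures stay readable.
@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Alice", "alice"),   # mixed case
        ("  bob  ", "bob"),   # surrounding whitespace
        ("CAROL", "carol"),   # all caps
    ],
)
def test_normalize_username_valid(raw, expected):
    assert normalize_username(raw) == expected


# A separate test for a genuinely different equivalence class: rejection.
def test_normalize_username_rejects_blank():
    with pytest.raises(ValueError):
        normalize_username("   ")
```

Keeping the parameter list short (two or three columns) avoids the opposite failure mode of 5+ parameters that obscure what each case is actually checking.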
Given these limitations, I primarily use LLMs for refactoring tests where the IDE isn't as efficient:
- Extracting repetitive code in tests into helpers or fixtures
- Merging multiple tests into a single parameterized test
- Breaking up overly complex parameterized tests for readability
- Renaming tests to maintain a consistent style across a module, without getting stuck on names
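The first item on that list looks roughly like this in pytest: repeated setup extracted into a fixture so each test states only its intent. The `Cart` class and both tests are hypothetical, made up for the sketch.

```python
import pytest


class Cart:
    # Toy class under test (assumed for illustration).
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


# Shared setup that previously appeared verbatim in every test,
# extracted into a single fixture.
@pytest.fixture
def cart_with_items():
    cart = Cart()
    cart.add("apple", 2)
    cart.add("pear", 3)
    return cart


def test_total(cart_with_items):
    assert cart_with_items.total() == 5


def test_item_count(cart_with_items):
    assert len(cart_with_items.items) == 2
```

This kind of mechanical extraction is exactly where an LLM tends to do well, since the transformation is local and the surrounding project context barely matters.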
deeviant | 1 year ago