
couchand | 1 year ago

Test code is code. It's as much of a burden as every other piece of code you are troubled with, so you must make it count. If you're finding it repetitive and formulaic, take that opportunity to identify the next refactoring.

Just churning out more near copies is not a good answer.


AlexandrB | 1 year ago

The problem with refactoring test code is twofold:

1. It can make it harder to see what's actually being tested if there are too many layers of abstraction in the test.

2. Complex test code can have significant bugs of its own that can result in false passes. What tests the test code?

Thus I generally see repetitive or copy/pasted test code as a necessary evil a lot of the time.
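The second point can be made concrete. Below is a minimal Python sketch (the helper and names are hypothetical, not from the thread) of a test helper whose own bug produces a false pass:

```python
# Hypothetical sketch of a false pass: a shared test helper whose
# own bug makes a broken fixture look green.
def assert_all_valid(items, validator):
    # Bug: an empty iterable passes trivially, so a fixture that
    # accidentally yields nothing raises no assertion at all.
    for item in items:
        assert validator(item), f"invalid item: {item!r}"

def is_positive(n):
    return n > 0

# The fixture is broken (empty), but the "test" still passes.
broken_fixture = []
assert_all_valid(broken_fixture, is_positive)  # passes silently
```

Nothing tests the helper itself, so the empty-input case goes unnoticed, which is exactly the "what tests the test code?" problem.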

munksbeer | 1 year ago

Absolutely this! I was very guilty of over-complicating test code with abstractions to reduce boilerplate, but it often resulted in code where you could not always tell what was being tested. And you'd end up with nonsensical tests when the next developer added tests without looking deeply at what the abstractions were doing.

I now find it is best to be very explicit in the individual test code about the conditions of that specific test.
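As a minimal Python sketch of that style (the function and values are hypothetical), every input and expected value sits inside the test itself, with no shared setup to chase down:

```python
# Hypothetical function under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # All conditions of this specific test are visible right here.
    assert apply_discount(100.00, 10) == 90.00

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(42.50, 0) == 42.50

test_ten_percent_discount()
test_zero_discount_leaves_price_unchanged()
```

The repetition is the point: a reader can verify each case without opening any other file.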

lolinder | 1 year ago

> If you're finding it repetitive and formulaic, take that opportunity to identify the next refactoring.

It doesn't really matter how many helper functions you extract from your test code, in the end you have to string them together and then make assertions, and that part will always be repetitive and formulaic. If you've extracted a lot of shared code, then it might look something like "do this high-level business thing and then check that this other high-level business thing is true". But that is still going to need to be written a dozen times to cover all the test cases, and you're still going to want test names that match the test content.
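A small Python sketch of what's left after extraction (helpers and domain names are invented for illustration): even with high-level helpers, every case is still the same helper-then-assert shape spelled out by hand.

```python
# Hypothetical shared helpers, already extracted.
def make_order(total):
    return {"total": total, "status": "new"}

def submit(order):
    order["status"] = "submitted" if order["total"] > 0 else "rejected"
    return order

# Each test still strings the helpers together and asserts,
# and the pattern repeats for every case you want covered.
def test_positive_order_is_submitted():
    order = make_order(10)
    assert submit(order)["status"] == "submitted"

def test_zero_total_order_is_rejected():
    order = make_order(0)
    assert submit(order)["status"] == "rejected"

test_positive_order_is_submitted()
test_zero_total_order_is_rejected()
```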

There's a certain amount of repetition and formulaic structure that will never go away, and that is exactly what Copilot is very good at.

causal | 1 year ago

LLMs are pretty good at anything that follows a pattern, even a really complex one. Unit tests often take a form similar to the n-shot prompting we do with LLMs: a series of statements and their answers (or, in the case of unit tests, a series of test names and their bodies). It makes sense to me that LLMs would excel here, and my own experience is that they are great at taking care of the low-hanging fruit when it comes to testing.

nasmorn | 1 year ago

I agree. A very high-impact change I made for an application my team is working on was enabling easy creation of test cases from production data. We deal with almost unknowable upstream data, and being able to cheaply turn a misbehaving input into a test case has reduced the time to find bugs tremendously.
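One way this can work, as a minimal Python sketch (the file layout, function names, and payload are all hypothetical, not the commenter's actual system): snapshot the awkward production payload to a JSON fixture, then replay it as a regression test against the parsing code.

```python
# Hypothetical sketch: capture a problematic production payload as a
# JSON fixture, then replay it through the code under test.
import json
import pathlib

def capture_case(name, payload, directory="fixtures"):
    # In production, dump the upstream payload that misbehaved.
    path = pathlib.Path(directory)
    path.mkdir(exist_ok=True)
    (path / f"{name}.json").write_text(json.dumps(payload))

def load_case(name, directory="fixtures"):
    return json.loads((pathlib.Path(directory) / f"{name}.json").read_text())

def normalize(payload):
    # Stand-in for the real logic under test.
    return {k.lower(): v for k, v in payload.items()}

# Capture once from production, replay forever in the test suite.
capture_case("odd_upstream_keys", {"FooBar": 1, "BAZ": 2})
assert normalize(load_case("odd_upstream_keys")) == {"foobar": 1, "baz": 2}
```

The appeal is that the fixture is real data, not a guess at what upstream might send, so the regression test exercises exactly the shape that broke.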