tmstieff | 3 years ago
"Aim for the highest level of integration while maintaining reasonable speed and cost"
My experience mirrors the author's. In any "real" business application, the unit tests end up mocking so many dependencies that changes become a chore; in many cases colleagues skip obvious refactors because the thought of updating 300 unit tests is out of the question. I've found much better success testing at the integration level. And to be clear, this means writing tests inside the same project that run against a real database. They should run as part of your build, both locally and in CI. The holy grail is probably writing all your business logic inside pure functions, and then unit testing those, while integration testing the outer layers for happy and error paths. But good luck trying to get your coworkers to think in pure functions.
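The split described above can be sketched roughly like this (a minimal illustration with hypothetical names like `orderTotalCents` and `checkout`, not anyone's actual codebase): the business rule is a pure function you can unit test with no mocks, while the I/O lives in a thin outer layer you would cover with integration tests against a real database.

```typescript
interface LineItem { sku: string; quantity: number; unitPriceCents: number; }

// Pure business logic: same inputs always produce the same output, no I/O.
// Unit testing this requires zero mocks.
function orderTotalCents(items: LineItem[], discountPercent: number): number {
  const subtotal = items.reduce((sum, i) => sum + i.quantity * i.unitPriceCents, 0);
  return Math.round(subtotal * (1 - discountPercent / 100));
}

// Impure outer layer: only loading and persisting happens here.
// Cover this with integration tests against a real database, not mocks.
async function checkout(
  orderId: string,
  db: {
    loadItems(id: string): Promise<LineItem[]>;
    saveTotal(id: string, cents: number): Promise<void>;
  }
): Promise<number> {
  const items = await db.loadItems(orderId);
  const total = orderTotalCents(items, 0);
  await db.saveTotal(orderId, total);
  return total;
}
```

Refactoring the outer layer (swapping the database client, say) leaves the pure-function unit tests untouched.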
lytefm|3 years ago
I've come to a similar conclusion. Functions don't necessarily have to be pure in the academic sense, though. But I feel like the more the business logic is decoupled from dependency injection, and the less it relies on some framework, the better.
It makes testing a lot easier, but it also helps code reuse. I've just been writing a one-off migration script where I could simply plug in parts of the core business logic. That would have been very annoying if the logic had relied on Angular, NestJS, or whatever.
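The reuse point can be sketched like this (hypothetical names, assuming a framework-free core module): because `normalizeEmail` imports nothing from any framework, the same function serves an HTTP handler, a queue worker, or a plain one-off migration script.

```typescript
// Core business logic with no framework imports: callable from anywhere.
function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

// One-off migration sketch: a plain script, no Angular/NestJS bootstrap needed.
function migrate(rows: { id: number; email: string }[]): { id: number; email: string }[] {
  return rows.map(r => ({ ...r, email: normalizeEmail(r.email) }));
}
```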
zebraflask|3 years ago
I actually avoid tests (aside from hands-on end-user testing) as much as possible, since they rarely seem to tell you anything you didn't already know.
rectang|3 years ago
Good! They shouldn't do the refactor.
Because "obvious" refactors often introduce bugs (e.g. copy/paste errors), and if developers can't be bothered to write tests to catch them, they're going to screw over the other team members and users who will be forced to deal with their bugs in production.
> The holy grail is probably writing all your business logic inside pure functions, and then unit testing those, while integration testing the outer layers for happy and error paths.
So settle for half a loaf.
Write all the easy unit tests first. The coverage will be very incomplete, but something is better than nothing.
Write all the easy integration tests next.
Never write the hard tests if you can help it.
mb7733|3 years ago
> Because "obvious" refactors often introduce bugs (e.g. copy/paste errors), and if developers can't be bothered to write tests to catch them, they're going to screw over the other team members and users who will be forced to deal with their bugs in production.
In my opinion, useful tests should be able to survive a refactor. That is the only sane way I've ever done refactoring.
If I'm doing a large refactor on a project and there are no tests, or if the tests will not pass after the refactor, the first thing I do is write tests at a level that will pass both before and after refactoring.
Rewriting tests during refactoring doesn't protect against regressions, in my experience.
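The "tests that survive a refactor" idea can be sketched as follows (a minimal illustration with a hypothetical `slugify` function): the check pins down observable behaviour only, so both the original implementation and a later refactor pass the same test unchanged.

```typescript
// Original implementation.
function slugifyV1(title: string): string {
  return title.toLowerCase().split(/\s+/).filter(Boolean).join("-");
}

// A later refactor of the internals; the observable behaviour is identical.
function slugifyV2(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// Behaviour-level check: written once, passes both before and after
// the refactor, because it never inspects implementation details.
function behavesAsExpected(slugify: (s: string) => string): boolean {
  return slugify("Hello  World") === "hello-world"
    && slugify("already-fine") === "already-fine";
}
```

A test coupled to internals (spying on `split` calls, say) would have to be rewritten alongside the refactor, and so could no longer catch a regression introduced by it.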
pydry|3 years ago
And if your tests aren't catching those bugs, and require extra maintenance to go green again, you are doing them wrong.
lucasyvas|3 years ago
I think this is just one of those cases where there is a context-sensitive strategy to testing. It depends completely on the cleanliness of your code and experience working with it.
oxff|3 years ago
Then you get to use a fuzzer and `Arbitrary` for what is basically a coverage-guided property-based test.
But it's hard to keep that idea in mind at all times while you're writing.
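The comment above refers to Rust's `Arbitrary` trait feeding fuzzer-generated bytes into structured inputs (as cargo-fuzz does). A hand-rolled stand-in for the same idea, in TypeScript with hypothetical names and plain random generation instead of a coverage-guided fuzzer, looks like this: state a property ("decode inverts encode") and check it over many generated inputs rather than a few handwritten cases.

```typescript
function encode(s: string): string { return Buffer.from(s, "utf8").toString("base64"); }
function decode(s: string): string { return Buffer.from(s, "base64").toString("utf8"); }

// Generate a random printable-ASCII string, the stand-in for Arbitrary.
function randomAscii(maxLen: number): string {
  const len = Math.floor(Math.random() * maxLen);
  let out = "";
  for (let i = 0; i < len; i++) {
    out += String.fromCharCode(32 + Math.floor(Math.random() * 95));
  }
  return out;
}

// Property: decoding an encoded string yields the original, for any input.
function checkRoundTrip(iterations: number): boolean {
  for (let i = 0; i < iterations; i++) {
    const input = randomAscii(64);
    if (decode(encode(input)) !== input) return false; // counterexample found
  }
  return true;
}
```

A real coverage-guided setup differs in that the fuzzer mutates inputs toward unexplored branches instead of sampling uniformly, which is what makes it so much better at finding edge cases.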