gcp123|11 months ago
I've been on both sides of this war: the test evangelist fighting for coverage and the pragmatist shipping to beat a deadline. After 20+ years in software, the truth is painfully obvious: testing is the greatest productivity hack that everyone keeps "postponing until next sprint."
The author gets the psychology exactly right. We overestimate the initial cost and drastically undervalue the compound returns. What they call "Time Technical Debt" is the perfect description for that sinking feeling when you're working on a mature codebase with spotty test coverage.
The most insightful point is how testing fundamentally changes your design for the better. When you have to make something testable, you're forced to:
- Think about clear interfaces
- Handle edge cases explicitly
- Create clean separation of concerns
- Build proper startup/shutdown sequences
These aren't "testing best practices," they're just good engineering. Testing is simply the pressure that forces you to do it right.
My experience: if your system is hard to test, it's probably hard to reason about, hard to maintain, and hard to extend. The difficulty in testing is a symptom, not the disease.
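As a toy illustration (made-up names, Python): the hard-to-test version of this report builder would read the clock and the database directly, while making it testable pushes the I/O to the edges and leaves a pure core.

```python
from dataclasses import dataclass

@dataclass
class Report:
    total: float
    count: int

def build_report(prices: list[float]) -> Report:
    """Pure core logic: no I/O, no globals, trivially testable."""
    if not prices:                      # edge case handled explicitly
        return Report(total=0.0, count=0)
    return Report(total=sum(prices), count=len(prices))

# The untestable I/O stays in a thin shell around the pure core, e.g.:
# def build_report_from_db(conn): return build_report(fetch_prices(conn))
```

Notice that every bullet above falls out of this shape: the interface is clear (a list in, a Report out), the empty-input case is explicit, and the I/O is separated from the logic.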
At my last company, we built a graph of when outages occurred versus test coverage by service. The correlation was so obvious it became our most effective tool for convincing management to allocate time for testing.
magicalhippo|11 months ago
For stuff like core libraries or say compilers, sure, unit tests are great. But for the levels above I'm leaning towards integration tests first, and then possibly add unit tests if needed.
After all, you can't not have integration tests. No matter how perfect your Lego bricks are, you can still assemble them together the wrong way around.
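A toy sketch of what I mean (hypothetical names): both "bricks" below are individually correct, and the integration test is the only thing that asserts they're assembled in the right order.

```python
def parse(raw: str) -> list[int]:
    """Brick 1: comma-separated string to ints."""
    return [int(x) for x in raw.split(",")]

def total(values: list[int]) -> int:
    """Brick 2: sum a list."""
    return sum(values)

def pipeline(raw: str) -> int:
    # The assembly under test: parse first, then total. Unit tests on
    # parse() and total() alone would never catch a miswired pipeline.
    return total(parse(raw))
```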
mewpmewp2|11 months ago
So shared functions get unit tested, but if there is a non-shared function that only gets triggered a few layers up via a user click, it is better to simulate that click and assert accordingly. The exception is when the input can have tons of different permutations; then a unit test might just be the better fit.
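Rough sketch of the permutation case (hypothetical validator): a parametrized unit test covers the input space far more cheaply than simulating a click per case.

```python
import pytest

def is_valid_quantity(value: str) -> bool:
    # str.isdigit() rejects signs and empty strings for us
    return value.isdigit() and 0 < int(value) <= 100

@pytest.mark.parametrize("value,expected", [
    ("1", True),
    ("100", True),
    ("0", False),     # below range
    ("101", False),   # above range
    ("-5", False),    # sign rejected by isdigit()
    ("abc", False),
    ("", False),
])
def test_is_valid_quantity(value, expected):
    assert is_valid_quantity(value) == expected
```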
jasonpeacock|11 months ago
Unit tests let you change code fearlessly with instant feedback.
Integration tests require basically deploying your app, and when something fails you have to debug the root cause.
If you’re doing a lot of mocking, then your design is probably not good. And only public interfaces should be tested.
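What I mean, as a toy sketch (made-up names): a small in-memory fake behind the public interface, instead of mocking out internals.

```python
class InMemoryStore:
    """Fake standing in for a real database; same interface, no mocks."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class UserService:
    def __init__(self, store):
        self.store = store          # dependency passed in, not patched out
    def register(self, name: str) -> str:
        user_id = f"user:{name.lower()}"
        self.store.put(user_id, {"name": name})
        return user_id
    def lookup(self, user_id: str):
        return self.store.get(user_id)
```

The test then only drives `register` and `lookup`, so swapping the real store for Postgres or Redis later changes nothing in the tests.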
paulryanrogers|11 months ago
One challenge is animation and timing races, though supposedly Playwright can address many of those. Another is that some infrastructure, like GitHub Actions, can be randomly resource-starved, for example causing ChromeDriver to become unresponsive. Automated retrying is one workaround, at the cost of possibly papering over rare race and timing issues.
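The retry workaround can be as simple as this (hypothetical helper; it deliberately prints every failed attempt so flakiness stays visible rather than silently papered over):

```python
import time

def retry(fn, attempts: int = 3, delay: float = 0.0):
    """Call fn(), retrying up to `attempts` times on any exception."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # broad on purpose for this sketch
            last_exc = exc
            print(f"attempt {i + 1}/{attempts} failed: {exc!r}")
            time.sleep(delay)
    raise last_exc
```

I'd wrap only the genuinely infrastructure-flaky step (e.g. driver startup), not whole test suites, so real race conditions still surface.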
Of course unit tests are nice and fast and narrow. But refactors could render a large portion obsolete, and they won't prove things work together as a whole.
soneca|11 months ago
Since whole features frequently get ditched quickly. Sometimes it's even a complete product pivot.
edoceo|11 months ago
The habit is important. If you don't start, after three pivots you'll have a huge mountain of tests for a system nobody understands. Plus all the wasted time manually "testing".
Tests are so critical for the success of the business. It's fiscally irresponsible to skip.
wavemode|11 months ago
Over time you realize that testing truly does not slow down development as much as many people think it does. Maybe devs who just aren't used to testing find it difficult, but after a while it becomes second nature.
The best thing an early startup CTO can do is enforce testing across the board, so people don't just test when they feel like it.
TheCoelacanth|11 months ago
The payback for good testing is very fast, especially once you have set it up for the first feature.
shanemhansen|11 months ago
Even if I was doing a one day hackathon I'd probably have some sort of test feedback loop.
I've dealt with P1 bugs that cost the company 100k/minute and still took the time to write a test for the fix because you really don't have time to get the fix wrong and not find out until it is deployed.
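For illustration only (the numbers, names, and bug are invented): the test for a fix like that pins the exact failing input from the incident, so the fix can't quietly regress on the next deploy.

```python
from decimal import Decimal, ROUND_HALF_UP

def charge_amount(cents: int, tax_rate: str = "0.0875") -> int:
    """Total in cents, rounding half-up.

    (In this invented incident, the bug was binary-float rounding;
    the fix moves the math to Decimal.)
    """
    total = Decimal(cents) * (1 + Decimal(tax_rate))
    return int(total.quantize(Decimal("1"), rounding=ROUND_HALF_UP))

def test_charge_amount_regression():
    # The exact input from the incident: 1999 * 1.0875 = 2173.9125 -> 2174
    assert charge_amount(1999) == 2174
```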
MoreQARespect|11 months ago
When people say this type of thing I consider it to be kind of a code smell that they're testing at too low a level and tightly coupling their tests to their implementation.
It is true that the pain of tests tightly coupled to your implementation can drive you to unwind some of that coupling in the code itself, but it still leaves your tests and your code tightly coupled.
I find the best bang for my buck is tests run at a high enough level that I can refactor a lot, and safely, with a minimum of test changes. Technical debt that is covered by tests doesn't compound at nearly the same rate.
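Toy example of what "high enough level" looks like (made-up `Cart` class): the test touches only the public entry points, so the internal storage can be refactored freely without touching the test.

```python
class Cart:
    def __init__(self):
        self._items: dict[str, int] = {}  # internal detail, free to change
    def add(self, sku: str, qty: int = 1) -> None:
        self._items[sku] = self._items.get(sku, 0) + qty
    def remove(self, sku: str) -> None:
        self._items.pop(sku, None)
    def count(self) -> int:
        return sum(self._items.values())

def test_cart_behavior():
    cart = Cart()
    cart.add("apple")
    cart.add("apple")
    cart.add("pear")
    cart.remove("pear")
    assert cart.count() == 2  # asserts behavior, not storage layout
```

Rewriting `_items` as a list of tuples, or a DB row per item, breaks nothing here; a test asserting on `cart._items` directly would break on every refactor.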
klysm|11 months ago
> - Handle edge cases explicitly
> - Create clean separation of concerns
> - Build proper startup/shutdown sequences
If you do all of these things to start with though, then what's the value proposition?
t-writescode|11 months ago
Increased customer trust because fewer regressions get missed.
p2detar|11 months ago
Software that manages to do all of this is very, very rare. In fact, I can't even think of any that I've seen.
> Handle edge cases explicitly
In practice, though, this is where the bread and butter is.
bdangubic|11 months ago
- thought about clear interfaces?
- created clean separation of concerns?
- built proper startup and shutdown sequences?
:)