top | item 43597528


gcp123 | 11 months ago

This struck a nerve.

I've been on both sides of this war: the test evangelist fighting for coverage and the pragmatist shipping to beat a deadline. After 20+ years in software, the truth is painfully obvious: testing is the greatest productivity hack that everyone keeps "postponing until next sprint."

The author gets the psychology exactly right. We overestimate the initial cost and drastically undervalue the compound returns. What they call "Time Technical Debt" is the perfect description for that sinking feeling when you're working on a mature codebase with spotty test coverage.

The most insightful point is how testing fundamentally changes your design for the better. When you have to make something testable, you're forced to:

- Think about clear interfaces

- Handle edge cases explicitly

- Create clean separation of concerns

- Build proper startup/shutdown sequences

These aren't "testing best practices," they're just good engineering. Testing is simply the pressure that forces you to do it right.

My experience: if your system is hard to test, it's probably hard to reason about, hard to maintain, and hard to extend. The difficulty in testing is a symptom, not the disease.

At my last company, we built a graph of when outages occurred versus test coverage by service. The correlation was so obvious it became our most effective tool for convincing management to allocate time for testing.

discuss

order

magicalhippo|11 months ago

While I agree testing in general is a must, I'm still not sold on unit testing as a general tool.

For stuff like core libraries or say compilers, sure, unit tests are great. But for the levels above I'm leaning towards integration tests first, and then possibly add unit tests if needed.

After all, you can't not have integration tests. No matter how perfect your Lego bricks are, you can still assemble them the wrong way around.

mewpmewp2|11 months ago

I tend to think that the most valuable tests are on the exact level of when something gets used by multiple consumers.

So shared functions get unit tested, but if there is a non-shared function that gets triggered only from a few layers up via a user click, it is better to simulate that click and assert accordingly. The exception is when the input can have tons of different permutations, in which case a unit test might be the better fit.

umvi|11 months ago

Agree - unit tests are best for pure functions and the like. If you are having to do a ton of mocking and injection in order to unit test something, it's probably a sign that black-box testing might be higher value.
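A toy illustration of that point (function names invented here): a pure function needs no scaffolding to test, while the mock-heavy alternative described in the comments hints the boundary is in the wrong place.

```python
# Pure function: trivial to unit test, no mocks or injection needed.
def apply_discount(price_cents: int, percent: float) -> int:
    """Return the discounted price, rounded down to whole cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return int(price_cents * (100 - percent) / 100)

# Tests are plain input/output checks.
assert apply_discount(1000, 10) == 900
assert apply_discount(999, 0) == 999
assert apply_discount(500, 100) == 0

# Contrast: if this arithmetic lived inside something like
# checkout(db, payment_gateway, user), testing it would require
# mocking the database and gateway just to reach the calculation,
# a hint that the pure part should be factored out as above.
```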

jasonpeacock|11 months ago

Integration tests are slow/expensive to run compared to unit tests and reduce your iteration speed.

Unit tests let you change code fearlessly with instant feedback.

Integration tests require basically deploying your app, and when something fails you have to debug the root cause.

If you’re doing a lot of mocking, then your design is not good. And only public interfaces should have tests.

roland35|11 months ago

I think in general tests should be where errors and mistakes are more likely to occur. Different code bases could be different! Hard core math libraries are different than a web app with various integrations.

paulryanrogers|11 months ago

IME end-to-end tests in a browser really help with services that have a lot of parts to integrate, but damn they are hard to make reliable.

One challenge is animation and timing races; supposedly Playwright can address many of those. Another is that some infrastructure, like GitHub Actions, can be randomly resource-starved, causing e.g. the Chrome driver to become unresponsive. Automated retrying is one workaround, at the cost of possibly papering over rare race and timing issues.

Of course unit tests are nice and fast and narrow. But refactors could render a large portion obsolete, and they won't prove things work together as a whole.

soneca|11 months ago

Do you think early startup product might be an exception?

Since whole features can be ditched quickly and frequently. Sometimes there's even a complete product pivot.

edoceo|11 months ago

No excuses. I've done this like 12 times as startup CTO.

The habit is important. If you don't start, after three pivots you'll have a huge mountain of tests for a system nobody understands. Plus all the wasted time manually "testing".

Tests are so critical for the success of the business. It's fiscally irresponsible to skip.

wavemode|11 months ago

No, I don't think there is any exception. If you intend to maintain a piece of software for any length of time (i.e. it's not just a throwaway demo), you should write tests for it.

Over time you realize that testing truly does not slow down development as much as many people think it does. Maybe devs who just aren't used to testing find it difficult, but after a while it becomes second nature.

The best thing an early startup CTO can do is enforce testing across the board, so people don't just test when they feel like it.

TheCoelacanth|11 months ago

Only if your runway is measured in days rather than weeks or months.

The payback for good testing is very fast, especially once you have set it up for the first feature.

leptons|11 months ago

I think an early start-up is the exception, but my boss didn't. We were still in "stealth mode" and the CTO wanted 100% test coverage on our nodejs-based social website from the very start. Six months in, we didn't have all that much built, because they couldn't really decide what they wanted us to build. So we built the most well-tested email sign-up form that ever existed, and a bunch of other user-account-related stuff too.

Then the company completely pivoted at around six months, and I was suddenly doing PHP programming (which I hate), hacking the code of some ad server and bolting it onto a mobile app (not what we set out to build). By that point the requirement for tests had been forgotten, because the company was desperate to find any viable path forward. It dissolved about 3 months after that, and now those tests seem pretty pointless.

shanemhansen|11 months ago

For me personally tests have a positive ROI within hours.

Even if I was doing a one day hackathon I'd probably have some sort of test feedback loop.

I've dealt with P1 bugs that cost the company 100k/minute and still took the time to write a test for the fix because you really don't have time to get the fix wrong and not find out until it is deployed.

MoreQARespect|11 months ago

>The most insightful point is how testing fundamentally changes your design for the better. When you have to make something testable...

When people say this type of thing I consider it to be kind of a code smell that they're testing at too low a level and tightly coupling their tests to their implementation.

It is true that the pain of tightly coupling your tests to your implementation can drive you to unwind some of that coupling, but that still leaves your tests and your code tightly coupled.

I find the best bang for my buck are tests run at a high enough level that I can refactor a lot, safely, with a minimum of test changes. Technical debt that is covered by tests doesn't compound at nearly the same rate.
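One way to read that advice (a sketch with invented names, not the commenter's actual code): test only through the module's public entry point, so internals can be rewritten freely without touching the tests.

```python
# Public interface: what consumers call, and what the tests target.
def normalize_username(raw: str) -> str:
    return _strip_and_fold(raw)

# Private helper: free to be renamed, split, or inlined during a
# refactor without breaking the tests below.
def _strip_and_fold(raw: str) -> str:
    return raw.strip().casefold()

# Tests exercise only the public function, so a rewrite of the
# helper leaves them untouched.
assert normalize_username("  Alice ") == "alice"
assert normalize_username("BOB") == "bob"
```

The same idea scales up: the higher the tested interface sits, the more implementation you can churn underneath it for free.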

klysm|11 months ago

- Think about clear interfaces

- Handle edge cases explicitly

- Create clean separation of concerns

- Build proper startup/shutdown sequences

If you do all of these things to start with though, then what's the value proposition?

t-writescode|11 months ago

Refactor simplicity, regression reduction, reduced time to “next launch” because manual validation period is shorter.

Increased customer trust because fewer regressions get missed.

p2detar|11 months ago

> Create clean separation of concerns

Software that manages to do this is very, very rare. In fact, I can't even think of any that I've seen.

> Handle edge cases explicitly

In practice, though, this is where the bread and butter is.

bdangubic|11 months ago

how do you know that you did:

- thought about clear interfaces?

- created clean separation of concerns?

- built proper startup and shutdown sequences?

:)

Fire-Dragon-DoL|11 months ago

I think testing has a cost, and manual QA has another cost. Depending on the ongoing cost (maintenance or manual QA), the upfront cost, and the risk of refactors (which need tests), there are moments where one option is better than the other. It changes, so it is worth reconsidering.

hobs|11 months ago

I don't think they replace each other; they are complementary. They do overlap, though, which is why people think you can trade them off: a test is not creative, and a human is not an automaton.

dougdonohoe|10 months ago

Author of the original post here. Thanks for such thoughtful comments and reactions. I'm glad I "struck a nerve" for at least one person!