item 30942020

Unit Testing is Overrated (2020)

236 points | ivanvas | 4 years ago | tyrrrz.me

228 comments

[+] me_me_mu_mu|4 years ago|reply
The team I'm on obsesses over 100% unit test coverage, which means it takes 3-5x as long to ship a feature while engineers write tests that:

1) Are almost always better off as part of an integration test

2) Offer no real ROI, because the framework usually handles most of these issues gracefully to begin with, so it's just a few hundred lines of cute code trying to mock out shit (or better yet, getting multiple people in a Zoom and wasting their time trying to test 4 lines of code - WHICH THE FKIN LIBRARY/FRAMEWORK ALREADY HANDLES)

3) Are complete garbage. They write the code first and then add tests. That's fine if these were integration tests, but they run their little commands to print out a report that says 92% test coverage. Oh no No nO nO NO No meme-1.gif.jpeg. We have incidents regularly compared to my previous team, which, granted, was all Senior/Staff engineers, and we didn't spend our time chasing dumbass metrics. If you want 100% unit test coverage, just do TDD and not this half-assed yolo cowboy approach. Or just write simple, single-responsibility code so it can be tested easily. But nah, you know how it is.

These people are annoying as hell to work with, and unfortunately I work at a bigger company so this kind of BS is normal as people try to chase promotions instead of actually shipping quality products. I personally wouldn't hire people like this, nor keep them around this long.

[+] avl999|4 years ago|reply
Does your team work on a codebase that uses a dynamically typed language? If so then the 100% coverage requirement is perfectly reasonable and arguably the only sane way to develop anything more than a tiny project. If a statically typed language is being used then sure, that requirement is counterproductive but if a team uses a dynamically typed language I don't see a way to have any reasonable amount of confidence that the code is even syntactically and semantically valid without 100% coverage. The 100% coverage is just a tax that you need to pay in dynamically typed codebases (hence my preference to use statically typed language for any reasonably sized work related project).

For 3): TDD has nothing to do with test coverage or aiming for high coverage. The fact that TDD evangelists have tried to couple "good testing" with TDD is a huge scam. I don't care when you write your tests as long as they get written before the code is merged in. TDD is good for parts of the code which are very pure or where you know exactly what you are doing, but if what you are doing is more exploratory in nature, or is not pure and has lots of external dependencies, TDD is simply not practical for a lot of people.

[+] stouset|4 years ago|reply
Also, this kind of mentality frequently results in what I consider negative-value tests. A test suite is supposed to do two things:

  a) identify bugs by failing on incorrect behavior, and 
  b) enable refactoring by continuing to pass when underlying code doesn't change behavior
Focusing on 100% unit test coverage all the time—particularly in code that isn't doing simple, single-responsibility stuff—often results (in my experience) in tests that overly mock other parts of the system. The net result is that you end up writing tests that simply check that "the code was written the way it is currently written". Tests like this fail on the first count because they fundamentally cannot detect incorrect behavior, and they fail on the second count because a non-trivial refactor inevitably changes the implementation even if the resulting behavior stays the same.

Not that mocks aren't useful in some scenarios, but if you find yourself writing entire test suites based on them you're likely creating tests that are negative value. They won't catch undiscovered bugs, and they'll give you bad information when you're refactoring.
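A toy sketch of the difference, using an entirely hypothetical `total_price` function (not from the article):

```python
from unittest.mock import Mock

def total_price(cart, tax_svc):
    subtotal = sum(item["price"] for item in cart)
    return subtotal + tax_svc.tax_for(subtotal)

def test_calls_tax_service_once():
    # Negative-value test: pins *how* the code is written, not what it does.
    tax = Mock()
    tax.tax_for.return_value = 0
    total_price([{"price": 10}], tax)
    tax.tax_for.assert_called_once_with(10)

def test_total_includes_tax():
    # Behavior test: a tiny fake survives any refactor that keeps the contract.
    class FlatTax:
        def tax_for(self, subtotal):
            return subtotal * 0.1
    assert total_price([{"price": 10}], FlatTax()) == 11.0

test_calls_tax_service_once()
test_total_includes_tax()
```

The first test passes today and breaks the moment you change how the tax call is made, even if the result is identical; the second only breaks when the actual behavior changes.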

[+] ravenstine|4 years ago|reply
That's the thing that pisses me off about how testing and the periphery around testing (like coverage) are commonly treated. They're seen as being virtuous in and of themselves and no one wants to measure whether they are counterproductive. Even if one took that initiative, good luck getting anyone to care.

Sure, I suppose one could say that any test is better than no tests, but that also dismisses the question of whether testing can be done better.

Code coverage is the worst. I've worked with codebases with various degrees of code coverage, and I've never seen a correlation between coverage and the number of bugs encountered. There may be an asymptotic nature to how much coverage is needed, but I would bet the threshold is low in a general sense. Once you've reached even 25% coverage, it's really dubious whether more coverage than that is actually telling you anything.

Of course, we could measure success by number of bugs and company revenue, but no, we can't do that; those could make people look bad.

[+] gizmo686|4 years ago|reply
I think this approach comes out of highly dynamic languages like python.

If a typo in a variable name doesn't cause any problems until it gets evaluated, then you really need to execute every piece of the code to be sure that you don't have incredibly simple errors.

If you are using a language with even a primitive static type system, you can catch a lot of errors at compile time (or when the interpreter loads the code), which provides most of the benefits of 100% coverage.
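A two-line illustration of the failure mode, with a hypothetical function:

```python
def greet(name):
    if name:
        return f"Hello, {name}"
    # Typo ('nmae'): Python happily loads this file; the error only
    # exists once this branch actually executes.
    return f"Hello, {nmae}"

assert greet("Ada") == "Hello, Ada"  # passes while the typo hides
try:
    greet("")  # only a test covering this branch surfaces the NameError
except NameError:
    pass
```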

[+] ajmurmann|4 years ago|reply
IMO it's a huge problem that TDD and unit testing get conflated way too often. TDD just means you write the test first and don't write code that isn't required to get the tests to pass. It does NOT tell you what tests you should write. I for one am a huge believer in outside-in TDD. This would have you start with an acceptance test that uses the product as a user would; it might exercise the feature or reproduce the bug. It's for you to decide what the right abstraction level is. I typically end up testing edge cases either one level down in integration tests or in unit tests, but it depends and there are no hard rules. It's all a trade-off. What's definitely wrong to me is relying solely on unit tests for your TDD; that's a disaster waiting to happen. TDD is about fast feedback loops and quality, not about proxy metrics like coverage from unit tests. As others have pointed out, depending on your language you also might need more or fewer tests. When I write Ruby or plain JS, I'll write a lot, a lot of tests on different levels. When I'm writing Rust, I write very few, and those that I do write are focused on complex business logic (granted, all Rust code I've written was for toy projects or puzzles where "business" logic might be the wrong term).

Edit: we've also corrupted what "unit" is referring to. AFAIK it originally referred to a unit of functionality. Many now take it to refer to a single class.

[+] jrockway|4 years ago|reply
100% test coverage is really the starting point for testing. Once all the lines of code you've typed in actually run before they're delivered to the customer, then you can start adding the real test scenarios!
[+] rbanffy|4 years ago|reply
You are forgetting one important part: unit tests serve as an always updated, always tested documentation of how you use the functionalities of your software.

I agree with your point on 3 - you should use the tests to start figuring out how your unit is going to be used. If you write code first, you frequently end up with untestable code that you need to mock the universe to get it running in artificial conditions. I don't recommend writing your tests first unless it's a trivial case, but you should at least have the unit calls in tests before you start coding.

[+] vmception|4 years ago|reply
I like this post, because I avoided unit tests for so long lol. After finally writing them, I noticed that I would at least format my code better, and this had other benefits: it caught cases I'd missed, made me start thinking about edge cases further in advance, and made the code easier for others to read.

I think the primary point is made

> Or just write simple, single responsibility code so it can be tested easily

there it is.

[+] danielvaughn|4 years ago|reply
This describes my experience with unit testing perfectly.

What I try to do is just write small functions that feel like they would be well suited for a unit test. I find that makes them much easier to read, and reason about. I don’t even bother writing tests.

[+] _glass|4 years ago|reply
Unit tests are still underrated in my experience. It pains me when I see developers testing their functionality end to end, manually: discover a formatting error in their input, fix it, then do it all again. When I was working in the office I could see that easily 90% of many developers' time was spent like this. Then when I showed that a unit test could have discovered the error, people were still in denial, insisting it only worked for this one small error. But software development is a lot about nitty gritty details. And unit testing fixes this.
[+] josefx|4 years ago|reply
> It pains me when I see developers testing their functionality end to end, manually

On the other hand I know developers that only unit test. Hundreds of lines of mock code, dozens of tests. No documentation, no sample configuration for the final application and dozens of data races because the unit tests don't cover multi threading. But hey an unusable buggy mess is a net positive as long as it makes the test coverage statistic happy and that is one of those things reported to management.

While a bit exaggerated, one of the downsides of unit tests is that they quickly become part of a metric without regard for when they add value.

[+] capableweb|4 years ago|reply
Many developers seem to lack a focus on automation in the most basic sense of the word as well. While consulting, I see lots of backend developers constantly making a change, waiting for the server to reload with the new changes, switching to Postman/cURL/whatever, firing off their request, and seeing if it's accurate, rather than just having the request handler as a function that is under test with assertions, and having the tests re-run on each change. So much time during a developer's day is spent on sheer repetition, and they don't seem to care.
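The alternative looks something like this (hypothetical handler; framework wiring elided):

```python
def create_user(payload):
    # Plain function: callable from a test runner on every save,
    # no server reload or Postman round-trip needed.
    if not payload.get("email"):
        return {"error": "email required"}, 400
    return {"email": payload["email"]}, 201

assert create_user({}) == ({"error": "email required"}, 400)
assert create_user({"email": "a@example.com"})[1] == 201
```

A file watcher (e.g. pytest-watch) can then re-run these assertions automatically on each change.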
[+] jseban|4 years ago|reply
> Then I showed that a unit test can discover this

Yeah but the catch is that in order for your code to be "testable" you have to rewrite it completely in a way that comes with large sacrifices, making it much more confusing, abstract, larger and much harder to work with, in every way except testing it.

[+] guzik|4 years ago|reply
I wrote my third Unity game with unit test coverage of about 80%. It was the least buggy game of my entire career. It was also the hardest time, as I had to start thinking more about creating testable components rather than the messy spaghetti code I used to write before.
[+] bigDinosaur|4 years ago|reply
Property-based tests have the benefit of being fun to actually think about (as far as testing goes), and they also often expose more bugs than unit tests (which are usually more like sanity checks than explorations of the boundary where your functions break or turn weird).
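A hand-rolled sketch of the idea, with a hypothetical `slug` function (libraries like Hypothesis automate the input generation and shrink failing cases for you):

```python
import random

def slug(s):
    return s.strip().lower().replace(" ", "-")

# Property: slugging twice equals slugging once (idempotence). Random
# inputs probe the boundary instead of a couple of hand-picked cases.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice(" abXY-") for _ in range(random.randint(0, 20)))
    assert slug(slug(s)) == slug(s)
```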
[+] pydry|4 years ago|reply
It pains me when I see developers assiduously write unit tests for every line of their code and every single bug in the code ends up encoded in those unit tests so that they fail when the bug is fixed.

Integration tests need to be the default and people need to learn when to use one or the other.

[+] mrtksn|4 years ago|reply
Unit tests are extremely boring for people who don't particularly like programming but do it anyway as means to an end. It's just another piece of code you are writing instead of releasing the dopamine of the finished product. It's anticlimactic.

Writing the tests first and then trying to satisfy them (test-driven development) can work to an extent, as it divides the large task into small pieces where you get a prize each time, but it also alienates the developer from the larger picture, diminishing their value to the project since they can no longer put their intellectual output into it.

[+] swalsh|4 years ago|reply
Unit testing is a tool; you don't need 100% test coverage. Sometimes it's the right tool for the job, and sometimes it's not. If I have a piece of code, usually an algorithm, that handles a handful of different use cases, where making a change might unexpectedly cause an issue in another use case, I'll unit test the heck out of it. If I have some code with a simple input and output that takes some time to test manually, I'll write a unit test to make development faster.

But I'm not going to add an interface to every concrete class in my project and design literally every component so I can mock it.

Writing software is a business; I can sprinkle TDD around for a high ROI. If I use it EVERYWHERE, the ROI is very little if not negative.

[+] mring33621|4 years ago|reply
Very much agree with "sprinkle TDD around for a high ROI". This requires both developer expertise and trust from stakeholders.

But, VeryBigHugeBankCo requires me to have 70% unit-test coverage for 'new' code, as measured by Sonar or whatever, or else I am simply not able to deploy my change to PROD.

So here I am refactoring a DAO, using strategy pattern and functional interfaces, so that I can cover my DB interactions adequately for that 70% metric.

No, I can't just use H2 DB, as the SQL is Oracle-specific.

Yes, there is very little actual value in the new refactor, except to check the box.

This is how we do...

[+] glenjamin|4 years ago|reply
Some time ago I stopped debating the definition of "unit" in testing, and instead started focusing on whether my tests were fast, reliable, and provided a useful signal about the health of the system when they failed. I've been much happier since then.
[+] 2OEH8eoCRo0|4 years ago|reply
This is what Working Effectively with Legacy Code says:

Here are qualities of good unit tests:

  1. They run fast.

  2. They help us localize problems.
A unit test that takes 1/10th of a second to run is a slow unit test.
[+] stepbeek|4 years ago|reply
This is the sticking point I've found. The definition of "unit" is different depending on who I talk to.
[+] Barrera|4 years ago|reply
> Although the design above is perfectly valid in terms of OOP, neither of these classes are actually unit-testable. Because LocationProvider depends on its own instance of HttpClient and SolarCalculator in turn depends on LocationProvider, it’s impossible to isolate the business logic that may be contained within methods of these classes.

The design of those classes is bad, and the perceived need to reach for mocks (or interfaces) is telling you that.

The next step the author takes is to go for interfaces. That's not what I'd do. Instead, I'd decouple network from calculation with intermediate dumb data structures. The network side produces the data structure and the calculation side consumes it. The network side can further be broken down with a transformation of the API call result into the dumb data structure used for calculations.
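A minimal sketch of that split, with hypothetical names (the real solar math is elided):

```python
from dataclasses import dataclass

@dataclass
class Location:
    # The intermediate "dumb" data structure both sides agree on.
    lat: float
    lon: float

def location_from_api(raw):
    # Network side: translate the API result; no business logic here.
    return Location(lat=raw["latitude"], lon=raw["longitude"])

def solar_noon_offset_minutes(loc):
    # Calculation side: pure and trivially testable. Placeholder for the
    # real solar math: roughly 4 minutes per degree of longitude.
    return loc.lon * 4.0

loc = location_from_api({"latitude": 50.45, "longitude": 30.0})
assert solar_noon_offset_minutes(loc) == 120.0
```

Neither side needs a mock: feed the calculation canned Locations, and test the translation against canned API payloads.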

This idea comes from an old Bob Martin presentation [1]. The whole thing is worth watching to put it into context.

This is the kind of thing that never seems to get discussed in these pieces praising integration tests over unit tests. For even better results, ditch the classes altogether and just use functions. It's quite surprising how badly designs come off the rails with the "everything is a class" model.

[1] https://www.youtube.com/watch?v=WpkDN78P884

[+] hinkley|4 years ago|reply
Which in turn comes from Bertrand Meyer, who probably stole it from somebody else in 1990. Code should act or decide, but not both.
[+] JoeNr76|4 years ago|reply
I stopped reading right here:

"Considering the factors mentioned above, we can reason that unit tests are only useful to verify pure business logic inside of a given function."

That just isn't true, and it undermines the rest of the blog post.

A unit test should test "a unit of functionality" not just a method or a class. Your unit tests also shouldn't be coupled to the implementation of your unit of functionality. If you are making classes or methods public because you want to unit test them, you're doing it wrong.

The exception is maybe those tests you are writing while you're doing the coding. But you don't have to keep them around as they are.

[+] lucasyvas|4 years ago|reply
As a predominantly fullstack web developer, I will only ever voluntarily write unit tests and end to end tests, but avoid in-code integration tests in most circumstances.

This is frankly the easiest and best bang for buck.

Unit tests force you to make your code more flexible, they're fast to write, fast to run. Maintenance depends on how overboard you go with your verifications - try to verify the most important parts.

End to end tests require no elaborate scaffolding of internals and allow you to test it as a real user interacts with it. Generally fast to make, slow to run, and maintenance depends on how disciplined you are with stable identifiers/interaction points.

As for in-code integration tests - I love the idea of them, but they're absolutely miserable 9 times out of 10 due to extremely convoluted processes to "bring up" parts of your application. If you use DI it shouldn't be as bad, but almost nobody does and it becomes a total clusterfuck not worth the maintenance burden.

As an idealist, I want all 3. But most codebases frankly cannot support all 3, so the most value will come from whatever is easiest. I've found that's most often unit and e2e.

[+] tmstieff|4 years ago|reply
The article actually argues the opposite. Developers should move their focus to integration / "real-world" tests. The major summary bullet point being:

"Aim for the highest level of integration while maintaining reasonable speed and cost"

My experience mirrors the author's. In any "real" business application, the unit tests end up mocking so many dependencies that changes become a chore, in many cases causing colleagues to skip certain obvious refactors because the thought of updating 300 unit tests is out of the question. I've found much better success testing at the integration level. And to be clear this means writing a tests inside the same project that run against a database. They should run as part of your build, both locally and in CI. The holy grail is probably writing all your business logic inside pure functions, and then unit testing those, while integration testing the outer layers for happy and error paths. But good luck trying to get your coworkers to think in pure functions.

[+] Beltalowda|4 years ago|reply
> As for in-code integration tests - I love the idea of them, but they're absolutely miserable 9 times out of 10 due to extremely convoluted processes to "bring up" parts of your application.

I've definitely worked on projects like this (actually, I'm working on one now at $dayjob), but I've found that with a little bit of effort you can get this right, and it usually isn't that much effort.

What I often see people do is "oh I need a thingy for this test, and a thaty for this other test" and create it ad-hoc when needed, instead of creating a good convenient API to do all of this, which is then also used in the application itself.

The big advantage of these tests is that they tend to be a lot faster than E2E tests, at least in my experience, and often also easier to reason about once you get it right.

[+] fleventynine|4 years ago|reply
Some observations after using many different testing philosophies on many different teams in the past 20 years:

1. Unit tests for code with no side effects (code that takes well defined input and produces well-defined output) are easy to write, easy to understand, and I've never regretted writing them.

2. I've had good experiences transforming as much error-prone logic as possible into code with no side effects.

3. Integration tests that execute in an environment that is similar to production have proven to be incredibly valuable, especially when refactoring. When possible, I prefer to make technology-stack choices that make such an environment quick to setup and teardown such that these tests can run in seconds.

4. Many bugs occur because team A makes bad assumptions about how team B's system behaves, and they encode these assumptions into their (passing) test cases.

5. Unit tests with lots of mocks should be avoided; their cost-to-benefit ratio is terrible. Sometimes the best solution is to delete these tests. Relying on mocks for a few error-case tests that are hard to reproduce on the real system is ok.

6. If they cannot be avoided, fake implementations should be written by the same team that writes the real implementation. They understand how it actually behaves much better than their users, and are in a good place to reuse critical logic from the real implementation in the fake to make it more realistic.

7. If my project includes firmware running on custom hardware, building an emulator for the hardware that can run on standard computing infrastructure is valuable for writing test cases against.

8. More tests or coverage is not always better. We have a finite amount of time to improve our systems, and it is our duty as engineers to ensure that the benefit we get from adding a specific test-case is higher than using that time to make some other improvement to the project.
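Point 6 might look like this in practice (hypothetical store; the real one would talk to a database):

```python
class FakeUserStore:
    """In-memory stand-in, maintained by the same team that owns the
    real store, so rules like duplicate rejection stay faithful."""

    def __init__(self):
        self._by_email = {}

    def add(self, email, name):
        if email in self._by_email:  # same invariant the real store enforces
            raise ValueError("duplicate email")
        self._by_email[email] = name

    def get(self, email):
        return self._by_email.get(email)

store = FakeUserStore()
store.add("a@example.com", "Ada")
assert store.get("a@example.com") == "Ada"
try:
    store.add("a@example.com", "Bob")  # realistic failure, no mock setup
except ValueError:
    pass
```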

[+] tippytippytango|4 years ago|reply
I forgot who, but someone semi-famous said unit tests should be called "programmer tests". That is, unit tests are for the programmers, not for verification. They are used as a tool by the programmer to:

- Tighten up the development loop (manual e2e is often too slow)

- Prove to yourself that the code does what you want

- Communicate intent to other programmers

- Provide documentation for other programmers

- Help programmers debug regressions

I really think this holds up because, really, who except the programmers who can read and comprehend unit tests is going to trust that they verify a product works? The more distant a stakeholder is from a project, the more holistic the testing needs to be in order for that stakeholder to trust it.

[+] gnulinux|4 years ago|reply
This is exactly right. I write unit tests first and foremost to gain confidence that this shit won't blow up on me in prod. I also take extra care of them because they're by far the best documentation of the code. To me these are good enough reasons to write extra, potentially unnecessary tests.
[+] kevwil|4 years ago|reply
"Oooo...yeahhhh, ummm...I'm gonna have to go ahead and sort of disagree with you there." - Office Space

UT is not the panacea and won't do your laundry for you, but well-tested code is better than sloppily- or un-tested code. Always will be, IMHO. UT is the core of that, for languages where a unit of functionality can be executed directly. Integration tests are also critical, but IMHO it's foolish to do one or the other.

"But dependencies ..." yeah that's been solved many times over with mock tools and such, and refactoring to reduce complexity. If your code is too complex to test properly, it's too complex to release. Don't be lazy, do it right.

[+] ozim|4 years ago|reply
But you are writing as if a single person would write the whole project.

I might not be too lazy - but I have to collaborate with 5-10 people.

Well-tested code is also some kind of utopia: no one has seen such code, but everyone claims that once you get there it is all unicorns and roses.

I can write perfect code if I do it alone for my toy project - when I have a team of people that has to agree on stuff, it is not going to happen.

[+] langager|4 years ago|reply
I really enjoyed this article; it spoke to a lot of the lived experience I've had in over-abstracted codebases with tests incredibly coupled to implementation.

I think this title could maybe be rewritten as "Unit Testing that Requires Mocks is Overrated".

Unit testing something that has no (or very simple) dependencies is great. For example:

- some kind of encryption that takes in a string as a key

- serialization that expects the same output as the input passed in

- a transform function that takes in one type and expects a different type out
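Units like those (hypothetical examples) test in a handful of lines with zero setup:

```python
import json

def roundtrip(obj):
    # Serialization: decoding what we encoded should give the input back.
    return json.loads(json.dumps(obj))

def to_fahrenheit(celsius):
    # Transform: one value in, a different one out, no dependencies.
    return celsius * 9 / 5 + 32

assert roundtrip({"a": [1, 2]}) == {"a": [1, 2]}
assert to_fahrenheit(100) == 212.0
```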

As soon as you rely on a DB, file system, etc... you're probably better off with an e2e test.

At the end of the day, it comes down to data contracts. This could be the functions a package exposes or the GQL/REST/gRPC/whatever API. That is the most important thing to not break the behavior of. Write good tests that target the external facing data contracts and treat implementation like a black box and you'll be empowered to do the important things that tests should enable like refactoring, reworking abstractions that may have made sense at one point, but no longer do, and let your codebase evolve as you learn.

Tests that are a barrier to reworking internal abstractions are not good tests.

[+] wikidani|4 years ago|reply
Oh god, this resonates so much with what I'm going through right now. I'm on a team that rotates members pretty much every 6 months, and I've been put in charge of writing unit tests for a series of repositories that I have never seen in my life. I must also add that I had zero experience with unit testing beyond knowing the concept when I was asked to start. Now, I wouldn't be too bothered, because it was a sort of learning experience, but what was supposed to be a two-week task at most has been dragging on for a couple of months because they keep adding APIs and other backends, which further complicates things. And of course the code isn't really unit-testable, so I have to modify code I've never seen, written by people who aren't on the team anymore, just to make the tests work.

All of this to achieve a very arbitrary 80% coverage that's required by business on a few REST APIs that aren't even ours!! And don't get me wrong, I get the importance of testing, but the emphasis on unit testing these days seems ridiculous.

[+] rectang|4 years ago|reply
> I'm on a team that rotates members pretty much every 6 months and have been put in charge of writing unit tests for a series of repositories that I have never seen in my life. I must also add that I had 0 experience with unit testing beyond knowing what the concept is when asked to start testing.

I don't think the concept of "unit testing" is the main problem here.

[+] magicalhippo|4 years ago|reply
> I'm on a team that rotates members pretty much every 6 months

What kind of software do you make?

Where I am, I was still very much learning after 6 months, and it wasn't until a year or so in that I became really effective.

[+] dhzhzjsbevs|4 years ago|reply
Hot take.

Unit testing is not overrated. If you feel this way, you likely just suck at optimising your suite for high-RoI tests, or you chase some stupid metric like code coverage and then get mad when you find out you wasted a bunch of time writing worthless tests.

https://youtu.be/z9quxZsLcfo

[+] adhesive_wombat|4 years ago|reply
There's nothing that cements the value of units tests more, for me, than surfacing bugs in new code almost immediately that would otherwise need debugging "in situ" in the application.

Figuring out where your bug is is so much harder when you have to do it through the lens of other code, whereas in your unit test, you can just see "oh, I'm returning foo.X, not foo.Y".

It also makes sure you actually can construct your object at all without dragging in a dependency on the entire system. Code without tests tends to accrete things that can only be set up in a very long and complex process. This is both hard to reason about (because your system can only be thought about as an ensemble) and fragile when the system changes: you now have to unpick the dependencies to allow your "SendEmail" function to not somehow depend on an interface to some SPI hardware!

But there's certainly value in not spending hours testing obvious code to death: a getter is almost certainly fine, and even if it isn't, the first "real" test that uses it will fail if you're returning the wrong thing. But if, down the line, you do find a bug in it, then something was probably not that obvious!

[+] dullcrisp|4 years ago|reply
The first question I’d ask if I were writing unit tests for their first example is why a solar calculator containing mathematical business logic should depend on a location provider rather than just accepting a location as a method parameter.

Unit tests help you to design your classes based on use cases—if a simple class is hard to test you should think about whether it might have a simpler design. If you have a brute forced design and each of your classes have dozens of dependencies and your tests require pages of setup and are hard to reason about and are tightly coupled to the implementation then the tests might not be providing any benefit at all.

[+] calderwoodra|4 years ago|reply
Dogmatic unit testing is overrated.

For backend testing, I've had amazing success following a few simple principles:

1. Abstract and mock code you don't control that requires network calls.

2. Use frameworks that support database abstraction out of the box.

3. Test end-to-end functionality. Every test should basically be getting your DB into a specific state, then hitting an API endpoint and validating the output and/or state change.

It's not that hard, you can get decent and worthwhile code coverage for not much work, and even if you only write a few tests with low coverage, you can easily add regression tests later if you notice a bug.

[+] rybosworld|4 years ago|reply
The biggest issue in my experience is trying to target metrics. E.g. some % of coverage.

If you encourage developers to write tests with the mindset that "these should make my life easier", you'll get better and more maintainable tests.

If you encourage developers to write tests "just because" you will get very useless tests.

[+] hatch_q|4 years ago|reply
Maybe, in a type-safe language like C#. But in a dynamic language, we rely on unit tests to catch obvious typos and regressions.
[+] codr7|4 years ago|reply
I'm wondering if that's part of the problem, the fact that unit testing started out in Smalltalk. It does make a lot more sense in a dynamic language, because any line that's not touched is a potential typo.
[+] Veuxdo|4 years ago|reply
My problems with unit testing:

- There's no consensus on what a "unit" is.

- There's no consensus on how much coverage is needed.

- There's no consensus on what should be mocked.

- There's no consensus on if private methods should be tested independently, even though they obviously shouldn't.

- It violates "don't repeat yourself".

- The developers who write the tests usually write the code. Most bugs occur when developers overlook a use case. Do the math.

[+] injidup|4 years ago|reply
The first example is such a straw man. The LocationProvider class has no business being a class. It should just be a function. Problems then go away.
[+] ensiferum|4 years ago|reply
And the sunset calculator doesn't need a location provider. All it needs are location coordinates which can be given to the calculate function.

In fact, the whole thing then becomes a standalone function which takes a Location and other relevant parameters and returns the computed sunrise/sunset values. Pure function, and super easy to unit test.

If you need a lot of mocks for your unit tests, then maybe you need to reconsider the API and class design. Removing needless coupling helps testability by removing the need for mocks in the first place.

[+] gautamdivgi|4 years ago|reply
Unit tests are essential. I've developed some devops code around self-healing where the degrading signals were not reproducible. We knew they occurred because we lost a non-trivial number of nodes a day. A weird set of bugs between Linux, Kubernetes & Docker. The problem was we could recognize the signals and take action. The entire daemon I created to trap and execute on these was built and deployed on unit tests. In fact, I had more lines of unit test code than actual functional code because I had to mock the hell out of what the actual system would look like.

Another situation: where my wife used to work, in medical devices, feature development velocity was a problem because there was a queue for the very limited set of devices (CT scanners, PET scanners, etc.) that the team could use for testing. Debugging was very hard, and fixing bugs was hard because to debug you needed a device. With unit testing and mocks they made a lot of the contention on the devices go away.

Write your unit tests. It will help you and the people coming in after you to maintain the code.