I mean how many of you stick with this test driven development practice consistently? Can you describe the practical benefit? Do you happen to rewrite the tests completely while doing the implementation? When does this approach work for you and when did it fail you?
No. Tests are like any other code. They incur technical debt and bugs at the same rate as other code. They also introduce friction to the development process. As your test suite grows, your dev process often begins to slow down unless you apply additional work to grease the wheels, which is yet another often unmeasured cost of testing in this fashion.
So, in short, I view tests as a super useful, but over-applied tool. I want my tests to deliver high enough value to warrant their ongoing maintenance and costs. That means I don't write nearly as many tests as I used to (in my own projects), and far fewer than my peers.
Where I work, tests are practically mandated for everything, and a full CI run takes hours, even when distributed across 20 machines. Anecdotally, I've worked for companies that test super heavily, and I've worked for companies that had no automated tests at all. (They tested manually before releases.) The ratio of production issues across all of my jobs is roughly flat.
This issue tends to trigger people. It's like religion or global warming or any other hot-button issue. It would be interesting to try to come up with some statistical analysis of the costs / benefit of automated tests.
Thankfully we don't have to. Hillel Wayne links to a few of the studies that have been done on TDD [0]. It doesn't have a conclusive effect on error rates in software. While I do write tests before I write code (more so in a dynamic language without a strong, static type system), it appears that there isn't any correlation with a reduced number of bugs.
But I still do it. And I think that's because, while I may only prevent a few obvious errors with the practice, the benefit I get is from the informal, automated specifications of the units of a module that I'm building. I also get the benefit of a continuous refactor cycle. Less experienced developers wonder how I write clean, simple code: I refactor, a lot, all the time, because I have tests checking my code.
If it's functional correctness we're after, TDD is only one small piece of the puzzle. A strong, formal specification will go a lot further toward ensuring correctness than fifty more unit tests.
I take the perspective that tests - and particularly unit tests - are a living specification for the software that you know cannot be out of date. If you write tests in a way that you understand the business purpose of the system, you are providing a significant part of the documentation while also providing a regression suite that is automated.
There are other ways to handle this. You can have a design document that is continually updated. Some teams can do this well. You can use literate programming a la Knuth. (Many code bases have abbreviated forms of this.) You can assume that the small number of developers who have been around for these decisions will never leave the company and forgo all of it. (I do not recommend this.)
So, what is your preferred alternative to unit tests as a specification? (And if your set of unit tests don't provide clarity to the source code, that may be a source of the frustration.)
I'm wary of "testing theater" (in the spirit of "security theater"); but I've come to think of testing as similar to two-factor authentication: it doesn't guarantee correctness, but it does reduce the likelihood of bugs and regressions, especially during refactors.
I think the other benefit of testability has nothing to do with the tests themselves, but rather the discipline of writing testable code: in general, writing code that is easy to test will tend to be higher-quality and easier to reason about.
The thing I'm not fully sold on is mocking, which ends up being a huge timesink, and may or may not improve reliability since you're testing against a fake system and not the real thing. I vastly prefer a combination of small functional/unit tests, and E2E integration tests in a real environment (cypress/etc); the uncanny valley in between has a poor ROI IMO.
This is a good point. Just like with anything in engineering, understanding the process is more important than just following some steps because you think you should.
When developing software there are many steps before testing even occurs to catch problems, and the earlier you catch problems the better. Coding standards, having requirements, and peer reviews all are important too.
I find tests useful for having a "checklist" of things to do before releasing a new build. In robotics, automated tests are especially helpful since there is a lot of code which only runs in certain physical conditions that are hard to recreate manually (e.g. in a low-battery condition the robot should do this behavior). But just having the checklist is more important than how you execute it.
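To make the low-battery point concrete, here's a minimal Python sketch (the class and behavior names are invented for illustration, not from any real robotics stack) of testing a condition-triggered behavior without needing the physical condition:

```python
class Robot:
    """Toy sketch: behaviour that only triggers in a physical condition
    (a nearly drained battery) that's hard to reproduce on real hardware."""

    def __init__(self, battery_pct):
        self.battery_pct = battery_pct

    def next_action(self):
        # The branch we care about: head home when the battery is low.
        if self.battery_pct < 15:
            return "return_to_dock"
        return "continue_task"

def test_low_battery_behaviour():
    # No need to drain a real battery to exercise this code path.
    assert Robot(battery_pct=10).next_action() == "return_to_dock"
    assert Robot(battery_pct=80).next_action() == "continue_task"
```

The test is trivially fast and repeatable, which is exactly what the physical scenario isn't.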
Spot on with cost, as with everything you have to be pragmatic.
Tests are great for:
* High risk items (large consequence when it goes wrong)
* Documentation
* Weird unintuitive things
We had a C# project recently that needed to detect changes between a DTO's properties. At implementation time, all the comparisons were done over value types, but if someone added a reference type that didn't properly implement equality, this would silently fail (likely for months). A good case for adding a test that ensures the change detection works for all properties.
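A rough sketch of the idea in Python (the original was C#; the DTO and its fields here are made up): iterate over every declared field and require that mutating it is detected, so a newly added field with broken equality fails the test instead of failing silently.

```python
from dataclasses import dataclass, fields

@dataclass
class CustomerDto:
    # Hypothetical DTO; the field names are illustrative.
    name: str
    age: int

def changed_fields(old, new):
    """Names of the fields whose values differ between two DTO instances."""
    return [f.name for f in fields(old)
            if getattr(old, f.name) != getattr(new, f.name)]

def test_change_detection_covers_every_field():
    # Mutate each field in turn and require the change to be noticed.
    base = CustomerDto(name="a", age=1)
    mutated_value = {"name": "b", "age": 2}
    for f in fields(CustomerDto):
        modified = CustomerDto(**{**base.__dict__, f.name: mutated_value[f.name]})
        assert f.name in changed_fields(base, modified)
```

The C# equivalent would reflect over the DTO's properties the same way; the point is that the test enumerates fields dynamically rather than hard-coding them.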
I used to be a TDD zealot. In recent years, I've taken a much more selective approach to test coverage. I typically focus my testing on pieces of code that contain business logic whereas I used to test everything. I've also found automated UI testing is not worth the squeeze and I've had better luck just looking at impacted objects and manually testing those.
I'd be interested to hear if anyone has automated UI testing tools in place that are easier to write test cases for than to just do the manual testing.
Since you don't write as many tests (because tests incur technical debt, after all), that means you're not actually testing all your code branches.
So does this mean you test every single branch manually? Or just don't bother with it at all?
Do you just have a few integration tests, and when they break you spend a good chunk of time figuring out which logical branch broke?
What happens if you make a typo, comment out a piece of code and forget to uncomment it, etc.?
I'd love to write fewer tests but don't know how to do it.
It's weird you bring up global warming when there is scientific consensus that it is not only happening but man made. Doesn't seem like a good analogy for something you want more data on.
There are a couple of circumstances where I often do, though.
The first is when fixing a bug - writing the (red) regression test first forces me to pin down the exact issue and adds confidence that my test works. Committing the red test and the fix to the test in two commits makes the bug and its fix easy to review.
The second is when I'm writing something high risk (particularly from a security standpoint). In this case I want to have a good idea of what I'm building before I start to make sure I've thought through all the cases, so there's less risk of rewriting all the tests later. There's also more benefit to having a thorough test suite, and I find doing that up front forces me to pin down all of the edge cases and think through all the implications before I get bogged down in the "how" too much.
> Committing the red test and the fix to the test in two commits makes the bug and its fix easy to review.
I've done this in the past. Then I started to use `git bisect`, and having a red test somewhere in your commit history is a killer for bisect. So now I tend to include both the test and the bug fix within one commit.
I think there's this myth that TDD is one of the best ways to write software and if you admit you don't do it, you'll be seen as a cowboy and will look stupid. I think the truth is TDD has its pros and cons, and the weight of each pro and con is highly dependent on the project you're doing.
- The uncomfortable truth for some is that not doing any testing at all can be a perfectly fine trade-off, and there are plenty of successful projects that do this.
- Sometimes the statically checked assertions from a strongly typed language are enough.
- Sometimes just integration tests are enough and unit tests aren't likely to catch many bugs.
- For others, going all the way to formal verification makes sense. This has several orders of magnitude higher correctness guarantees along with enormous time costs compared to TDD.
For example, the Linux kernel doesn't use exhaustive unit tests (as far as I know) let alone TDD, and the seL4 kernel has been formally verified, both having been successful in doing what they set out to do.
I notice nobody ever gets looked down on for not going the formal verification route - people need to acknowledge that automated testing takes time and that time could be spent on something else, so you have to weigh up the benefits. Exhaustive tests aren't free especially when you know for your specific project you're unlikely to reap much in the way of benefits long term and you have limited resources.
For example, you're probably (depending on the project) not going to benefit from exhaustive tests for an MVP when you're a solo developer, can keep most of the codebase in your head, the impact of live bugs isn't high, the chance of you building on the code later isn't high and you're likely to drastically change the architecture later.
Are there any statistics on how many developers use TDD? There's a lot of "no" answers in this thread but obviously that's anecdotal.
I've found that unless I have a solid architecture already (such as in a mature product), I end up massively modifying, or even entirely rewriting most of my tests as development goes on, which is a waste of time. Or even worse, I end up avoiding modifications to the architecture because I dread the amount of test rewrites I'll have to do.
I do. TDD gives me such a sense of confidence that, now that I'm used to it, it's hard not to use.
> Can you describe the practical benefit?
Confidence that the code I'm writing does what it's supposed to. With the added benefit that I can easily add more tests if I'm not confident about some behaviors of the feature or easily add a test when a bug shows up.
> Do you happen to rewrite the tests completely while doing the implementation?
Not completely; it depends on how you write your tests. I'm not testing each function individually, I'm testing behaviour, so unless there's a big architectural change or we need to change something drastic, the tests need minimal changes.
> When does this approach work for you and when did it fail you?
It works better on layered architectures, when you can easily test the business logic independently of the framework/glue code. It has failed me for exploratory work; that's the one scenario where I just prefer to write code and manually test it, since I don't know what I want it to do... yet.
Nope. I pretty much always find it to be counterproductive.
Most of programming happens in the exploration phase. That's the real problem solving. You're just trying things and seeing if some api gives you what you want or works as you might expect. You have no idea which functions to call or what classes to use, etc.
If you write the tests before you do the exploration, you're saying you know what you're going to find in that exploration.
Nobody knows the future. You can waste a crazy amount of time pretending you do.
> You're just trying things and seeing if some api gives you what you want or works as you might expect.
I don't do most of my programming this way, because mostly I'm writing new things, not gluing together existing APIs with a tiny amount of simple glue code. But when I do need to characterize existing APIs, I find that unit tests are a really helpful way to do it — especially in languages without REPLs, but even in languages that do have REPLs, because the tests allow me to change things (parameters, auth keys, versions of Somebody Else's Software) and verify that the beliefs I based my code on are still valid.
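A small example of what such a characterization test can look like, using only stdlib behaviour (the "beliefs" pinned here are illustrative, not from the parent's actual project): encode an assumption your code depends on, so an upgrade that changes it fails fast instead of silently.

```python
import json

def test_pin_third_party_behaviour():
    # Our (hypothetical) payload-hashing code assumes sort_keys gives a
    # stable key order; pin that assumption as an executable fact.
    assert json.dumps({"b": 1, "a": 2}, sort_keys=True) == '{"a": 2, "b": 1}'
    # And assume float serialization round-trips exactly.
    assert json.loads(json.dumps(0.1)) == 0.1
```

The same pattern works against a remote API: call it with fixed inputs and assert the properties of the response your code relies on.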
No. And also 'do you write a test for everything?'. Also No.
Tried it, ended up with too many tests. Quelle surprise. There is a time/money/cognitive cost to writing all those tests, they bring some benefit but usually not enough to cover the costs.
I'm also going off the 'architect everything into a million pieces to make unit testing "easier"' approach.
I heard someone say that if you write a test and it never fails, you've wasted your time. I think that's quite an interesting viewpoint.
Everyone's in the confessional booth here admitting dogmatic test-first-test-everything's not so hot in practice, which is nice, but how long until it becomes safe to answer with anything other than some variation of "love testing, it's always great, I love tests, more is better" when asked how you feel about testing in interviews?
>No. And also 'do you write a test for everything?'. Also No.
Same here, and for the same reasons, plus stuff in the backlog that takes higher priority. At least in the finance, gambling, and telecom industries that I've worked in.
Do you consider coverage an effective metric? I've got some code that has a test suite which is effectively a bunch of low-level driver checks, plus a bunch of common example snippets and checks for e.g. empty inputs.
Coverage gives an idea of how many lines of code have been run, but obviously no guarantees of correctness for those specific lines (e.g. you can't detect a double negative).
It's worked well for me so far, since the important parts are (a) the hardware communication works and (b) users can process and output data in a way that is correct. No need to obsessively check the intermediate steps if the output is good.
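The "coverage without correctness" trap is easy to demonstrate with a toy example (names invented): the test below executes every line, so line coverage reports 100%, yet the inverted comparison goes unnoticed because the assertion is too weak.

```python
def can_vote(age):
    # Bug: the comparison is accidentally negated.
    # Intended: eligible at 18 or over.
    return not (age >= 18)

def test_covers_the_line_but_not_the_behaviour():
    # Runs can_vote, so coverage tools count the line as tested,
    # but this assertion can never catch the inverted logic.
    assert isinstance(can_vote(30), bool)
```

A real assertion like `assert can_vote(30)` would fail here immediately, which is the difference between covering a line and testing it.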
Never done this, and don't consider it practical. Code and interfaces (even internal ones) change rapidly for me when I'm starting a new project or adding new major functionality to the point that the tests I'd write at the beginning would become useless pretty quickly.
I also believe that 100% test coverage (or numbers close to that) just isn't a useful goal, and is counterproductive from a maintenance perspective: test code is still code that has to be maintained in and of itself, and if it tests code that has a low risk of errors (or code where, if there are errors, those errors will bubble up to be caught by other kinds of testing), the ROI is too low for me.
After I've settled on interfaces and module boundaries, with a plausibly-working implementation, I'll start writing tests for the code with the highest risk of errors, and then work my way down as time permits. If I need to make large changes in code that doesn't yet have test coverage, and I'm worried about those changes causing regressions, I'll write some tests before making the changes.
That is how I used to work; then I got into finance, and two things are different from the work I did before (web/desktop/app; or further back, there was no 'testing' in the 80s): the software I write now has to be certified/audited to some extent, and I cannot change/repair production software on the fly, which could cost a lot of money for certain bugs. So now I tend to write tests for everything, and that helps a lot.
What "other kinds of testing" do you do instead then? How do you make sure the code is testable by those other tests?
Often people fall back on manual testing, which is often slow, unreliable and incomplete. And certain things might not even be testable if the system hasn't been designed to allow it.
99% of the code I write is test first. It makes my life easier - I always know what to do next and it reduces the amount I need to keep in my head.
TDD done the way many developers do it is a PITA, though. When I write a test it will start off life with zero mocking. I'll hit the db and live APIs. From here I'm iterating on making it work. I only introduce mocking/factories when it's harder work not to. I'll gradually add assertions as I get an idea about what behaviour I want to pin down.
Done this way, using tests just makes life easier: you can start off testing huge chunks of code if that's what you're sketching out, then add more focused tests if that's a faster way to iterate on a particular piece. For me the process is all about faster feedback and getting the computer to automate as much of my workflow as possible.
edit: Kent Beck had a fantastic video series about working this way. I can only find the first 10 minutes now, unfortunately, but it gives you a taste: https://www.youtube.com/watch?v=VVSSga1Olt8
> I mean how many of you stick with this test driven development practice consistently?
I have been doing this for a while now. Practically, it saves me a tonne of time and I am able to ship software confidently.
> Can you describe the practical benefit?
Say a change is made to one section of the (enterprise-level) application, and you missed addressing an associated section. This is easily identified, as your test will FAIL. As the number of features increases, the complexity of the application increases. Tests guide you. They help you ship faster, as you don't need to manually test the whole application again. In manual testing, there's a chance of missing a few cases; if it's automated, such cases are all executed. Moreover, in TDD you only write the code which is necessary to complete the feature. Personally, tests act as a (guided) document for the application.
> Do you happen to rewrite the tests completely while doing the implementation?
Yes, if the current tests don't align with the requirements.
> When does this approach work for you and when did it fail you?
WORK - I wouldn't call it a silver bullet. But I am really grateful/happy to be a developer following TDD. As the codebase grows and new developers are brought in, tests are one of the things that help me ship software.
NOT WORK - for a simple contact-only form (i.e. a fixed requirement with a name, email, textarea field, and an upload file option), I'd rather test it manually than spend time writing tests.
Thanks for your perspective. Did it take a lot of time to develop the required discipline? I mean, defining the interface for a single function is different from defining a set of functions in the context of a test. May I ask: what is your problem domain / field of work?
Cannot agree more.
I've worked for a year on fast-evolving software and we had to refactor things a lot. TDD helped me refactor with confidence and without regressions.
Now I can't live without tests!
I've been writing software professionally for 20 years, and for much of that time I was very skeptical of testing. Even after I started writing tests, it was several more years before I saw the value of writing tests first. I've moved to doing this more and more, especially when doing maintenance or bug fixes on the back-end. I still struggle with writing valuable tests on the front-end, apart from unit tests of easily extracted logic functions, or very basic render tests that ensure a component doesn't blow up when mounted with valid data.
If you write your test after making the code changes, it's easier to have a bug in your test that makes it pass for the wrong reasons. By writing the test first, and progressively, you can be sure that each thing it asserts fails properly if the new code you write doesn't do what is expected.
Sometimes I do write the code first, and then I just stash it and run the new tests to be sure the test fails correctly. Writing the test first is simply a quicker way to accomplish this.
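The red-then-green sequence can be sketched in a few lines of Python (the function and the bug are invented for illustration). The value is in having watched the test fail against the old behaviour before making it pass:

```python
def slugify(title):
    # Green step: the .replace() is the fix. Before it, the function
    # returned title.lower() unchanged, and the test below was red.
    return title.lower().replace(" ", "-")

def test_slugify_replaces_spaces():
    # Written first and watched fail against the old implementation;
    # that failure proves the test actually exercises the behaviour.
    assert slugify("Hello World") == "hello-world"
```

Stashing the fix and re-running, as described above, achieves the same proof when the code was written first.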
Like others have said, when there is a lot of new code (new architectural concerns, etc.), it's not really worth it to write tests until you've sketched things out well enough to know you aren't likely to have major API changes. Still, there is another benefit to writing the tests, or at least defining the specs early on, which is that you are less likely to forget to test a particular invariant. If you've at least got a test file open and can write a description of what the test will be, that can save you from missing an invariant.
Think of tests as insurance that someone working on the code later (including yourself, in the future) doesn't break an invariant because they do not know what they all are. Your tests both state that the invariant is intentional and necessary, and ensure it is not broken.
I tend to write test cases that reproduce bugs first, then fix the bug. Other than that, I don't stick too hard to test-driven development. I did for a while, but you start to get a sense of the sort of design pressure tests create and end up building more modular, testable code from the get-go anyway.
> Can you describe the practical benefit?
For a test case that reproduces a bug, you might find the bug manually. Getting that manual process into a test case is often a chore, but in doing so you'll better understand how the system with the bug failed. Did it call collaborators wrong? Did something unexpected get returned? Etc. In those cases, I think the benefit really is a better understanding of the system.
> Do you happen to rewrite the tests completely while doing the implementation?
A TDD practitioner will probably tell you that you're doing it wrong if you do this. You write the minimum viable test that fails first. It might be something simple like "does the function/method exist". You add to your tests just in time to make the change in the real code.
In new code, I'll usually write high level black box tests once enough code is in place to start doing something useful. I rarely write unit tests except for behavior that is prone to be badly implemented/refactored, or for stuff that's pretty well isolated and that I know I won't touch for a while.
Then as the project evolves, I start adding more high level tests to avoid regressions.
I prefer high level testing of products, they're more useful since you can use them for monitoring as well, if you do it right. I work with typed languages so there's little value in unit tests in most cases.
Sometimes I'll write a test suite "first", but then again only once I have at least written up a client to exercise the system. Which implies I probably decided to stabilize the API at that point.
Like others have said, tests often turn into a huge burden when you're trying to iterate on designs, so early tests tend to cause worse designs in my opinion, since they discourage architectural iterations.
It's a tool like any other and I reach for it when tests will help me write code faster and at a higher level of quality. Which is pretty often with new code.
Also always before a refactor. Document all the existing states and input and output and I can refactor ruthlessly, seeing as soon as I break something.
Tests are also great documentation for how I intend my api to be used. A bunch of examples with input, output, and all the possible exceptions. The first thing I look for when trying to understand a code base are the tests.
When do I not write tests? When I'm in the flow and want to continue cranking out code, especially code that is rapidly changing because as I write I'm re-thinking the solution. Tests will come shortly after I am happy with a first prototype in this case. And they will often inform me what I got wrong in terms of how I would like my api consumed.
When did it fail me? There are cases when it's really difficult to write tests. For example, Jest uses jsdom, which as an emulator has limitations. Sometimes it is worth it to work around these limitations, sometimes not.
Sometimes a dependency is very difficult to mock. And so it's not worth the effort to write the test.
Tests add value, but like anything that adds value, there is a cost and you have to sometimes take a step back and decide how much value you'll get and when the costs have exceeded the value and it's time to abandon that tool.
Almost never. I’m roughing things out first, or iterating the APIs. When the functions, data and interactions seem to stabilize, then I’ll start to put tests in.
Once, I started with tests, but I had to rip up a lot along the way.
It is helpful to ensure testability early on. It might be easier for some devs to figure it out by actually coding up some tests early.
I won’t argue against anyone who is actually productive using hard-core TDD.
I’ve always been highly skeptical of this approach. Often what you’re doing is so clear cut that tests are entirely unneeded. In fact, outside of the most complicated cases, I don’t even use unit tests. I have black box testing that I use to check for regression. My biggest reasoning for this is that test code is effectively another code base to maintain, and as soon as you start changing something it’s legacy code to maintain.
All that being said, I haven’t spent much time on teams with a particularly large group of people working in one project. I think the most has been 4 in one service. The more people working in a code base, the more utility you get from TDD, I believe. It’s just tough to have a solid grasp on everything when it changes rapidly.
TDD works best at the interface where there is the lowest likelihood of API churn.
Writing a test for something like an MP3 ID tag parser is a good case for TDD with unit tests. It’s pretty clear what the interface is, you just need to get the right answer, and you end up with a true unit test.
Doing TDD with a large new greenfield project is harder. Unless you have a track record of getting the architecture right the first time, individual tests will have to be rewritten as you rethink your model, which wastes a lot of energy. Far better is to test right at the outermost boundary of your code that isn’t in question: for example, a command-line invocation of your tool doing some real-world example. These typically turn into integration or end-to-end tests.
I tend to then let unit tests appear in stable (stable as in the design has settled) code as they are needed. For example, a bug report would result in a unit test to exhibit the bug and to put a fixed point on neighboring code, and then in the same commit you can fix the bug. Now you have a unit test too.
One important point to add is that while I reserve the right to claim to be quite good at some parts of my career, I’m kind of a mediocre software engineer, and I think I’m ok with that. The times in my career when I’ve really gotten myself in a bind have been where I’ve assumed my initial design was the right one and built my architecture up piece by piece — with included unit tests — only to find that once I’d built and climbed my monumental construction, I realized all I really needed was a quick wooden ladder to get up to the next level which itself is loaded with all kinds of new problems I hadn’t even thought of.
If you solve each level of a problem by building a beautiful polished work of art at each stage you risk having to throw it away if you made a wrong assumption, and at best, waste a lot of time.
Don’t overthink things. Get something working first. If you need a test to drive that process so be it, but that doesn’t mean it needs to be anything fancy or commit worthy.
I recently started doing this. My project involved using three different services, one of which was internal. I only had API documentation for these services, and because of many reasons there was a delay in obtaining the required API keys, so I was stuck on testing my code. That's when I decided to write unit tests, mock these services wherever I was using them, and start testing my code. There were zero bugs in these integrations later.
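A minimal sketch of that pattern in Python, assuming nothing about the parent's actual services (the client, endpoint, and payload here are all invented): wrap the external service behind an injected dependency, then substitute a `Mock` while the real keys are unavailable.

```python
from unittest.mock import Mock

class BillingClient:
    """Hypothetical wrapper around an external service we had no keys for."""

    def __init__(self, http):
        self.http = http  # anything with a .get(path) -> dict

    def invoice_total(self, invoice_id):
        data = self.http.get(f"/invoices/{invoice_id}")
        return sum(line["amount"] for line in data["lines"])

def test_invoice_total_without_api_keys():
    # Stand in for the real HTTP layer using the documented response shape.
    fake_http = Mock()
    fake_http.get.return_value = {"lines": [{"amount": 10}, {"amount": 5}]}
    assert BillingClient(fake_http).invoice_total("inv-1") == 15
    # Also verify we called the service the way the docs describe.
    fake_http.get.assert_called_once_with("/invoices/inv-1")
```

Once the keys arrive, the same `BillingClient` runs unchanged against the real HTTP layer, which is why the integrations had few surprises.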
While doing this I also found one more benefit, at least for my use case. The backend for user login was simple when I started, but it started growing in a few weeks. Writing test cases saved me from manually logging in with each use case, testing some functionality, then logging out and repeating with other use cases.
Not sure if it is a practical benefit or not, but writing test cases initially also helped me rewrite the way I was configuring Redis for a custom module so that the module can be tested better.
My only issue is that it takes time, and selling this to higher-ups was kind of difficult.
I do sometimes. It depends. I want to do more of it.
Here are cases where I've genuinely found it valuable and enjoyable to write tests ahead of time:
Some things are difficult to test. I've had things that involve a ton of setup, or a configuration with an external system. With tests you can automate that setup and run through a scenario. You can mock external systems. This gives you a way of setting up a scaffold into which your implementation will fall.
Things that involve time are also great for setting up test cases. Imagine some functionality where you do something, and need 3 weeks to pass before something else happens. Testing that by hand is effectively impossible. With test tools, you can fake the passing of time and have confirmation that your code is working well.
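One common way to fake the passing of time is to inject the clock rather than calling it directly; here's a Python sketch (the subscription/trial scenario is invented to match the "3 weeks" example above):

```python
import datetime as dt

class Subscription:
    """Sketch: something should happen three weeks after signup. The clock
    is injected so tests can fake the passage of time."""

    TRIAL = dt.timedelta(weeks=3)

    def __init__(self, signed_up_at, now=dt.datetime.now):
        self.signed_up_at = signed_up_at
        self._now = now  # production uses the real clock by default

    def trial_expired(self):
        return self._now() - self.signed_up_at >= self.TRIAL

def test_trial_expiry_without_waiting_three_weeks():
    start = dt.datetime(2020, 1, 1)
    after = lambda: start + dt.timedelta(weeks=3, seconds=1)
    before = lambda: start + dt.timedelta(days=20)
    assert Subscription(start, now=after).trial_expired()
    assert not Subscription(start, now=before).trial_expired()
```

Libraries like freezegun automate the same trick by patching the clock globally, but plain injection like this needs no dependencies.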
Think about when you are writing some functionality that requires involved logic plus a UI. It makes sense to implement the logic first. But how do you even invoke it without a UI? Write a test case! You can debug it through test runs without needing to invest time in writing a UI first.
Bugs! Something esoteric breaks. I often write a test case named test_this_and_that__jira2987, where 2987 is the ticket number where the issue came up. I write a test case replicating the bug with only the essential conditions. Fixing it is a lot more enjoyable than walking through the replication script by hand. Additionally, it results in a good regression test that makes sure my team does not reintroduce the bug.
I don't write as many tests as I'd like in general (adding tests to a legacy project that has none is a struggle; often worth it, but it needs to be prioritized against other tasks).
I once had to write an integration for a "soap" web service that was... special. Apparently it was implemented in PHP (judging by the URL), by hand (judging by the... "special" features), and likely born as a refactoring of a back-end for a Flash app (judging by the fact that they had a Flash app).
By trial and error (and with the help of the extensive, if not entirely accurate, documentation) via SoapUI and curl, I discovered that it expected the SOAP XML message inside a comment inside an XML SOAP message (which is interesting, as there are some characters that are illegal inside XML comments... and apparently they did parse these nested messages with a real XML library, I'm guessing libxml). I also discovered that the API was sensitive to the order of elements in the inner XML message.
Thankfully I managed to conjure up some valid POST bodies (along with the crazy replies the service provided, needed to test an entire "dialog") and could test against these, as I had to implement half of a broken SOAP library on top of an XML library and raw POST/GET due to the quirks.
At any rate, I don't think I'd ever have gotten that done and working if I couldn't write tests first.
Obviously the proper fix would've been to send a tactical team to hunt down the original developers and just say no to the client...
[+] [-] christophilus|6 years ago|reply
[+] [-] agentultra|6 years ago|reply
If it's functional correctness we're after, TDD is only one small piece of the puzzle. A strong, formal specification will go a lot further toward ensuring correctness than fifty more unit tests.
[0] https://www.hillelwayne.com/talks/what-we-know-we-dont-know/
[+] [-] ebiester|6 years ago|reply
There are other ways to handle this. You can have a design document that is continually updated. Some teams can do this well. You can use literate programming a la Knuth. (Many code bases have abbreviated forms of this.) You can assume that the small number of developers who have been around for these decisions will never leave the company and forgo all of it. (I do not recommend this.)
So, what is your preferred alternative to unit tests as a specification? (And if your set of unit tests don't provide clarity to the source code, that may be a source of the frustration.)
[+] [-] lukifer|6 years ago|reply
I think the other benefit of testability has nothing to do with the tests themselves, but rather the discipline of writing testable code: in general, writing code that is easy to test will tend to be higher-quality and easier to reason about.
The thing I'm not fully sold on is mocking, which ends up being a huge timesink, and may or may not improve reliability since you're testing against a fake system and not the real thing. I vastly prefer a combination of small functional/unit tests, and E2E integration tests in a real environment (cypress/etc); the uncanny valley in between has a poor ROI IMO.
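One concrete version of "code that is easy to test is easier to reason about" is plain dependency injection: pass collaborators in rather than reaching out to globals, and a test can supply a tiny fake instead of a heavyweight mock. A minimal sketch (the gateway and names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Receipt:
    order_id: str
    charged_cents: int

def checkout(order_id: str, amount_cents: int, gateway) -> Receipt:
    """Charge `amount_cents` through `gateway` and return a receipt.

    `gateway` only needs a `charge(order_id, amount_cents) -> bool` method,
    so a test can pass a small fake instead of mocking a real service.
    """
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    if not gateway.charge(order_id, amount_cents):
        raise RuntimeError("charge declined")
    return Receipt(order_id, amount_cents)

class FakeGateway:
    """In-memory stand-in for the payment service; records calls."""
    def __init__(self):
        self.calls = []

    def charge(self, order_id, amount_cents):
        self.calls.append((order_id, amount_cents))
        return True

fake = FakeGateway()
receipt = checkout("order-1", 500, fake)
```

The point isn't the fake itself but that `checkout` never names a concrete gateway, which is the property that makes it easy to test and easy to reason about.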
[+] [-] roland35|6 years ago|reply
When developing software there are many steps before testing even occurs to catch problems, and the earlier you catch problems the better. Coding standards, having requirements, and peer reviews are all important too.
I find tests useful for having a "checklist" of things to do before releasing a new build. In robotics, automated tests are especially helpful since there is a lot of code that only runs under physical conditions which are hard to recreate manually (e.g., in a low-battery condition the robot should exhibit a particular behavior). But having the checklist is more important than how you execute it.
[+] [-] ElatedOwl|6 years ago|reply
Tests are great for:
* High risk items (large consequence when it goes wrong)
* Documentation
* Weird unintuitive things
We had a C# project recently that needed to detect changes to a DTO's properties. At implementation time, all the comparisons were done over value types, but if someone later added a reference type that didn't properly implement equality, the comparison would silently fail (likely for months). A good case for adding a test that ensures change detection works for all properties.
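A rough Python analogue of that test (the DTO and field names are made up): perturb each field in turn and assert the change is detected, so a newly added field that breaks comparison, or that nobody supplied a test value for, fails the build instead of failing silently.

```python
import dataclasses

@dataclasses.dataclass
class OrderDto:
    quantity: int = 1
    note: str = ""
    tags: tuple = ()

def has_changed(old: OrderDto, new: OrderDto) -> bool:
    # Relies on every field taking part in equality sensibly.
    return old != new

def check_all_fields_detectable():
    # One "changed" value per field; adding a field to the DTO without
    # adding an entry here fails loudly, which is exactly the reminder
    # the test is meant to give.
    changed = {"quantity": 2, "note": "x", "tags": ("a",)}
    base = OrderDto()
    for field in dataclasses.fields(OrderDto):
        assert field.name in changed, f"no test value for field {field.name!r}"
        mutated = dataclasses.replace(base, **{field.name: changed[field.name]})
        assert has_changed(base, mutated), f"change to {field.name!r} not detected"

check_all_fields_detectable()
```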
[+] [-] mipmap04|6 years ago|reply
I'd be interested to hear if anyone has automated UI testing tools in place that are easier to write test cases for than to just do the manual testing.
[+] [-] swat535|6 years ago|reply
Since you don't write as many tests, doesn't that mean you're not actually testing all your code branches (tests incur technical debt, after all)? So do you test every single branch manually? Just not bother with it at all? Or do you have a few integration tests that break, leaving you to spend a good chunk of time figuring out which logical branch broke?
What happens if you make a typo, or comment out a piece of code and forget to uncomment it, etc.?
I'd love to write fewer tests, but I don't know how to do it.
[+] [-] voxl|6 years ago|reply
[+] [-] jswizzy|6 years ago|reply
[+] [-] Bootwizard|6 years ago|reply
At least 50% of my last job was writing tests, and the snail's pace of their dev process was the main reason I left.
[+] [-] aazaa|6 years ago|reply
How did refactoring work at that latter company?
[+] [-] dmilicevic|6 years ago|reply
[+] [-] IneffablePigeon|6 years ago|reply
There are a couple of circumstances where I often do, though.
The first is when fixing a bug - writing the (red) regression test first forces me to pin down the exact issue and adds confidence that my test works. Committing the red test and the fix as two separate commits makes the bug and its fix easy to review.
The second is when I'm writing something high risk (particularly from a security standpoint). In this case I want to have a good idea of what I'm building before I start to make sure I've thought through all the cases, so there's less risk of rewriting all the tests later. There's also more benefit to having a thorough test suite, and I find doing that up front forces me to pin down all of the edge cases and think through all the implications before I get bogged down in the "how" too much.
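As a sketch of that red-first workflow (the `slugify` bug and functions are invented for illustration): write the failing assertion before touching the code, watch it fail for the right reason, then fix. Both versions are shown side by side here only to make the before/after concrete.

```python
import re

# Hypothetical bug report: slugify("Hello,  World") returns "hello--world"
# (double hyphen) instead of "hello-world".

def slugify_buggy(title: str) -> str:
    # Replaces each whitespace character separately, so runs of spaces
    # produce runs of hyphens.
    return re.sub(r"\s", "-", title.strip().lower()).replace(",", "")

def slugify_fixed(title: str) -> str:
    # Collapse any run of whitespace into a single hyphen.
    return re.sub(r"\s+", "-", title.strip().lower()).replace(",", "")

def test_regression_double_space():
    # Committed first: against the buggy version this fails (red),
    # pinning down the exact misbehaviour before the fix lands.
    assert slugify_fixed("Hello,  World") == "hello-world"

test_regression_double_space()
```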
[+] [-] Deradon|6 years ago|reply
I've done this in the past. Then I started to use `git bisect` and having a red test somewhere in your commit-history is a killer for bisect. So now I tend to include both, the test and the bug-fix, within one commit.
[+] [-] tomtomtom777|6 years ago|reply
I don't think this is a compelling argument. Normally the test and the fix are looked at together and are already sufficiently separated in the code.
It makes more sense to me to use a single commit so the fix can be described in a single commit message, keeping the history clean.
[+] [-] seanwilson|6 years ago|reply
- The uncomfortable truth for some is that not doing any testing at all can be a perfectly fine trade off and there's plenty of successful projects that do this.
- Sometimes the statically checked assertions from a strongly typed language are enough.
- Sometimes just integration tests are enough and unit tests aren't likely to catch many bugs.
- For others, going all the way to formal verification makes sense. This has several orders of magnitude higher correctness guarantees along with enormous time costs compared to TDD.
For example, the Linux kernel doesn't use exhaustive unit tests (as far as I know) let alone TDD, and the seL4 kernel has been formally verified, both having been successful in doing what they set out to do.
I notice nobody ever gets looked down on for not going the formal verification route - people need to acknowledge that automated testing takes time and that time could be spent on something else, so you have to weigh up the benefits. Exhaustive tests aren't free especially when you know for your specific project you're unlikely to reap much in the way of benefits long term and you have limited resources.
For example, you're probably (depending on the project) not going to benefit from exhaustive tests for an MVP when you're a solo developer, can keep most of the codebase in your head, the impact of live bugs isn't high, the chance of you building on the code later isn't high and you're likely to drastically change the architecture later.
Are there any statistics on how many developers use TDD? There's a lot of "no" answers in this thread but obviously that's anecdotal.
[+] [-] Jimpulse|6 years ago|reply
[+] [-] kstenerud|6 years ago|reply
For new development, no.
I've found that unless I have a solid architecture already (such as in a mature product), I end up massively modifying, or even entirely rewriting most of my tests as development goes on, which is a waste of time. Or even worse, I end up avoiding modifications to the architecture because I dread the amount of test rewrites I'll have to do.
[+] [-] joshschreuder|6 years ago|reply
[+] [-] C0d3r|6 years ago|reply
> Can you describe the practical benefit?
Confidence that the code I'm writing does what it's supposed to. With the added benefit that I can easily add more tests if I'm not confident about some behaviors of the feature or easily add a test when a bug shows up.
> Do you happen to rewrite the tests completely while doing the implementation?
Not completely. It depends on how you write your tests: I'm not testing each function individually, I'm testing behaviour, so unless there's a big architectural change or we need to change something drastic, the tests need only minimal changes.
> When does this approach work for you and when did it fail you?
It works better on layered architectures, when you can easily just test the business logic independently of the framework/glue code. It has failed me for exploratory work, that's the one scenario where I just prefer to write code and manually test it, since I don't know what I want it to do...yet
[+] [-] sloopy543|6 years ago|reply
Most of programming happens in the exploration phase. That's the real problem solving. You're just trying things and seeing if some api gives you what you want or works as you might expect. You have no idea which functions to call or what classes to use, etc.
If you write the tests before you do the exploration, you're saying you know what you're going to find in that exploration.
Nobody knows the future. You can waste a crazy amount of time pretending you do.
[+] [-] kragen|6 years ago|reply
I don't do most of my programming this way, because mostly I'm writing new things, not gluing together existing APIs with a tiny amount of simple glue code. But when I do need to characterize existing APIs, I find that unit tests are a really helpful way to do it — especially in languages without REPLs, but even in languages that do have REPLs, because the tests allow me to change things (parameters, auth keys, versions of Somebody Else's Software) and verify that the beliefs I based my code on are still valid.
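That "verify my beliefs about someone else's API" style can be as small as a handful of assertions against a library you depend on; if an upgrade changes the behaviour, the failing test names exactly which belief broke. For example, against the standard library:

```python
from urllib.parse import urlsplit, parse_qs

def test_beliefs_about_urlsplit():
    # Beliefs this code base relies on, checked against the installed
    # version of the library rather than remembered from documentation.
    parts = urlsplit("https://example.com:8080/a/b?x=1&x=2#frag")
    assert parts.hostname == "example.com"
    assert parts.port == 8080
    assert parts.path == "/a/b"
    # Repeated query keys come back as a list, in order.
    assert parse_qs(parts.query) == {"x": ["1", "2"]}

test_beliefs_about_urlsplit()
```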
[+] [-] JustSomeNobody|6 years ago|reply
Write a POC to learn; then you can write tests first for production.
[+] [-] xkcdfan001|6 years ago|reply
[deleted]
[+] [-] codeulike|6 years ago|reply
Tried it, ended up with too many tests. Quelle surprise. There is a time/money/cognitive cost to writing all those tests, they bring some benefit but usually not enough to cover the costs.
I'm also going off the 'architect everything into a million pieces to make unit testing "easier"' approach.
I heard someone say that if you write a test and it never fails, you've wasted your time. I think that's quite an interesting viewpoint.
Reminded of:
"Do programmers have any specific superstitions?"
"Yeah, but we call them best practices."
https://twitter.com/dbgrandi/status/508329463990734848
[+] [-] shantly|6 years ago|reply
[+] [-] lelima|6 years ago|reply
Same here, and for the same reasons, plus stuff in the backlog that takes higher priority. At least that's been true in the finance, gambling, and telecom industries I've worked in.
[+] [-] joshvm|6 years ago|reply
Coverage gives an idea of how many lines of code have been run, but obviously no guarantee of correctness for those specific lines (e.g. it can't detect a double negative).
It's worked well for me so far, since the important parts are (a) the hardware communication works and (b) users can process and output data in a way that is correct. No need to obsessively check the intermediate steps if the output is good.
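A tiny illustration of why line coverage is a weak signal (the functions are invented for the example): the test below executes every line of the buggy function, so coverage reports 100%, yet the double negative goes unnoticed because the assertion is too weak.

```python
def is_valid(value):
    # Bug: a double negative; this should read `value is not None`,
    # yet the line still "runs" under the test below either way.
    return not (value is not None)

def test_covers_but_asserts_too_little():
    # Executing this gives 100% line coverage of is_valid...
    result = is_valid("some value")
    # ...but the assertion never looks at the inverted logic.
    assert isinstance(result, bool)

test_covers_but_asserts_too_little()
```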
[+] [-] kelnos|6 years ago|reply
I also believe that 100% test coverage (or numbers close to that) just isn't a useful goal, and is counterproductive from a maintenance perspective: test code is still code that has to be maintained in and of itself, and if it tests code that has a low risk of errors (or code where, if there are errors, those errors will bubble up to be caught by other kinds of testing), the ROI is too low for me.
After I've settled on interfaces and module boundaries, with a plausibly-working implementation, I'll start writing tests for the code with the highest risk of errors, and then work my way down as time permits. If I need to make large changes in code that doesn't yet have test coverage, and I'm worried about those changes causing regressions, I'll write some tests before making the changes.
[+] [-] tluyben2|6 years ago|reply
[+] [-] ptx|6 years ago|reply
Often people fall back on manual testing, which is often slow, unreliable and incomplete. And certain things might not even be testable if the system hasn't been designed to allow it.
[+] [-] kpU8efre7r|6 years ago|reply
[+] [-] ollysb|6 years ago|reply
TDD done the way many developers do it is a PITA, though. When I write a test it will start off life with zero mocking. I'll hit the db and live APIs. From here I'm iterating on making it work. I only introduce mocking/factories when it's harder work not to. I'll gradually add assertions as I get an idea about what behaviour I want to pin down.
Done this way using tests is just making life easier, you can start off testing huge chunks of code if that's what you're sketching out, then add more focused tests if that's a faster way to iterate on a particular piece. For me the process is all about faster feedback and getting the computer to automate as much of my workflow as possible.
edit: Kent Beck had a fantastic video series about working this way, I can only find the first 10 mins now unfortunately but it gives you a taste, https://www.youtube.com/watch?v=VVSSga1Olt8.
[+] [-] chynkm|6 years ago|reply
> Can you describe the practical benefit?
Say a change is made to one section of an (enterprise-level) application and you miss an associated section. This is easily identified, as your test will FAIL. As the number of features grows, the complexity of the application grows. Tests guide you. They help you ship faster, because you don't need to manually test the whole application again; manual testing risks missing a few cases, while automation executes them all. Moreover, with TDD you only write the code necessary to complete the feature. Personally, the tests also act as a (guided) document for the application.
> Do you happen to rewrite the tests completely while doing the implementation?
Yes, if the current tests don't align with the requirements.
> When does this approach work for you and when did it fail you?
WORK: I wouldn't call it a silver bullet, but I am really grateful/happy to be a developer following TDD. As the codebase grows and new developers are brought in, tests are one of the things that help me ship software. NOT WORK: for a simple contact-only form (i.e. a fixed requirement with a name, an email, a textarea field, and an upload option), I'd rather test it manually than spend time writing tests.
[+] [-] Nursie|6 years ago|reply
We write extensive unit tests, but mostly after development work. The re-write work you mention is then avoided.
[+] [-] MichaelMoser123|6 years ago|reply
[+] [-] sixonesixo|6 years ago|reply
[+] [-] jeremyjh|6 years ago|reply
If you write your test after making the code changes, it's easier to have a bug in your test that makes it pass for the wrong reasons. By writing the test first, and progressively, you can be sure that each thing it asserts fails properly if the new code you write doesn't do what is expected.
Sometimes I do write the code first, and then I just stash it and run the new tests to be sure the test fails correctly. Writing the test first is simply a quicker way to accomplish this.
Like others have said, when there is a lot of new code (new architectural concerns, etc.) it's not really worth it to write tests until you've sketched things out well enough to know you aren't likely to have major API changes. Still, there is another benefit to writing the tests, or at least defining the specs, early on: you are less likely to forget testing a particular invariant. If you've at least got a test file open and can write a description of what the test will be, that can save you from missing an invariant.
Think of tests as insurance that someone working on the code later (including yourself, in the future) doesn't break an invariant because they do not know what they all are. Your tests both state that the invariant is intentional and necessary, and ensure it is not broken.
[+] [-] chrisguitarguy|6 years ago|reply
> Can you describe the practical benefit?
For a bug, you might first find and reproduce it manually. Getting that manual process into a test case is often a chore, but in doing so you'll better understand how the system with the bug failed. Did it call collaborators wrong? Did something unexpected get returned? Etc. In those cases, I think the benefit really is a better understanding of the system.
> Do you happen to rewrite the tests completely while doing the implementation?
A TDD practitioner will probably tell you that you're doing it wrong if you do this. You write the minimum viable test that fails first. It might be something as simple as "does the function/method exist". You add to your tests just in time to make the change in the real code.
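In practice that first "minimum viable test" can be almost embarrassingly small. A sketch of the progression, with an invented function (in real TDD each test below would be written, and seen to fail, before the code that satisfies it):

```python
def parse_discount_code(code: str):
    # Grown one failing test at a time; returns the percentage or None.
    if not code.startswith("SAVE"):
        return None
    try:
        return int(code[len("SAVE"):])
    except ValueError:
        return None

def test_function_exists():
    # The very first "red" test: it failed simply because
    # parse_discount_code didn't exist yet.
    assert callable(parse_discount_code)

def test_valid_code():
    assert parse_discount_code("SAVE15") == 15

def test_invalid_code():
    assert parse_discount_code("HELLO") is None

test_function_exists()
test_valid_code()
test_invalid_code()
```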
[+] [-] AYBABTME|6 years ago|reply
Then as the project evolves, I start adding more high level tests to avoid regressions.
I prefer high level testing of products, they're more useful since you can use them for monitoring as well, if you do it right. I work with typed languages so there's little value in unit tests in most cases.
Sometimes I'll write a test suite "first", but then again only once I have at least written up a client to exercise the system. Which implies I probably decided to stabilize the API at that point.
Like others have said, tests often turn into a huge burden when you're trying to iterate on designs, so early tests tend to cause worse designs in my opinion, since they discourage architectural iterations.
[+] [-] xkcdfan001|6 years ago|reply
[deleted]
[+] [-] tchaffee|6 years ago|reply
Also always before a refactor. Document all the existing states and input and output and I can refactor ruthlessly, seeing as soon as I break something.
Tests are also great documentation for how I intend my api to be used. A bunch of examples with input, output, and all the possible exceptions. The first thing I look for when trying to understand a code base are the tests.
When do I not write tests? When I'm in the flow and want to continue cranking out code, especially code that is rapidly changing because as I write I'm re-thinking the solution. Tests will come shortly after I am happy with a first prototype in this case. And they will often inform me what I got wrong in terms of how I would like my api consumed.
When did it fail me? There are cases when it's really difficult to write tests. For example, Jest uses jsdom, which as an emulator has limitations. Sometimes it is worth it to work around these limitations, sometimes not.
Sometimes a dependency is very difficult to mock. And so it's not worth the effort to write the test.
Tests add value, but like anything that adds value, there is a cost and you have to sometimes take a step back and decide how much value you'll get and when the costs have exceeded the value and it's time to abandon that tool.
[+] [-] quantified|6 years ago|reply
Once, I started with tests, but I had to rip up a lot along the way.
It is helpful to ensure testability early on. It might be easier for some devs to figure it out by actually coding up some tests early.
I won’t argue against anyone who is actually productive using hard-core TDD.
[+] [-] nscalf|6 years ago|reply
All that being said, I haven’t spent much time on teams with a particularly large group of people working in one project. I think the most has been 4 in one service. The more people working in a code base, the more utility you get from TDD, I believe. It’s just tough to have a solid grasp on everything when it changes rapidly.
[+] [-] gorgoiler|6 years ago|reply
Writing a test for something like an MP3 ID tag parser is a good case for TDD with unit tests. It's pretty clear what the interface is, you just need to get the right answer, and you end up with a true unit test.
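A sketch of what makes that a "true unit test": fixed bytes in, plain values out, no I/O. The parser below is simplified from the real ID3v1 layout (a 128-byte trailer starting with "TAG", then fixed-width NUL/space-padded fields) and only extracts a few fields:

```python
def parse_id3v1(block: bytes):
    """Parse a 128-byte ID3v1 tag block; return None if the marker is absent.

    Simplified to title/artist/year; fields are fixed-width and padded
    with NULs or spaces, following the ID3v1 layout.
    """
    if len(block) != 128 or block[:3] != b"TAG":
        return None

    def text(raw: bytes) -> str:
        return raw.rstrip(b"\x00 ").decode("latin-1")

    return {
        "title": text(block[3:33]),
        "artist": text(block[33:63]),
        "year": text(block[93:97]),
    }

def test_parse_id3v1():
    tag = (b"TAG"
           + b"Blue Monday".ljust(30, b"\x00")   # title
           + b"New Order".ljust(30, b"\x00")     # artist
           + b"\x00" * 30                        # album (ignored here)
           + b"1983"                             # year
           + b"\x00" * 31)                       # comment + genre
    assert parse_id3v1(tag) == {"title": "Blue Monday",
                                "artist": "New Order",
                                "year": "1983"}
    assert parse_id3v1(b"\x00" * 128) is None

test_parse_id3v1()
```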
Doing TDD with a large new greenfield project is harder. Unless you have a track record of getting architecture right first time, individual tests will have to be rewritten as you rethink your model, which wastes a lot of energy. Far better is to test right at the outermost boundary of your code that isn’t in-question: for example a command line invocation of your tool doing some real world example. These typically turn into integration or end to end tests.
I tend to then let unit tests appear in stable (stable as in the design has settled) code as they are needed. For example, a bug report would result in a unit test to exhibit the bug and to put a fixed point on neighboring code, and then in the same commit you can fix the bug. Now you have a unit test too.
One important point to add is that while I reserve the right to claim to be quite good at some parts of my career, I’m kind of a mediocre software engineer, and I think I’m ok with that. The times in my career when I’ve really gotten myself in a bind have been where I’ve assumed my initial design was the right one and built my architecture up piece by piece — with included unit tests — only to find that once I’d built and climbed my monumental construction, I realized all I really needed was a quick wooden ladder to get up to the next level which itself is loaded with all kinds of new problems I hadn’t even thought of.
If you solve each level of a problem by building a beautiful polished work of art at each stage you risk having to throw it away if you made a wrong assumption, and at best, waste a lot of time.
Don’t overthink things. Get something working first. If you need a test to drive that process so be it, but that doesn’t mean it needs to be anything fancy or commit worthy.
[+] [-] notadoctor_ssh|6 years ago|reply
While doing this I also found one more benefit, at least for my use case. The backend for user login was simple when I started, but it started growing in a few weeks. Writing test cases saved me from manually logging in with each use case, testing some functionality, then logging out and repeating with other use cases.
Not sure if it is a practical benefit or not, but writing test cases initially also helped me rewrite the way I was configuring Redis for a custom module so that the module can be tested better.
My only issue is that it takes time, and selling this to higher-ups was kind of difficult.
[+] [-] MichaelMoser123|6 years ago|reply
[+] [-] koliber|6 years ago|reply
Here are cases where I've genuinely found it valuable and enjoyable to write tests ahead of time:
Some things are difficult to test. I've had things that involve a ton of setup, or a configuration with an external system. With tests you can automate that setup and run through a scenario. You can mock external systems. This gives you a way of setting up a scaffold into which your implementation will fall.
Things that involve time are also great for setting up test cases. Imagine some functionality where you do something, and need 3 weeks to pass before something else happens. Testing that by hand is effectively impossible. With test tools, you can fake the passing of time and have confirmation that your code is working well.
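A minimal sketch of faking the passage of time: instead of calling the clock directly, the code takes a clock function, and the test substitutes one it controls. (Libraries like freezegun do this more transparently; the trial-period scenario and names here are invented.)

```python
from datetime import datetime, timedelta

TRIAL_PERIOD = timedelta(weeks=3)

def trial_expired(started_at: datetime, now=datetime.now) -> bool:
    # The clock is an injectable parameter purely so tests can control time.
    return now() - started_at >= TRIAL_PERIOD

def test_trial_boundary():
    start = datetime(2024, 1, 1)
    # Three weeks after Jan 1 is Jan 22; probe both sides of the boundary
    # without waiting three real weeks.
    assert not trial_expired(start, now=lambda: datetime(2024, 1, 21, 23, 59))
    assert trial_expired(start, now=lambda: datetime(2024, 1, 22, 0, 1))

test_trial_boundary()
```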
Think about when you are writing some functionality that requires some involved logic, and UIs. It makes sense to implement the logic first. But how do you even invoke it without a UI? Write a test case! You can debug it through test runs without needing to invest time in writing a UI.
Bugs! Something esoteric breaks. I often write a test case named test_this_and_that__jira2987, where 2987 is the ticket number where the issue came up. I write a test case replicating the bug with only the essential conditions. Fixing it is a lot more enjoyable than walking through the replication script by hand. Additionally, it results in a good regression test that makes sure my team does not reintroduce the bug.
[+] [-] e12e|6 years ago|reply
I once had to write an integration for a "SOAP" web service that was... special. Apparently it was implemented in PHP (judging by the URL), by hand (judging by the... "special" features), and likely born as a refactor of a back-end for a Flash app (judging by the fact that they had a Flash app).
By trial and error (and with the help of the extensive, if not entirely accurate, documentation) via SoapUI and curl, I discovered that it expected the SOAP XML message inside a comment inside an XML SOAP message (which is interesting, as there are some characters that are illegal inside XML comments... and apparently they did parse these nested messages with a real XML library, I'm guessing libxml). I also discovered that the API was sensitive to the order of elements in the inner XML message.
Thankfully I managed to conjure up some valid POST bodies (along with the crazy replies the service provided, needed to test an entire "dialog") and could test against these, as I had to implement half of a broken SOAP library on top of an XML library and raw POST/GET due to the quirks.
At any rate, I don't think I'd ever have got that done and working if I couldn't write tests first.
Obviously the proper fix would've been to send a tactical team to hunt down the original developers and just say no to the client...