From the article: Because you want to ensure that you always pass the majority of tests, you tend to think about this when you change and extend the program. You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests.
Interesting. I've often found that the lack of tests leaves me absolutely terrified of making changes to large Ruby and JavaScript applications. I mean, if a test breaks, then I get a nice message right on the spot. But if I don't have tests, then the application itself breaks, and I may not find out until it's been put into production on a high-volume site or shipped to users.
Once an application crosses, say, 25,000 lines of code, it's hard to keep an entire program in my head, especially in a dynamic language and with multiple authors working on the code base. Under these conditions, large scale refactorings or framework upgrades can cause massive test failures, but the only alternative is to cause massive, unknown breakage.
One good way to limit breakage in such cases is to perform black-box tests solely at the API level. In the case of our Node.js-based backends we never write a single classical unit test; instead we have a custom framework built on top of Mocha which performs tests at the HTTP layer against all of our endpoints.
This works remarkably well in practice and allows for large-scale refactorings under the hood with little to no impact on the tests. We can also mock databases, memcached, redis and graylog at their respective http/tcp/udp level. This in turn means no custom-built mocks which could break when refactoring. The tests themselves also contain no logic; they are pretty much just chained method calls with data that should go in and an expected response that should come out, along with a specification of all external resources our API fetches during the request and their responses. Any unexpected outgoing HTTP request from our server will actually result in a test failure.
As for scaling this approach, from our experience it works quite well, especially when you have lots of complicated interactions with customer APIs during your requests since the flows are super quick to set up.
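A rough sketch of the logic-free, declarative test style described above (in Python rather than the commenter's Node.js/Mocha setup, with an invented endpoint handler standing in for a real HTTP server):

```python
# Each test case is just data: a request in and the exact expected response out.
# The runner contains the only logic; the cases themselves contain none.

def handle_get_user(request):
    # Stand-in for a real HTTP endpoint handler (hypothetical).
    user_id = request["params"]["id"]
    if not user_id.isdigit():
        return {"status": 400, "body": {"error": "invalid id"}}
    return {"status": 200, "body": {"id": int(user_id), "name": "user-" + user_id}}

CASES = [
    ({"params": {"id": "42"}}, {"status": 200, "body": {"id": 42, "name": "user-42"}}),
    ({"params": {"id": "abc"}}, {"status": 400, "body": {"error": "invalid id"}}),
]

def run_cases(handler, cases):
    failures = []
    for request, expected in cases:
        actual = handler(request)
        if actual != expected:
            failures.append((request, expected, actual))
    return failures

print(run_cases(handle_get_user, CASES))  # → [] when all cases pass
```

Because the cases only describe inputs and outputs, refactoring everything behind the handler leaves them untouched, which is the property the commenter is after.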
It's not a good argument against testing, I agree. Automated testing is a huge boon for software quality (though by no means a panacea).
But TDD, in my experience, gives a slightly different dynamic. Because you generated the code being motivated by the tests, there are a lot of unit tests that don't test functional units - tests that are essentially testing implementation decisions.
I've seen situations where that big refactoring would involve scrapping or rewriting tests, and that causes its own kind of architectural conservatism.
I ended up in a compromise position. I use a lot of tests, but I only do TDD in niche cases where it suits the problem I'm solving.
From what I understand author is not questioning the usefulness of testing, just TDD approach of writing tests first. I personally prefer a sandwich approach, I start with code first, and then write tests for it as soon as I have that little piece of logic finished (usually a method or two). Then I add more code, followed by more test, and so on. Works great for me and my team.
I too don't have the disposition to patiently wade through fixing tons and tons of broken tests over and over again. If I have to, I can do it, but I know better than to unreservedly trust my judgement about how things are progressing.
But what this has taught me is to stop trying to push water uphill. If testing is hard because of questionably coupled code, refactor it NOW instead of waiting for things to get bad. The refactoring almost always suggests new features we could add to the code (or makes me backpedal on pronouncements that certain things were 'impossible'), and the number and kind of tests that break is reduced.
I came across a quote recently from Bertrand Meyer (the Design By Contract guy), where he suggested that code for making decisions and code for acting on those decisions should be separated. I found myself nodding along to this because it was something I knew intuitively but had never articulated: Decisions without actions don't need mocks, and actions without decisions need at most one or two (often none). Decisions can be tested (possibly in parallel) with unit tests, and actions tested with functional tests. Then all your integration testing is that decisions lead to actions (basically that event dispatch works) and that catastrophic failures are handled gracefully. A proper testing pyramid.
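A minimal sketch of the decision/action split described above (function names and the discount rules are invented for illustration):

```python
# Decisions are pure functions: unit-testable with no mocks at all.
# Actions consume a decision's output: few branches, covered by functional tests.

def decide_discount(order_total, is_member):
    """Pure decision: no side effects, trivially unit-testable."""
    if is_member and order_total >= 100:
        return 0.10
    if order_total >= 200:
        return 0.05
    return 0.0

def apply_discount(order, rate):
    """Action: takes a decision's output and acts on it."""
    order["total"] = round(order["total"] * (1 - rate), 2)
    return order

# Integration testing then reduces to: the decision feeds the action.
order = {"total": 150.0}
rate = decide_discount(order["total"], is_member=True)
print(apply_discount(order, rate))  # → {'total': 135.0}
```

The payoff is exactly the pyramid shape described: many cheap unit tests on `decide_discount`, a few functional tests on `apply_discount`, and a thin layer checking that the two are wired together.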
Now I want to figure out which of his books or interviews this was in because I want to read the rest of what he had to say on the subject.
That's different, I'm against TDD for the same reason the author is. I'm FOR automated testing for exactly the same reasons you are.
Think of it this way: pre-testing makes you afraid to change your mind because you'd have to throw out your tests; no post-testing makes you afraid to change your mind because you might break something. If you think changing your mind is good, the path forward is clear.
> Under these conditions, large scale refactorings or framework upgrades can cause massive test failures
An even bigger problem is that large scale refactorings can result in the tests themselves no longer being correct (which isn't the same thing as “not passing”), if you test at too fine a level of granularity. However, when you wrote these tests, you couldn't have possibly foreseen a large scale refactoring 18 months into the future, so how do you tell in advance what the right level of granularity for your tests is?
When tests function as a type system as well, this is true. But a good type system will guide refactoring, and a good module system and IDE will guarantee isolation.
If one test breaks you get a nice message. If 20% of your tests break because they depend on functionality that you just changed intentionally, you increased the cost of your feature a lot.
I suspect the author of the article is only giving up on test first or test driven development where you write tests before writing each bit of the program. I doubt he is against having a corpus of tests for your program.
Agreed. I'd argue that if a large number of tests break when a "large" change occurs in an application, those tests are either testing the wrong thing, or written with bad starting assumptions. Yes, there's a transitional period where the responsibility may shift from the caller to the call-ee to take care of some resource, and yes, stuff will break, but the amount of test breakage should be proportional to the impact to the application.
I greatly-prefer the warm fuzziness of knowing that I have a large number of unit tests as a "safety net" to detect if my own thoughts on what is "correct" are in some way wrong.
I think the language and in particular the type system matters a lot. So making big changes in the absence of tests in a Swift or even the somewhat more loosely typed Objective C is less scary than in Ruby and Javascript application.
In my observation and feeling, the statement:
" Because you want to ensure that you always pass the majority of tests, you tend to think about this when you change and extend the program. You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests."
is most likely due to the low quality and low test coverage of the exiting code base. So that the tests are fragile, seemingly hinders changes.
If things are started with TDD, I would be really surprised that tests hinders change.
Ruby and Javascript are dynamic languages, so you need all those tests to make up for missing compiler type checks. With statically typed languages like Java or Scala things are much different.
Don't expect just because there are no test failures that you haven't broken anything. I can't stand developers who have this mindset.
Just because there are unit tests doesn't mean they cover all the code, or that they are written properly to truly detect when things have changed for the worse.
Tests are not a replacement for actually testing and verifying your changes in whatever application environment you're working in.
While I understand that the conservatism the author refers to might be detrimental to the individual working alone, in a team environment it is actually an advantage. The module the team member is working on is likely a dependency for some other team, and any test-breaking refactoring---however beneficial it might seem---needs to be conscientiously approached and coordinated with other people. There may be very good reasons not to pursue refactoring at the moment, reasons that an individual narrowly focused on the module at hand might not be aware of.
Over the years I've gone from writing no tests at all, to being a die hard TDD purist, and then out the other side to writing some things with tests first, others with tests after writing the code, and some without any tests at all.
In some situations I have a clear view of what I need to build, and how it should work. TDD is great in that case - write a test for the expected behaviour, make it pass, refactor, rinse and repeat. The element I think a lot of people miss when doing this is higher-level integration tests that ensure everything works together, because that's the hard bit, but it's also essential.
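The loop described there, in miniature (the `slugify` function and its expected behaviour are invented for illustration):

```python
# Red-green-refactor in its smallest form: the test states the expected
# behaviour first, then the function is written to satisfy it.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already-Slugged  ") == "already-slugged"

# Written after the test, doing just enough to make it pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
print("ok")
```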
In other situations you're still feeling out the problem space and don't necessarily know exactly what the solution is. There's an argument that in those cases you should find a solution with some exploratory development, then throw it all out and do it with TDD. If I've got the time that's probably true; it'll result in better code simply through designing it the second time with the insight provided by the first pass, but often that just isn't viable. Deadlines loom, there's a bug that needs fixing elsewhere, and I've got two hours until the end of the week and a meal date with my wife.
Finally there's the times when tests just aren't needed, or they don't offer a decent return on the effort that will be required to make them work. I'm thinking GUIs, integration testing HTTP requests to other services, and intentionally non-deterministic code. Those cases certainly can be tested, but it often results in a much more abstract design than would otherwise be called for, and brittle tests. Brittle tests mean that eventually you stop paying attention to the test suite failing because it's probably just a GUI test again, and that eventually leads to nasty bugs making it into production.
One thing I'll directly say on the article is that I found his opinion that it's hard to write tests for handling bad data surprising. That's almost the easiest thing to test, especially if you're finding bugs due to bad data in the real world - you take the bad data that caused the bug, reduce it down to a test case, then make the test pass. That process has been a huge boon in developing a data ingestion system for an e-commerce platform to import data from partners' websites, as it's simply a case of dumping the HTTP response to a file and then writing a test against it rather than having to hit the partner's website constantly.
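A sketch of that fixture workflow: capture the bad payload once, then replay it forever as a regression test (the parser, the fixture path, and the comma-decimal bug are all invented for illustration):

```python
import json

def parse_product(payload):
    data = json.loads(payload)
    # The hypothetical bug: price sometimes arrives as "12,99" instead of 12.99.
    price = data["price"]
    if isinstance(price, str):
        price = float(price.replace(",", "."))
    return {"name": data["name"], "price": price}

# In practice the payload would be read from a dumped fixture file, e.g.:
#   payload = open("fixtures/partner_response_1234.json").read()
payload = '{"name": "Widget", "price": "12,99"}'

print(parse_product(payload))  # → {'name': 'Widget', 'price': 12.99}
```

Each production incident adds one more fixture file, so the test suite grows into a catalogue of real-world bad data rather than guesses about it.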
Over the years I've gone from writing no tests at all, to being a die hard TDD purist, and then out the other side to writing some things with tests first, others with tests after writing the code, and some without any tests at all.
Hear, hear! Exactly the same here, for the same reasons as you and the OP mention.
And observed from a distance, it's always the same universal principle: purist behaviour (in the sense of almost religious beliefs that something is a strict rule, sentences starting with "Always", etc., you get the point) in programming or, to a further extent, in life, is nearly always wrong, period. No matter what the rule is, you can pretty much always find yourself in a situation where the rule is not the best choice.
> I deliberately decided to experiment with test-first development a few months ago
and
> the programs I am writing are my own personal projects
If you have no real experience with TDD and you're hacking away at personal projects, I can see where you might not find TDD to be useful.
If you're experienced with it and working on a project that is going to be sizable and for use by others who might be paying to use it, TDD is indispensable.
I've seen a lot of fads come and go over the years. I've tried many of them out just to see how they fit my style of working. TDD is one of those paradigms that has withstood the test of time.
Much more important than code coverage and ability to make changes without breaking things as mentioned in the article, TDD forces you to think about your software components from the client perspective first. It's that discipline that I appreciate more than anything.
I like the idea of TDD, but I rarely use it. The problem is that TDD works well when you already know what you're going to build and how, that way figuring out how to put new code under test isn't too onerous.
If the company is paying you for it, absolutely take the extra time to TDD, and do your best to maintain the test base. That's what they're paying you for.
If you're greenfielding a side project, TDD is only going to slow you down. Time spent learning your domain will get redirected to "figuring out how to put X under test", significantly increasing time-to-market. Get your product to market, find product-market fit, and get some resources to re-engineer your product with, and don't do it yourself because you'll have more important things on your plate.
If you're early in your career, spend some time to learn TDD. Don't do it on your own projects, let someone else pay you to work it out. Don't actually use it unless someone's paying you to, but learn how it works, what it buys you, and what it costs.
TDD feels completely unnatural to me. I, like I suspect most humans, want to build and have a thirst for results.
Also, I think it's a solid point that TDD can overly influence your program design. Program design should typically be mostly driven by end user needs/desires.
> I won’t spend ridiculous amounts of time writing tests when I can read the code and clearly understand what it does.
Is he arguing that simply trusting yourself not to make mistakes is a sufficient guarantor of quality? Ian Sommerville is the author of a famous textbook on software engineering, so it would be surprising if he was.
TDD is actually much more difficult in practice than people realise. I read Kent Beck's book and thought it sounded like utter horseshit. I tried doing it myself and decided it was definitely horseshit.
Then I came to work at Pivotal Labs. Now I am distrustful of code that was written before tests.
As for the argument that TDD distracts from the big picture, this is like saying that indicator signals and mirror checks distract from driving. Sure, when you are learning to drive, you feel so overwhelmed by attending to all the little details that you struggle to drive at all. You become unable to focus on your navigation.
After a while you learn the skill and it becomes automatic. TDD is such a skill.
* fixing bugs - reproducing the bug in a test case both confirms you're fixing the thing, and acts as a regression test
* defining a protocol - where you need to glue two things together, e.g. a front end UI and a back end controller, or a model shared between different modules
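The first bullet's workflow can be sketched like this (the word-count bug is invented for illustration): reduce the bug report to a failing assertion, fix the code, and keep the assertion as a permanent regression guard.

```python
def word_count(text):
    # The original buggy version split on a single space, so "a  b"
    # produced an empty "word". str.split() with no argument fixes that.
    return len(text.split())

# The reduced bug report becomes a permanent test case.
assert word_count("a  b") == 2
assert word_count("") == 0
print("regression tests pass")
```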
It's a lot weaker for design. Test-first tends to encourage overly open-to-extension abstractions, because you need to make things visible to tests and make components replaceable by mocks. In the early stages, the weight of updating tests makes the design overly awkward to change. And early on in the process is exactly the wrong time to be creating your general abstractions - that's when you have the least amount of information about what will make for the best abstraction.
You still need to back-fill tests after good abstractions have been chosen, of course. Tests are great; test-first, specifically, isn't always best.
> I’m sure that TDD purists would say that I’m not doing it right so I’m not getting the real benefits of TDD.
"You are not doing it right! Take this 3-day course for a grand and buy these books. Also hire a TDD/Scrum/Agile coach for your team for a few months. There you go! (in Eric Cartman's voice)."
> Psychologically, you become conservative to avoid breaking lots of tests.
Psychologically, the FIRST problem is that there is a distinct separation in his head between "working" code and "test" code. They are essentially married together. "Breaking tests" is simply identifying now-broken functionality that would NOT have been highlighted had he made the change WITHOUT that test coverage.
Basically, I don't understand how one could come to this conclusion unless one was 1) terrible at writing tests, 2) did too many integration tests and not enough unit tests, or 3) had the wrong frame of mind when considering test code as "distinct and separate" from the code under test.
> But as I started implementing a GUI, the tests got harder to write
That is due to the architecture of the GUI, not a fault of TDD itself. As this Stack Overflow answer puts it, http://stackoverflow.com/questions/382946/how-to-apply-test-..., "you don't apply TDD to the GUI, you design the GUI in such a way that there's a layer just underneath you can develop with TDD. The GUI is reduced to a trivial mapping of controls to the ViewModel, often with framework bindings, and so is ignored for TDD."
I think we're in this world I'd like to call guardrail programming. It's really sad. We're like "I can make change because I have tests". Who does that? Who drives their car around banging against the guardrail saying, "Whoa! I'm glad I've got these guardrails because I'd never make it to the show on time".
TDD is nuts for code without a client or specification. The whole point of tests is to ensure that the code does what it's supposed to do. When you have neither client nor spec, how are you supposed to know what the code is supposed to do? There is, IME, a >90% chance that any such code will be ripped out and replaced as you develop a better understanding of the problem domain.
I've found it's pretty useful to go back and add tests as you accumulate users, though (or convince an exec that your project is Too Big To Fail in the corporate world). You're capturing your accumulated knowledge of the problem domain in executable form, so that when you try an alternate solution, you won't suddenly find out - the hard way - that it doesn't consider some case you fixed a couple years ago.
Like all things in life TDD should be taken in moderation.
It's an excellent process to create stable and maintainable code, but it does not fit every bill.
But abandoning it completely on the grounds that it sometimes makes you write "bad software" is a bit weird to me, in fact, one of the main arguments for TDD is that it makes you write better code.
I found that it does make you write better code many times. So like all things, use when appropriate.
I guess the real hard thing is to determine when using TDD is appropriate.
I like to think that, like with any other technique, with experience comes the ability to decide when not to apply it. Any old tutorial will show you an example of when it works, but only with experience will you learn when it might not.
I don't understand why people tend to turn useful things into strict ideologies.
TDD is super useful for simple algorithms. It gets harder once you get into more complex scenarios like multiple components working together. If TDD is not useful for some cases then don't use it there and use it where applicable. Or think about how you can make it work. It's not that hard.
> Like all things in life TDD should be taken in moderation.
Your points are all good, but your opening one especially.
The obsession that some developers have with methodological purity can be really puzzling sometimes.
Where I'm at: I work in research and engineering (in the gov't, on the borderline between industry and academia), and I can say that doing any sort of testing at all (either up front or later on) is an improvement over the untested, undocumented, unversion-controlled status quo that exists in a lot of cases.
C.S. Degree in 1996, 20 years "professional" programmer and I never once thought TDD was helping my project. Every time I did it, it was because the boss told me I had to. Litmus test: every side project I did just for me, I never did TDD.
As a suggestion, QuickCheck type testing frameworks are good for finding bugs relating to unexpected data.
Summary of how they work: you say "this program takes a 32 bit signed int and a string" and the testing framework will throw it a heap of random ints and strings, some of which match the sorts of classic curve balls it might encounter (negative numbers, int max and min values, strings that are very long, empty strings, strings with \0 in them, strings that don't parse as valid unicode, "Robert');DROP TABLE Students; --", and so on.)
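A hand-rolled sketch of the idea (real QuickCheck-style libraries like Hypothesis do far more, e.g. shrinking failing inputs; the `clamp` function is invented for illustration): generate curve-ball and random values, and check an invariant rather than exact outputs.

```python
import random

def clamp(n, lo=0, hi=100):
    return max(lo, min(hi, n))

# Classic curve balls plus a pile of random values.
CURVE_BALLS = [0, -1, 1, 2**31 - 1, -2**31, 42]
random_ints = [random.randint(-2**31, 2**31 - 1) for _ in range(100)]

for n in CURVE_BALLS + random_ints:
    result = clamp(n)
    # The property under test is an invariant, not an exact value:
    # the output must always land within the bounds.
    assert 0 <= result <= 100, n
print("invariant held for all inputs")
```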
"...because I think it encourages conservatism in programming, it encourages you to make design decisions that can be tested rather than the right decisions for the program users.." - Couldn't agree more with this.
TDD only works if the tests are written correctly.
Tests are not about "code coverage", nor about establishing the exact sequence of things in stone. Tests are about fixing invariants.
When a new project starts, I only know about 10-15% things for sure, and those are exactly which will go in tests, before writing any new code. I don't worry about some things in my code are not yet covered by tests; I don't know yet how they will turn out, so I can't write any meaningful invariants.
In my experience, useful tests are much higher-level than TDD guys prefer. They routinely fix invariants for the entire system / subproject, rather than assuring coverage of every method in a class (some crazy folks are even testing getters and setters — why?)
> some crazy folks are even testing getters and setters — why?
Well, I can see the logic if your getters and setters are hiding more activity than simply retrieving/setting the value of a private field, which is the point of having separate getters/setters at all.
If you imagine a getFullName() / setFullName(name) pair, for example, that actually reads from/writes to two different private fields for first and last name (leaving aside middle names, internationalisation, etc), then there's some minimal logic there that you might want to test.
In a duck-typed language, when you're trying to ensure a class obeys an implicit interface, it may also have value.
Apart from that, for vanilla getters/setters, it's a little pointless.
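The `getFullName` / `setFullName` pair mentioned above might look like this in Python (illustration only, ignoring middle names and internationalisation as the commenter notes):

```python
class Person:
    def __init__(self):
        self._first = ""
        self._last = ""

    def get_full_name(self):
        return f"{self._first} {self._last}".strip()

    def set_full_name(self, name):
        # The minimal logic behind the accessors: splitting one public
        # value into two private fields is what makes them worth testing.
        self._first, _, self._last = name.partition(" ")

p = Person()
p.set_full_name("Ada Lovelace")
print(p.get_full_name())  # → Ada Lovelace
```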
One of the things that people aim for in writing tests is orthogonality - different tests should not break for the same reason. This promotes the ability to refactor and change your code. I have also seen massive codebases, with masses of tests which were rarely run, and which effectively concreted the code and stopped it from changing.
When writing tests is boring, difficult, and tedious, that's a really good time to think hard about the way the program is structured, if you have time for this.
The way to make testing pleasant is to extract more and more behavior into units with clear boundaries... realizing how to do this was a major event in my programming career, and I attribute the insight partly to doing TDD.
I don't agree with some posters who say that TDD encourages overly generalized design. Sure, it encourages some form of dependency injection... but mostly, it just encourages the creation of coherent and loosely coupled units, which is a universally lauded best practice.
When I finally decided to really decouple and mock dependencies and create "pure" unit tests was the major "Ah ha!" moment of TDD. I had been trying it out here and there but never really saw the benefit because I wasn't really creating nice testable units.
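A sketch of that kind of decoupling (the payment-gateway scenario is invented): the unit keeps all side effects behind an injected dependency, so the test replaces it with a mock and never touches a real external system.

```python
from unittest.mock import Mock

def charge_order(order_total, payment_gateway):
    # Coherent, loosely coupled unit: the only side effect
    # lives behind the injected gateway.
    if order_total <= 0:
        return "skipped"
    payment_gateway.charge(order_total)
    return "charged"

# A "pure" unit test: the gateway is a mock, so the test is fast
# and checks both the return value and the outgoing call.
gateway = Mock()
assert charge_order(25.0, gateway) == "charged"
gateway.charge.assert_called_once_with(25.0)

assert charge_order(0, gateway) == "skipped"
print("unit tests pass without touching a real payment system")
```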
It is hard to say for sure, because the author doesn't give any specifics but wording like:
"You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests."
make me think the author wasn't actually making unit tests, instead they were likely end-to-end tests, or partial ETE tests that were running inside a unit test framework. I have had many disagreements with other developers that just because your test runs inside a unit testing framework doesn't actually make your test a "unit" test.
It encourages dependency injection, which in my experience will eventually encourage a functional programming style since there's so little barrier to it once you're doing things like injecting the current time into a method which makes use of it. Maybe I'll turn around in five year's time and regret saying this, but I've never regretted pushing a project in a functional direction.
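The "inject the current time" example from that comment, sketched in Python (the `greeting` function is invented):

```python
from datetime import datetime, timezone

def greeting(now=None):
    # The clock is a parameter, so tests can pin it to any moment
    # while production code just omits the argument.
    now = now or datetime.now(timezone.utc)
    return "Good morning" if now.hour < 12 else "Good afternoon"

# Tests inject fixed instants instead of depending on when they run.
morning = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
evening = datetime(2024, 1, 1, 18, 0, tzinfo=timezone.utc)
print(greeting(morning), "/", greeting(evening))  # → Good morning / Good afternoon
```

Once time, randomness, and I/O all arrive as parameters, the remaining code is effectively a pure function, which is the functional drift the commenter describes.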
TDD is best when you're writing code that talks to other code. So APIs, database models, etc. Pure functions, and code that has dependencies you can inject and mock. You should never abandon TDD in situations like this.
It's true that it's harder to write TDD for code with side effects or that draws UI. It doesn't really make sense to use TDD for this.
You shouldn't conflate the two. Also, "always pass the majority of tests" is a trap. You should always pass all the tests.
Source: I've been managing and working in automated testing and continuous integration systems for 8 years, dating back to before the term was coined. I was the manager of the system, at IMVU, that coined the term "continuous integration". I've also worked on testing at Sauce Labs and Google.
Just like the chicken and egg, it doesn't matter which comes first, code or test. The key is that both are written, ideally around the same time and as part of the same changeset. Refactoring post hoc for testability is tricky and often brings to the surface poor software designs in the original implementation - bad coupling, module dependencies, leaky abstractions, etc.
Most people probably felt the same way after only a "few months" (best-case, perhaps less) of practicing TDD.
And certainly TDD is harder as you approach the GUI, you want to test in vague ways which don't break with every change. If you thoroughly test all of the underlying behavior, implementing a GUI is typically incredibly trivial because everything beneath it is known (and proven) to work. Most of the article is not related to the GUI.
> ...it doesn’t work very well for a major class of program problems – those caused by unexpected data.
This is a hollow argument. Regardless of development methodology, if unexpected data isn't considered at all it could have all kinds of side effects.
Regarding conservatism with breaking tests (many tests failing for one change), it's likely the result of a structural problem within the application if it's an intimidating number of failures.
> It is easier to test some program designs than others. Sometimes, the best design is one that's hard to test so you are more reluctant to take this approach because you know that you'll spend a lot more time designing and writing tests (which I, for one, find quite a boring thing to do)
Not sure how this applies to TDD, if you're writing tests first you aren't deeply concerned with designing tests because you're imagining what the interface for well-designed code would be, and then you write it. It frequently sounds like the author jumps into writing tests without any forethought.
> In my experience, lots of program failures arise because the data being processed is not what’s expected by the programmer. It’s really hard to write ‘bad data’ tests that accurately reflect the real bad data you will have to process because you have to be a domain expert to understand the data.
If you don't understand the variety of inputs, how can you possibly validate them? Programmers should have some domain understanding, certainly program inputs fall within that realm.
> Think-first rather than test-first is the way to go.
I agree; but step 2 should be testing in my opinion. Test first is just the first tangible work product, it isn't a ban on thinking.
[+] [-] ekidd|10 years ago|reply
Interesting. I've often found that the lack of tests leaves me absolutely terrified of making changes to large Ruby and JavaScript applications. I mean, if a test breaks, then I get a nice message right on the spot. But if I don't have tests, then the application itself breaks, and I may not find out until it's been put into production on a high-volume site or shipped to users.
Once an application crosses, say, 25,000 lines of code, it's hard to keep an entire program in my head, especially in a dynamic language and with multiple authors working on the code base. Under these conditions, large scale refactorings or framework upgrades can cause massive test failures, but the only alternative is to cause massive, unknown breakage.
[+] [-] BonsaiDen|10 years ago|reply
This works remarkable well in practice and allows for large scale refactorings under the hood with little to no impact on the tests. We can also mock databases, memcached, redis and graylog on their respective http/tcp/udp level. This in turn means no custom build mocks which could break when refactoring. The tests itself also contain no logic, they are pretty much just chained method calls with data that should go in and an expected response that should come out, along with a specification of all external resource our API fetches during the request and their responses etc. Any unexpected outgoing HTTP request from our server will actually result in a test failure.
As for scaling this approach, from our experience it works quite well, especially when you have lots of complicated interactions with customer APIs during your requests since the flows are super quick to set up.
[+] [-] sago|10 years ago|reply
But TDD, in my experience, gives a slightly different dynamic. Because you generated the code being motivated by the tests, there are a lot of unit tests that don't test functional units - tests that are essentially testing implementation decisions.
I've seen situations where that big refactoring would involve scrapping or rewriting tests, and that causes its own kind of architectural conservatism.
I ended up in a compromise position. I use a lot of tests, but I only do TDD in niche cases where it suits the problem I'm solving.
[+] [-] ivanhoe|10 years ago|reply
[+] [-] hinkley|10 years ago|reply
I too don't have the disposition to patiently wade through fixing tons of tons of broken tests over and over again. IF I have to I can do it, but I know better than to unreservedly trust my judgement about how things are progressing.
But what this has taught me is to stop trying to push water uphill. If testing is hard because of questionably coupled code, refactor it NOW instead of waiting for things to get bad. The refactoring almost always suggests new features we could add to the code (or makes me backpedal on pronouncements that certain things were 'impossible'), and the number and kind of tests that break is reduced.
I came across a quote recently from Bertrand Meyer (the Design By Contract guy), where he suggested that code for making decisions and code for acting on those decisions should be separated. I found myself nodding along to this because it was something I knew intuitively but had never articulated: Decisions without actions don't need mocks, and actions without decisions need at most one or two (often none). Decisions can be tested (possibly in parallel) with unit tests, and actions tested with functional tests. Then all your integration testing is that decisions lead to actions (basically that event dispatch works) and that catastrophic failures are handled gracefully. A proper testing pyramid.
Now I want to figure out which of his books or interviews this was in because I want to read the rest of what he had to say on the subject.
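A minimal sketch of Meyer's split, with invented function names and thresholds: the decision is a pure function that needs no mocks, and the action has almost no branching of its own.

```javascript
// Hypothetical example of separating decisions from actions
// (names and thresholds are invented for illustration).

// Decision: pure function, no side effects -> unit-testable without mocks.
function decideDiscount(order) {
  if (order.total >= 100) return { apply: true, percent: 10 };
  return { apply: false, percent: 0 };
}

// Action: applies a decision, contains (almost) no branching of its own ->
// covered by a functional test, needs at most one stub for the side effect.
function applyDiscount(order, decision, notify) {
  const total = decision.apply
    ? order.total * (1 - decision.percent / 100)
    : order.total;
  notify(`order total: ${total}`); // the single side effect
  return total;
}

// Integration testing is then just "decision leads to action":
const order = { total: 120 };
const messages = [];
const total = applyDiscount(order, decideDiscount(order), (m) => messages.push(m));
console.assert(total === 108, 'expected 10% off 120');
console.assert(messages.length === 1, 'notify called once');
```

Note how the dispatch glue is the only place that needs an integration test, exactly as described above.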
[+] [-] Ensorceled|10 years ago|reply
Think of it this way: pre-testing makes you afraid to change your mind because you throw out your tests, while post-testing makes you afraid to change your mind because you might break something. If you think changing your mind is good, the path forward is clear.
[+] [-] catnaroek|10 years ago|reply
An even bigger problem is that large scale refactorings can result in the tests themselves no longer being correct (which isn't the same thing as “not passing”), if you test at too fine a level of granularity. However, when you wrote these tests, you couldn't have possibly foreseen a large scale refactoring 18 months into the future, so how do you tell in advance what the right level of granularity for your tests is?
[+] [-] kazagistar|10 years ago|reply
[+] [-] Illniyar|10 years ago|reply
[+] [-] adrianN|10 years ago|reply
[+] [-] plinkplonk|10 years ago|reply
[+] [-] mikehollinger|10 years ago|reply
I greatly prefer the warm fuzziness of knowing that I have a large number of unit tests as a "safety net" to detect if my own thoughts on what is "correct" are in some way wrong.
[+] [-] josephlord|10 years ago|reply
[+] [-] justicezyx|10 years ago|reply
In my observation and feeling, the statement "Because you want to ensure that you always pass the majority of tests, you tend to think about this when you change and extend the program. You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests."
is most likely due to the low quality and low test coverage of the existing code base, so that the tests are fragile and seemingly hinder changes.
If things were started with TDD, I would be really surprised if the tests hindered change.
[+] [-] mamon|10 years ago|reply
[+] [-] iamleppert|10 years ago|reply
Just because there are unit tests doesn't mean they cover all the code, or that they are written properly to truly detect when things have changed for the worse.
Tests are not a replacement for actually testing and verifying your changes in whatever application environment you're working in.
[+] [-] vannevar|10 years ago|reply
[+] [-] unknown|10 years ago|reply
[deleted]
[+] [-] choward|10 years ago|reply
I agree. If I'm working on some part of the app that doesn't have any test coverage, I might add some high level tests before I start.
[+] [-] jon-wood|10 years ago|reply
In some situations I have a clear view of what I need to build, and how it should work. TDD is great in that case - write a test for the expected behaviour, make it pass, refactor, rinse and repeat. The element I think a lot of people miss when doing this is higher-level integration tests that ensure everything works together, because that's the hard bit, but it's also essential.
Other situations you're still feeling out the problem space and don't necessarily know exactly what the solution is. There's an argument that in those cases you should find a solution with some exploratory development then throw it all out and do it with TDD. If I've got the time that's probably true, it'll result in better code simply through designing it the second time with the insight provided by the first pass, but often that just isn't viable. Deadlines loom, there's a bug that needs fixing elsewhere, and I've got two hours until the end of the week and a meal date with my wife.
Finally there's the times when tests just aren't needed, or they don't offer a decent return on the effort required to make them work. I'm thinking GUIs, integration testing HTTP requests to other services, and intentionally non-deterministic code. Those cases certainly can be tested, but it often results in a much more abstract design than would otherwise be called for, and brittle tests. Brittle tests mean that eventually you stop paying attention to the test suite failing because it's probably just a GUI test again, and that eventually leads to nasty bugs making it into production.
One thing I'll directly say on the article is that I found his opinion that it's hard to write tests for handling bad data odd. That's almost the easiest thing to test, especially if you're finding bugs due to bad data in the real world - you take the bad data that caused the bug, reduce it down to a test case, then make the test pass. That process has been a huge boon in developing a data ingestion system for an e-commerce platform that imports data from partners' websites, since it's simply a case of dumping the HTTP response to a file and then writing a test against it rather than having to hit the partner's website constantly.
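The bad-data-to-regression-test workflow described above can be sketched like this; `parsePrice` and the captured payload are invented stand-ins (in practice the payload would be a dumped HTTP response saved to a fixture file).

```javascript
// Sketch of the "bad data -> regression test" workflow.
// parsePrice and the payload below are hypothetical examples.

// The production bug: partner sites sometimes send prices like "1.299,00 €".
function parsePrice(raw) {
  // Strip currency symbols, drop thousands separators, normalize the
  // decimal comma to a dot before parsing.
  const cleaned = raw
    .replace(/[^\d,.]/g, '')
    .replace(/\./g, '')
    .replace(',', '.');
  return parseFloat(cleaned);
}

// Captured bad data, reduced to a minimal regression test case:
const badPayload = '1.299,00 €';
console.assert(parsePrice(badPayload) === 1299, 'regression: EU price format');
console.assert(parsePrice('42,50') === 42.5, 'plain comma decimal');
```

Once the captured payload passes, it stays in the suite forever as a regression test.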
[+] [-] stinos|10 years ago|reply
Hear, hear! Exactly the same here, for the same reasons as you and the OP mention.
And observed from a distance, it's always the same universal principle: purist behaviour (in the sense of almost religious belief that something is a strict rule, sentences starting with "Always", etc. - you get the point) in programming, or to a further extent in life, is nearly always wrong, period. No matter what the rule is, you can pretty much always find yourself in a situation where the rule is not the best choice.
[+] [-] crusso|10 years ago|reply
> I deliberately decided to experiment with test-first development a few months ago.
and
> the programs I am writing are my own personal projects
If you have no real experience with TDD and you're hacking away at personal projects, I can see where you might not find TDD to be useful.
If you're experienced with it and working on a project that is going to be sizable and for use by others who might be paying to use it, TDD is indispensable.
I've seen a lot of fads come and go over the years. I've tried many of them out just to see how they fit my style of working. TDD is one of those paradigms that has withstood the test of time.
Much more important than the code coverage and ability to make changes without breaking things mentioned in the article, TDD forces you to think about your software components from the client perspective first. It's that discipline that I appreciate more than anything.
[+] [-] henrik_w|10 years ago|reply
[+] [-] vinceguidry|10 years ago|reply
If the company is paying you for it, absolutely take the extra time to TDD, and do your best to maintain the test base. That's what they're paying you for.
If you're greenfielding a side project, TDD is only going to slow you down. Time spent learning your domain will get redirected to "figuring out how to put X under test", significantly increasing time-to-market. Get your product to market, find product-market fit, and get some resources to re-engineer your product with, and don't do it yourself because you'll have more important things on your plate.
If you're early in your career, spend some time to learn TDD. Don't do it on your own projects, let someone else pay you to work it out. Don't actually use it unless someone's paying you to, but learn how it works, what it buys you, and what it costs.
[+] [-] pbreit|10 years ago|reply
Also, I think it's a solid point that TDD can overly influence your program design. Program design should typically be mostly driven by end user needs/desires.
[+] [-] jacques_chester|10 years ago|reply
Is he arguing that simply trusting yourself not to make mistakes is a sufficient guarantor of quality? Ian Sommerville is the author of a famous textbook on software engineering, so it would be surprising if he was.
TDD is actually much more difficult in practice than people realise. I read Kent Beck's book and thought it sounded like utter horseshit. I tried doing it myself and decided it was definitely horseshit.
Then I came to work at Pivotal Labs. Now I am distrustful of code that was written before tests.
As for the argument that TDD distracts from the big picture, this is like saying that indicator signals and mirror checks distract from driving. Sure, when you are learning to drive, you feel so overwhelmed by attending to all the little details that you struggle to drive at all. You become unable to focus on your navigation.
After a while you learn the skill and it becomes automatic. TDD is such a skill.
[+] [-] Chris2048|10 years ago|reply
And what do you get with regards to specs? It seems to me the methodology fits how requirements are derived.
[+] [-] barrkel|10 years ago|reply
Test-first works well in a couple of situations:
* fixing bugs - reproducing the bug in a test case both confirms you're fixing the thing, and acts as a regression test
* defining a protocol - where you need to glue two things together, e.g. a front end UI and a back end controller, or a model shared between different modules
It's a lot weaker for design. Test-first tends to encourage overly open-to-extension abstractions, because you need to make things visible to tests and make components replaceable by mocks. In the early stages, the weight of updating tests makes the design overly awkward to change. And early on in the process is exactly the wrong time to be creating your general abstractions - that's when you have the least amount of information about what will make for the best abstraction.
You still need to back-fill tests after good abstractions have been chosen, of course. Tests are great; test-first, specifically, isn't always best.
[+] [-] koder2016|10 years ago|reply
"You are not doing it right! Take this 3-day course for a grand and buy these books. Also hire a TDD/Scrum/Agile coach for your team for a few months. There you go! (in Eric Cartman's voice)."
[+] [-] pmarreck|10 years ago|reply
Psychologically, the FIRST problem is that there is a distinct separation in his head between "working" code and "test" code. In reality, they are essentially married together. "Breaking tests" is simply identifying now-broken functionality that would NOT have been highlighted had he made the change WITHOUT that test coverage.
Basically, I don't understand how one could come to this conclusion unless one was 1) terrible at writing tests, 2) did too many integration tests and not enough unit tests, or 3) had the wrong frame of mind when considering test code as "distinct and separate" from the code under test.
But as I started implementing a GUI, the tests got harder to write
That is due to the architecture of the GUI, not a fault of TDD itself. As this Stack Overflow answer (http://stackoverflow.com/questions/382946/how-to-apply-test-...) says, "you don't apply TDD to the GUI, you design the GUI in such a way that there's a layer just underneath you can develop with TDD. The GUI is reduced to a trivial mapping of controls to the ViewModel, often with framework bindings, and so is ignored for TDD."
If the GUI is not architected in a way that makes that easy, then you're going to have a bad time, admittedly. See: http://alistair.cockburn.us/Hexagonal+architecture and the Boundaries talk https://www.destroyallsoftware.com/talks/boundaries for examples of ways you can reduce I/O to an extremely thin layer that can be tested in isolation.
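A minimal sketch of that ViewModel layer, with an invented `LoginViewModel`: all the logic sits below the GUI, so it can be test-driven normally while the GUI itself stays a trivial binding layer.

```javascript
// Hypothetical ViewModel: all the behavior lives here, GUI-free,
// so TDD applies to it like to any other plain object.
class LoginViewModel {
  constructor() {
    this.email = '';
    this.password = '';
  }
  // The GUI would bind a button's enabled state to this property.
  get canSubmit() {
    return this.email.includes('@') && this.password.length >= 8;
  }
}

// Tests never touch a widget; they drive the ViewModel directly:
const vm = new LoginViewModel();
console.assert(vm.canSubmit === false, 'empty form cannot submit');
vm.email = 'a@example.com';
vm.password = 'hunter2hunter2';
console.assert(vm.canSubmit === true, 'valid form can submit');
```

The GUI layer is then reduced to mapping controls onto `email`, `password`, and `canSubmit`, which is the part that gets ignored for TDD.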
[+] [-] talles|10 years ago|reply
Gotta love Hickey.
[+] [-] nostrademons|10 years ago|reply
I've found it's pretty useful to go back and add tests as you accumulate users, though (or convince an exec that your project is Too Big To Fail in the corporate world). You're capturing your accumulated knowledge of the problem domain in executable form, so that when you try an alternate solution, you won't suddenly find out - the hard way - that it doesn't consider some case you fixed a couple years ago.
[+] [-] Illniyar|10 years ago|reply
It's an excellent process to create stable and maintainable code, but it does not fit every bill.
But abandoning it completely on the grounds that it sometimes makes you write "bad software" is a bit weird to me, in fact, one of the main arguments for TDD is that it makes you write better code.
I found that it does make you write better code many times. So like all things, use when appropriate.
I guess the real hard thing is to determine when using TDD is appropriate.
[+] [-] galaktor|10 years ago|reply
[+] [-] maxxxxx|10 years ago|reply
TDD is super useful for simple algorithms. It gets harder once you get into more complex scenarios like multiple components working together. If TDD is not useful for some cases then don't use it there and use it where applicable. Or think about how you can make it work. It's not that hard.
[+] [-] thearn4|10 years ago|reply
Your points are all good, but your opening one especially.
The obsession that some developers have with methodological purity can be really puzzling sometimes.
Where I'm at: I work in research and engineering (in the gov't, on the borderline between industry and academia), and I can say that doing any sort of testing at all (either up front or later on) is an improvement over the untested, undocumented, unversion-controlled status quo that exists in a lot of cases.
[+] [-] andrewfromx|10 years ago|reply
[+] [-] JulianMorrison|10 years ago|reply
Summary of how they work: you say "this program takes a 32 bit signed int and a string" and the testing framework will throw it a heap of random ints and strings, some of which match the sorts of classic curve balls it might encounter (negative numbers, int max and min values, strings that are very long, empty strings, strings with \0 in them, strings that don't parse as valid unicode, "Robert');DROP TABLE Students; --", and so on.)
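A toy sketch of such a generator (real property-based frameworks like QuickCheck also shrink failing inputs to minimal cases; the function names here are invented):

```javascript
// Toy input generators mixing classic edge cases with uniform random values.
const INT32_MIN = -2147483648;
const INT32_MAX = 2147483647;

function randomInt32() {
  const edges = [0, -1, 1, INT32_MIN, INT32_MAX];
  if (Math.random() < 0.3) return edges[Math.floor(Math.random() * edges.length)];
  return Math.floor(Math.random() * (INT32_MAX - INT32_MIN + 1)) + INT32_MIN;
}

function randomString() {
  const edges = ['', '\0', 'a'.repeat(10000), "Robert');DROP TABLE Students; --"];
  if (Math.random() < 0.3) return edges[Math.floor(Math.random() * edges.length)];
  return Math.random().toString(36).slice(2);
}

// A property under test: the function should never throw, whatever comes in.
function describeInput(n, s) {
  return `int=${n} len=${s.length}`;
}

// The framework hurls a heap of generated inputs at the property:
for (let i = 0; i < 1000; i++) {
  const out = describeInput(randomInt32(), randomString());
  console.assert(typeof out === 'string', 'property held');
}
```

The point is that you state the property once, and the framework supplies the curve balls you would forget to write by hand.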
[+] [-] neverminder|10 years ago|reply
[+] [-] collyw|10 years ago|reply
http://david.heinemeierhansson.com/2014/tdd-is-dead-long-liv...
[+] [-] Chris2048|10 years ago|reply
[+] [-] atemerev|10 years ago|reply
Tests are not about "code coverage", nor about establishing the exact sequence of things in stone. Tests are about fixing invariants.
When a new project starts, I only know about 10-15% things for sure, and those are exactly which will go in tests, before writing any new code. I don't worry about some things in my code are not yet covered by tests; I don't know yet how they will turn out, so I can't write any meaningful invariants.
In my experience, useful tests are much higher-level than the TDD guys prefer. They routinely fix invariants for the entire system / subproject, rather than assuring coverage of every method in a class (some crazy folks even test getters and setters — why?)
[+] [-] tragic|10 years ago|reply
Well, I can see the logic if your getters and setters are hiding more activity than simply retrieving/setting the value of a private field, which is the point of having separate getters/setters at all.
If you imagine a getFullName() / setFullName(name) pair, for example, that actually reads from/writes to two different private fields for first and last name (leaving aside middle names, internationalisation, etc), then there's some minimal logic there that you might want to test.
In a duck-typed language, when you're trying to ensure a class obeys an implicit interface, it may also have value.
Apart from that, for vanilla getters/setters, it's a little pointless.
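The getFullName/setFullName example above, sketched out; just enough logic hides behind the accessors to justify a test (with the naive splitting, middle names, and internationalisation caveats conceded above):

```javascript
// Hypothetical accessor pair that maps one "full name" onto two fields.
class Person {
  constructor() {
    this.firstName = '';
    this.lastName = '';
  }
  getFullName() {
    return `${this.firstName} ${this.lastName}`.trim();
  }
  setFullName(name) {
    // Naive split on the first space; everything after it is the last name.
    const [first, ...rest] = name.split(' ');
    this.firstName = first;
    this.lastName = rest.join(' ');
  }
}

const p = new Person();
p.setFullName('Ada Lovelace');
console.assert(p.firstName === 'Ada' && p.lastName === 'Lovelace');
console.assert(p.getFullName() === 'Ada Lovelace');
```

A vanilla getter that merely returns a private field would deserve none of this.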
[+] [-] penguat|10 years ago|reply
[+] [-] mbrock|10 years ago|reply
The way to make testing pleasant is to extract more and more behavior into units with clear boundaries... realizing how to do this was a major event in my programming career, and I attribute the insight partly to doing TDD.
I don't agree with some posters who say that TDD encourages overly generalized design. Sure, it encourages some form of dependency injection... but mostly, it just encourages the creation of coherent and loosely coupled units, which is a universally lauded best practice.
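A small sketch of what clear boundaries buy you in tests: the unit takes its collaborator as a parameter, so a test passes a stub instead of reaching for a mocking framework (`makeReceipt` and the clock parameter are invented for illustration).

```javascript
// The unit depends on an injected clock rather than calling Date.now itself,
// which is the loose coupling being described: the boundary is a parameter.
function makeReceipt(order, clock) {
  return { total: order.total, issuedAt: clock() };
}

// Production wiring passes the real clock:
//   makeReceipt(order, Date.now)
// The test passes a fixed one, so the result is fully deterministic:
const receipt = makeReceipt({ total: 30 }, () => 1700000000000);
console.assert(receipt.total === 30);
console.assert(receipt.issuedAt === 1700000000000);
```

This is the mild form of dependency injection mentioned above: no framework, just a function argument at a unit boundary.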
[+] [-] jamestenglish|10 years ago|reply
It is hard to say for sure, because the author doesn't give any specifics but wording like:
"You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests."
makes me think the author wasn't actually writing unit tests; instead they were likely end-to-end tests, or partial ETE tests that were running inside a unit test framework. I have had many disagreements with other developers: just because your test runs inside a unit testing framework doesn't actually make your test a "unit" test.
[+] [-] jon-wood|10 years ago|reply
[+] [-] jmathes|10 years ago|reply
It's true that it's harder to write TDD for code with side effects or that draws UI. It doesn't really make sense to use TDD for this.
You shouldn't conflate the two. Also, "always pass the majority of tests" is a trap. You should always pass all the tests.
Source: I've been managing and working in automated testing and continuous integration systems for 8 years, dating back to before the term was coined. I was the manager of the system, at IMVU, that coined the term "continuous integration". I've also worked on testing at Sauce Labs and Google.
[+] [-] burkestar|10 years ago|reply
[+] [-] nerdy|10 years ago|reply
And certainly TDD is harder as you approach the GUI; you want to test in vague ways which don't break with every change. If you thoroughly test all of the underlying behavior, implementing a GUI is typically incredibly trivial because everything beneath it is known (and proven) to work. Most of the article is not related to the GUI.
> ...it doesn’t work very well for a major class of program problems – those caused by unexpected data.
This is a hollow argument. Regardless of development methodology, if unexpected data isn't considered at all it could have all kinds of side effects.
Regarding conservatism with breaking tests (many tests failing for one change), it's likely the result of a structural problem within the application if it's an intimidating number of failures.
> It is easier to test some program designs than others. Sometimes, the best design is one that's hard to test, so you are more reluctant to take this approach because you know that you'll spend a lot more time designing and writing tests (which I, for one, find quite a boring thing to do)
Not sure how this applies to TDD: if you're writing tests first, you aren't deeply concerned with designing tests, because you're imagining what the interface for well-designed code would be and then writing it. It frequently sounds like the author jumps into writing tests without any forethought.
> In my experience, lots of program failures arise because the data being processed is not what’s expected by the programmer. It’s really hard to write ‘bad data’ tests that accurately reflect the real bad data you will have to process because you have to be a domain expert to understand the data.
If you don't understand the variety of inputs, how can you possibly validate them? Programmers should have some domain understanding, certainly program inputs fall within that realm.
> Think-first rather than test-first is the way to go.
I agree; but step 2 should be testing in my opinion. Test first is just the first tangible work product, it isn't a ban on thinking.