item 5331496

The Pragmatics of TDD

73 points | MattRogish | 13 years ago | blog.8thlight.com

79 comments

[+] NateDad|13 years ago|reply
I think the only reason TDD works as well as it does is because it forces you to _actually_ write the tests. Writing the tests after the code would be perfectly fine... except no one ever (hyperbole) writes tests after the code, because management/sales/support sees that the code works in the general case and insists you work on the next feature, and/or you get excited about another feature and don't want to write boring tests.

If you write the test first, that can't happen, because you haven't written the actual code yet.

[+] EvilTerran|13 years ago|reply
While that's undoubtedly part of it, I'd say there are definitely other advantages too.

For instance, as danso says at [http://news.ycombinator.com/item?id=5332448], it forces orthogonality, because you have to think how a function can be tested independently of the external objects that may use it; if you have to write your code with automated tests in mind, it's almost inevitably going to be more modular, because poor modularity turns writing those tests into a massive pain.

Also, the tests can act as something like a formal specification; in the process of writing tests, unspecified edge cases may become apparent that weren't noticeable in the natural-language spec.

[+] scott_w|13 years ago|reply
I think it's actually worse than that: people who write tests after code just write their tests to "verify" that the code is doing what the code is doing.

So, the code provides a list of numbers 1, 2, 5. Non-TDD would code a test that verifies the list is indeed 1, 2, 5.

However, the business wanted the list in reverse order: 5, 2, 1. TDD would force the developer to state this up-front. A non-TDD developer is more reluctant to change the code, since "it passes the test!"

I've done this, although I like to think I'm quite happy to rewrite something if it's not good enough (or just wrong).

[+] re_todd|13 years ago|reply
This certainly corresponds to me personally. I used to put off testing, thinking I'd have more time later, but then I'd add "just one more" feature before doing testing, over and over again. The list of tests I needed to write would grow until it seemed overwhelming, making me somewhat depressed thinking about all the testing I still needed to do. Since shifting to TDD, I get into the habit of writing tests before writing code, and the depressing test-related feelings are gone, which makes me a much happier programmer.
[+] philwelch|13 years ago|reply
If that works for you, it's fine. For me, sometimes I write tests first, sometimes last, but always in the same commit as the feature. If I'm working with someone, that's all I can really ask of them, and if you adopt the same practice you can protect your code from inept managers and salesmen.
[+] AlwaysBCoding|13 years ago|reply
What has always bothered me is that the ratio of people consistently talking/blogging/tweeting about how important TDD is to the number of actual resources that can help a junior developer learn and practice good testing habits is a million to one.

I truly believe that TDD works. I truly believe that the jury is already in and that anyone who is serious about becoming a software professional should write tests for every line of code. I really really believe it and want to use it.

That being said, it is so fucking hard to get started with TDD. Oh god, it's so difficult. I've done the katas, done an apprenticeship, read the RSpec book, watched every screencast I could find, everything you can reasonably ask a learning developer to do, and I still find it incredibly impractical to practice TDD when working on most projects, not because I don't want to, but because it's so difficult and time consuming and the resources just aren't there to help make my process quicker.

Here's an idea for the TDD crowd. Every time you're about to write a blog post about why people should use TDD, instead write a blog post about a situation where you applied TDD, the tests you wrote, and the code it led to. We need more examples of TDD in progress, more code snippets, more screencasts. I'm telling you, the problem is that the resources just aren't there to encourage these habits. Instead of continuing to have this debate at a semantic level, if there were more testing resources available I think people would naturally flock to them and TDD would win out. Until then, I think it's Twitter fights and bad habits for the foreseeable future.

*This comment applies verbatim to security best practices as well.

[+] jdlshore|13 years ago|reply
Shameless plug, since you asked for it: Let's Code Test-Driven JavaScript is an extensive and in-depth screencast series about doing TDD in practice. I promise you've never seen a TDD screencast that goes this deep.

http://www.letscodejavascript.com/

And if Java and Swing are more your thing, Let's Play TDD is its less-polished progenitor. http://www.jamesshore.com/Blog/Lets-Play/

You're right, by the way. It's much harder to do TDD for real than it is to do all those toy problems that involve maybe one class, some calculations, and nothing else. That's why I created the screencasts.

[+] MartinCron|13 years ago|reply
I put together a "getting started with pragmatic test automation" training seminar a few years ago that was based exclusively around real-world examples from my day-to-day life writing real production code. I thought it was good, and got some good feedback.

I'll see if I can dig that up and translate that in a way that's actually useful to an asynchronous audience.

[+] joshuacc|13 years ago|reply
Agreed. This has been the hardest part of learning TDD for me. I'm working on a presentation on TDD in JS which is probably a little simplistic, but goes through each and every step, explaining as I go along.
[+] DanielBMarkham|13 years ago|reply
I interviewed Uncle Bob on Monday (shameless plug: http://tiny-giant-books.com/blog/robert-uncle-bob-martin-int...). It was mostly biographical stuff, but I did cover functional programming. There was a bunch of technical stuff I left off because of time constraints.

The topic I really wanted to cover but didn't was TDD in startups. I have a simple belief: the value of your code debt can never exceed the value of your code. That is, if your code has no monetary value, it is impossible for you to have any code debt, no matter who you are, what your code does, or what the code looks like. Think about it. It makes sense.

It's interesting that Bob took a "saw the baby in half" approach here, outlining the various things he'd throw away and the various things he'd keep. While I think there are definitely shades of gray, it would also be useful for him to directly address the question of code that has no value. If I write a function that I save on my hard drive and never use, does it need a test? I believe the ludicrously obvious answer is "no", but I haven't heard him say that yet.

[+] surbas|13 years ago|reply
He did mention, in his second-to-last point, that he doesn't write tests for throwaway code: "A few months ago I wrote an entire 100 line program without any tests. (GASP!)"
[+] swanson|13 years ago|reply
Write a blog post about a concept or idea in the general sense => "I need specific examples or this is just a religion/consultant-speak".

Write a blog post about specific examples => "It might work for this example, but in my experience it didn't work in this other case."

If you write in the abstract, people will dismiss it as fluff. If you write concretely, people will dismiss it because it doesn't cover their exact case.

There is no way to please everyone in the world of opinionated software blogging, so stop trying.

[+] jiggy2011|13 years ago|reply
If you really want to convince people you need to talk in concrete, not abstract. But you need a large number of examples and it should come from a neutral third party.
[+] danso|13 years ago|reply
I've only recently started to use TDD, the biggest roadblock being the annoying steps it takes to set up the suite, directory structure (ok, that's pretty minor), and then properly use mocks and stubs. I sometimes forego the latter part.

I'm new to it but I find it an incredibly useful strategy. It forces orthogonality on me because I have to think how a function can be tested independently of the external objects that may use it...which causes me to challenge my initial assumptions and instincts about the overall application design.

In some sense, I guess, it is always frustrating to spend more time in the design stage than working with an actual prototype...but I find the medium-to-long term benefits to far outweigh the initial investment in time. And once the tests have been written, the actual functional code is almost trivial to write.

Even without the benefits TDD has in easing the maintenance/upgrade phase of a product, I find its effects on the design/prototype stage to be worth the effort alone.

[+] glenjamin|13 years ago|reply
Ideally mocks and stubs should be used only when the thing you're mocking/stubbing is either slow or complex.

In general, just call the real thing and don't worry about it. Otherwise you'll expend loads of time and energy setting up fakes to test relatively simple code.
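As a sketch of that trade-off (the `TaxTable` and `PriceFormatter` classes here are invented for illustration): when the collaborator is plain, fast, deterministic code, testing against the real thing is less work than maintaining a fake.

```python
# Hypothetical collaborators, made up for this example.
class TaxTable:
    """Simple, fast, deterministic -- no reason to stub it."""
    def rate(self, region):
        return {"UK": 0.20, "DE": 0.19}.get(region, 0.0)

class PriceFormatter:
    def __init__(self, taxes):
        self.taxes = taxes

    def with_tax(self, net, region):
        return round(net * (1 + self.taxes.rate(region)), 2)

# Test against the real TaxTable: no fake to build or keep in sync.
def test_with_tax_uses_regional_rate():
    formatter = PriceFormatter(TaxTable())
    assert formatter.with_tax(100.0, "UK") == 120.0
```

A stub would only earn its keep here if `TaxTable` hit a database or a network service.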

[+] jfabre|13 years ago|reply
I've seen big company devs so overwhelmed by messy production code that any small feature change would require 2 weeks of work.

I've seen startup devs who couldn't cope with changing requirements fast enough. There were just too many moving parts at the same time.

Hell, I've been one of those devs! Great code is the exception, not the norm.

When I learned TDD, it greatly improved the quality of my code. If TDD is a bottleneck for you, maybe you need to learn to touch type.

My 2 cents.

[+] Glide|13 years ago|reply
The process of learning to TDD well makes a person code better, regardless of actually doing it.

TDD is one of the few actual disciplines I know of around coding that can't just be glossed over or faked. One either does or does not. This is doubly true in a pair programming environment. Not using one of the established patterns? One can argue around that. Not doing TDD? Justification usually has to tie in with the UI in some way.

[+] pnathan|13 years ago|reply
I just cranked out a pile of code under pressure with a tight time constraint. I didn't TDD. What I did was co-design my tests and code: I rotated between the test and the code so that when I finished a particular function, it was reasonably tested.

While not, NOT a TDD approach, it did lead to a much higher code coverage and added a safety net when I had to change it: I knew I could change the code, and if the result was materially affected, I'd know.

It did slow me down in the initial writing, but in the revisions, it sped me up, IMO.

[+] tmoertel|13 years ago|reply
What did you do before you learned TDD? Did you write automated tests? To what degree?
[+] plinkplonk|13 years ago|reply
I'm surprised at how much attention this bit of process dogma is getting on HN.

This particular argument about the supposed efficacy of TDD depends on people accepting the equivalence of the efficacy of TDD with the efficacy of surgeons washing hands, just because the blog author says so.

Oh horror, how can you challenge my dogma just because I call it a 'discipline' and verbally equate it to surgeons washing hands.

Saying something is true doesn't prove it is true whether you call something a 'discipline' or not. Cults are full of such 'disciplines'. Cult members will vouch for them. A better word is 'ritual'. At best such rituals are cargo cult practices. [1]

Surgeons washing hands is a practice with empirical, unchallengeable scientific evidence supporting it, while TDD is a dogmatic practice evangelized by software process zealots, with next to no scientific evidence backing it up.

Also TDD != testing and TDD != automated testing (though the evangelizers tend to blur the differences: it is easier to argue that you should write tests than that you should write tests before you write the code, which is a somewhat shakier assertion; if you can paint your opponents as opposing tests (vs TDD) you have set up an easily knocked-down strawman).

Programmers have written tests, including automated regression tests for decades without the blind adherence to the 'write a test first, write the code, refactor, repeat' cycle that TDD consists of.

Insisting on this as some kind of moral imperative [2] is snake oil, and it is natural that experienced devs push back against religious preaching.

The best 'poke holes in the zealotry gently but firmly' writing wrt TDD is at http://www.dalkescientific.com/writings/diary/archive/2009/1... . Bob Martin's TDD 'kata' is dealt with there in some detail. The comment thread at http://dalkescientific.blogspot.in/2009/12/problems-with-tdd... is hilarious too, with some familiar names popping up.

[1] http://en.wikipedia.org/wiki/Cargo_cult

[2] The author says as much here http://news.ycombinator.com/item?id=5331108

" [TDD] allowed us to go fast, and _keep_ going fast because the code stayed clean. I have come to view it as a moral imperative. No project team should ever lose control of their code; and any slowdown represents that loss of control."

[+] doktrin|13 years ago|reply
This reaction is a bit visceral, and unnecessarily so. The "cult" and "dogma" accusations sound fairly outlandish, to be honest.

Part of the reason experienced devs push back against TDD has to do with patterns and habits. It's safe to say that TDD feels awkward to anyone with other established work patterns.

TDD is of course simply a methodology. It works well for some teams, and perhaps not for others. What it does do is impose a culture of testing and rigorousness, which is rarely a negative.

Sure, you can decouple testing from TDD, but IME teams that apply TDD have more extensive coverage than those who don't. Causation correlation caveats apply.

I personally dislike TDD because it disrupts the thought pattern I have become accustomed to using when developing software. I consider myself relatively junior, so I can only imagine how much of a workflow departure it must be for more senior engineers. However, just because it doesn't work optimally for me (or, optimally at first) doesn't invalidate the approach nor make anyone who uses it some brainwashed cult member.

[+] d4vlx|13 years ago|reply
I interpret Bob's post as saying that having automated tests is critical, not that TDD is the one and only way to test. I know he uses the acronym TDD but his logic all points toward the value of automated tests and he did not mention that TDD is the only way to do it.

When looked at in this light his advice is spot on for me. The kind of projects I work on would quickly get bogged down and have high failure chances without automated tests. When my current employer was in the startup phase, they ignored testing, were making tons of money, hired tons of devs, and wrote tons of code. At least, they wrote tons of code for the first couple of years; then they hit a wall where it became incredibly difficult to add new functionality. Now that they have been writing automated tests for 5 years, new functionality is comparatively easy to add to anything but that original project (which is still running and making money, but is very hard to change). This is server-side financial processing, user management, business logic code.

[+] unclebobmartin|13 years ago|reply
Keep in mind that surgeons had plenty of empirical evidence that hand washing saved lives; but refused to adopt the discipline for several decades claiming, among other things, that doctors did not have time to keep washing their hands.
[+] MartinCron|13 years ago|reply
> equivalence of the efficacy of TDD with the efficacy of surgeons washing hands

It's a metaphor, and as such, it's imperfect. Let's give people the benefit of the doubt in their ability to understand that.

[+] tieTYT|13 years ago|reply
"I don't write tests for one line functions or functions that are obviously trivial. Again, they'll be tested indirectly."

The funny thing about this is when I read this bullet point I thought, "Bob Martin would disagree with this". His rules for TDD ( http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd ) would force that method to be tested whether he wants to or not. Then I scroll down to see who wrote the article... Bob Martin

EDIT: But maybe he meant to emphasize "indirectly". Maybe it was under test at first but then got extracted into a simple method under refactoring.

[+] unclebobmartin|13 years ago|reply
Your edit is one of the reasons. However, writing two or three functions in order to get a test to pass is something I frequently do. The functions are small, and usually have very little implementation at first; but I don't follow the "One Function - One Test" rule.

The rule I follow is: Every line of code you write is to make a failing test pass. If that means writing three new functions, so be it.
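A tiny sketch of that rule (the names below are invented, not from the thread): one failing test can legitimately pull two or three small functions into existence, none of which gets a dedicated test of its own.

```python
# One failing test...
def test_summary_line():
    assert summary("alice", [3, 4, 5]) == "alice: 3 items, total 12"

# ...can be made to pass by writing several small functions at once.
# Each exists only to make the test above go green; none needs its
# own "One Function - One Test" counterpart.
def total(items):
    return sum(items)

def label(name):
    return f"{name}:"

def summary(name, items):
    return f"{label(name)} {len(items)} items, total {total(items)}"
```

The helpers are still covered, just indirectly, through the test that forced them into being.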

[+] ollysb|13 years ago|reply
I only test public methods. These are often composed of private methods, which I might pull out of a method once I get to green and I'm working on readability. A particularly common source of one-line methods is extracting query methods (replacing a calculated variable with an inline call to the new method). A nice side effect of only testing public methods is that if you find you don't have test coverage on the private methods, you know you can delete them (because the public methods that used to use them have all been deleted).
[+] eaurouge|13 years ago|reply
For me the greatest benefit of TDD is that I can first specify how a unit of code is expected (by me the implementer) to behave. Then I write the code that fulfills (just) those specs, without writing any unnecessary lines of code. If you've read the specs then you know how the code behaves. Yes, you can write the code first then the tests, and I do and have done that. But there's something to be said for letting the specifications alone determine what makes it into code. Of course, you need to know the specifications for this to work, which isn't always the case.
[+] cshipley|13 years ago|reply
It looks to me like another blog posting by a TDD evangelist, most of which I ignore because he ain't preaching my religion. He did, however, touch on the important part of the pragmatic vs dogmatic question. This jumped out at me:

> In general I don't write tests for any code that I have to "fiddle" into place by trial and error.

It is congruent with some general rules I follow to decide if I should write a test for something:

1) Do I care if this code doesn't work on the non happy-path? Maybe I'm writing some isolated prototype code that will probably be thrown away, or perhaps I'm planning to rewrite it later. I'm not going to bother writing tests.

2) How important is it that this code is bug-free, and how quickly would I find out if there's a bug? If, say, the code is in a highly used part of the program that is required for the rest of the app to function properly, or is part of the main feature set, I will definitely write tests.

3) Am I under time constraints? If I don't have the time to write tests for a particular part of the code, then I don't have the time.

4) How solid is the architecture/interfaces? If I expect them to change quite a bit, then I will not write so many tests, or perhaps any. I once worked on a project that was heavy into unit testing/TDD. There were hundreds of tests written very early on in the project, and since the code was changing so much, we spent a lot of time rewriting tests. It eventually got to be a huge time sink.

5) How much money does the project have? Writing lots of tests takes time, and time is money. I've worked on some projects that have budget (or time) constraints, so I don't have the luxury of such dogma.

All that said, I often dislike programming religions or dogma, because they often advocate following a practice somewhat blindly, without understanding when the rules/precepts should be applied.

[+] jbrains|13 years ago|reply
TDD is a fundamental learning technique. It teaches the principles of modular design. Notice! One can learn modular design in a variety of ways. I make no claim that TDD is "the only" nor "the best" of these, but I claim that it works for enough people to merit attention.

Learning requires investment. Investment carries risk. Risk aversion/tolerance is a very personal and contextual thing. There's almost no point arguing about when it's good to be risk averse and when it's good to be risk tolerant, because of this heavy coupling to the context. Better to be aware of the phenomenon and work things out case by case.

Some people generally don't like to learn. Nothing you do will force them to like to learn. You can invite them to try to learn; you can try to make it comfortable and safe for them. That might work.

Some people find such value in a learning technique that they continue to use it, even after learning 99% of what they will ever learn from it. Continuing to use the technique provides them comfort. Whatever works. Others eventually break free of the learning technique, knowing that they can fall back on it when they feel pressure.

I care about this: people who want to practise TDD should be free to do it; people who don't want to practise TDD should not be forced to do it. Everything else is noise.

[+] jbaudanza|13 years ago|reply
> I usually don't write tests for frameworks, databases, web-servers, or other third-party software that is supposed to work. I mock these things out, and test my code, not theirs.

If your software has a dependency on a third party component, then you should include that component in your tests. It's not about testing the component, it's about testing your integration with that component. For example, if you upgrade that component, and the API changes in a way that breaks your integration, you want your test suite to break as well.

Sometimes if a component is too slow or requires network access, you have to mock it out. But as a general rule, it's best to leave your dependencies in place.
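As a sketch of that idea, using Python's built-in `sqlite3` as the stand-in "third-party component" (the `UserStore` class is hypothetical): the test exercises the real library, so a library upgrade that breaks our usage of its API breaks the test too.

```python
import sqlite3

class UserStore:
    """Hypothetical repository wrapping a third-party component (sqlite3)."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def names(self):
        return [row[0] for row in self.conn.execute("SELECT name FROM users")]

# Integration test: goes through the real library, not a mock of it.
# An in-memory database keeps it fast enough to run on every commit.
def test_roundtrip_through_real_database():
    store = UserStore(sqlite3.connect(":memory:"))
    store.add("ada")
    assert store.names() == ["ada"]
```

If the dependency were a remote service instead, this is exactly the point where a mock starts paying for itself.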

[+] azurelogic|13 years ago|reply
This gets into the differences between unit testing and integration testing. Sometimes it is worth just making sure that your repository class can actually insert, find, and delete data. It's just a different part of the "testing pyramid" (http://watirmelon.com/tag/software-testing-pyramid/)
[+] ternaryoperator|13 years ago|reply
My greatest reservation about TDD is one that's almost never referred to by its practitioners: the need for strong refactoring skills. To do TDD right, you've got to be really good at refactoring your code. But many developers know only basic refactoring techniques. So if they do TDD, they end up generating code that looks like it was written to satisfy lots of small requirements, and it lacks the cohesion and clarity that it should have.

I think TDD is taught the wrong way around. First, they should teach refactoring. Only when those skills are thoroughly mastered should they move on to teaching TDD.

[+] njharman|13 years ago|reply
> I don't write tests for getters and setters.
> I don't write tests for member variables.
> I don't write tests for one line functions or functions that are obviously trivial.

I don't disagree, but I often have at least one test that explicitly exercises all of the public interface (of a class/module/whatever). The point is when I change that interface I want tests to break and not have to rely on my memory on what changed when writing release notes / incrementing version number. I mostly test python, YMMV.
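A minimal version of that habit in Python (the `RateLimiter` class is invented for illustration): one deliberately shallow test that touches every public name, so a renamed or removed method fails loudly instead of slipping past memory at release time.

```python
class RateLimiter:
    """Hypothetical class; only its public surface matters here."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        self.count += 1
        return self.count <= self.limit

    def reset(self):
        self.count = 0

# Exercises the whole public interface: constructor, allow(), reset().
# Its job is to break on any interface change, not to probe edge cases.
def test_public_interface_is_stable():
    limiter = RateLimiter(limit=1)
    assert limiter.allow() is True
    assert limiter.allow() is False
    limiter.reset()
    assert limiter.allow() is True
```

In a dynamically typed language this doubles as the compile-time check that static languages get for free, which is njharman's point.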

[+] jdlshore|13 years ago|reply
Those tests are less necessary in languages with static typing, which is Bob Martin's background.
[+] unclebobmartin|13 years ago|reply
What I find fascinating in all this is the sheer amplitude of the invectives. Apparently TDD pushes some people's buttons. I think that's a good thing.
[+] codeulike|13 years ago|reply
The measure is: Find a startup that uses TDD religiously, find one that just uses it when it suits them, find one that doesn't use TDD at all. Fix all other variables. See which startup does the best. This is a hard experiment to do, but if anyone wants to offer me a grant for the research, get in touch. Thanks.
[+] doktrin|13 years ago|reply
Too many variables, IMHO (dev experience, team size, technology stack, etc.). Startups in particular are inherently volatile places, and basing such a study on them is fraught with problems.

Most notably, the measure of "success" in startup-land is not irrevocably tied to code quality. Plenty of projects have succeeded despite their engineering and not because of it.

[+] TillE|13 years ago|reply
> Fix all other variables.

That's literally impossible to do in a straight one to one comparison. If nothing else, you have different people working at each one.

A broader study might get enough data to make meaningful comparisons, but I think strict TDD is too rare to get more than a few samples. It's a tough problem. I think you'd have to set up a completely artificial environment if you truly want to measure the relative efficiency of TDD.

[+] jdlshore|13 years ago|reply
The cost/value tradeoff of TDD keeps coming up. My comments on this last time were well-received. The question was "Should I TDD an MVP?" but the answer is really appropriate to any question of when and whether TDD is worth it:

This is a really good and interesting question, and it's one I've been struggling with myself.

The problem boils down to this: TDD makes your software more maintainable (if you do it well) and it lowers your cost of development. However, it also takes significant time and effort to figure out how to test-drive a technology for the first time. Everybody can TDD a Stack class; TDD'ing a database, or a web server, or JavaScript [0] is a lot harder.
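The "Stack class" toy problem mentioned above, sketched test-first (purely illustrative): the tests are written before the class, and the class contains just enough to make them pass.

```python
# Tests written first, stating the behaviour we want:
def test_new_stack_is_empty():
    assert Stack().empty()

def test_push_then_pop_returns_item():
    s = Stack()
    s.push(42)
    assert s.pop() == 42
    assert s.empty()

# Just enough implementation to go green:
class Stack:
    def __init__(self):
        self._items = []

    def empty(self):
        return not self._items

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```

The point of the comment stands: this is easy precisely because there are no databases, servers, or UI frameworks anywhere in sight.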

So the answer seems simple: use TDD for the parts you already know how to TDD.

But it's not so simple! It's much harder to add tests to existing code than it is to TDD it from scratch. Sometimes, it's flat-out impossible. The expense is so high, there's a very good chance that you'll never get around to adding tests to the un-TDD'd code. It will hang around causing bugs, preventing refactoring, and sapping your agility forever, or until you rewrite... and a rewrite of any significance will halt your company in its tracks, so you won't do that.

So the reality is that, anything you don't TDD from the beginning, you'll probably never be able to TDD. Companies that go down this road find themselves doing a major rewrite several years down the road, and that's crippling [1].

There's another wrinkle on top of this: manually testing code and fixing bugs is expensive. Once your codebase gets above a certain size--about six developer-weeks of effort, let's say--the cost to manually test everything exceeds the cost to TDD it. (The six weeks number is a guess. Some people argue it's less than that.)

So the real answer is a bit more nuanced:

1. If your MVP is truly a throw-away product that will take less than six weeks to build and deploy and you'll never build on it after that, use TDD only where it makes you immediately faster.

2. If your MVP is the basis of a long-lived product, use TDD for the parts you know how to TDD and don't do the parts you don't know how to TDD. Be creative about cutting scope. If you must do something you don't know how to TDD, figure it out and TDD it.

3. It's okay to be a bit sloppy about TDD'ing the edges of your app that are easily rewritten or isolated in modules. But be very careful about the core of your system.

That's my opinion based on 13 years of doing this stuff, including building five successively-less-minimal MVPs over the last nine months for my JS screencast. The first three MVPs were zero coding, the fourth was a throw-away site, and the fifth was TDD'd with aggressive scope cutting to minimize the number of technologies that had to be TDD'd.

[0] Shameless plug: I have a screencast on TDD'ing JavaScript. http://www.letscodejavascript.com

[1] Rewrites are crippling: See Joel Spolsky's "Things You Should Never Do, Part I." http://www.joelonsoftware.com/articles/fog0000000069.html (There is no Part II, by the way.)