
Test-Driven Development Is Fundamentally Wrong

43 points | psalminen | 6 years ago | hackernoon.com

28 comments

[+] rubyn00bie|6 years ago|reply
Like all ideologies, TDD has holes, and cannot be perfectly applied to the real world.

I think the author is doing exactly what TDD (or BDD) is fundamentally trying to get people to do by writing such detailed requirements and specs: think about shit before they write the code.

When I take the time to properly spec something out, down to the interface and its calls, implementing it is almost always a cakewalk; the same goes for using TDD to describe the interface. But! I can only do these things if I truly understand the problem and its domain. If I have unknown unknowns, my spec and tests will be wrong. C'est la vie.

Is TDD a panacea for software development problems? Nope. Does it help? It sure can.

Personally, I start writing something using BDD until it's hobbling along (~40% done), and then I switch to TDD for the rest of it since I think it allows me to write correct software faster.

[+] deeviant|6 years ago|reply
> I think the the author is doing what TDD (or BDD) is fundamentally trying to get people to do, by writing such detailed requirements and specs: think about shit before they write the code.

I hear this a lot, but I wonder if the people writing or saying it have ever considered that TDD does not have a monopoly on "thinking about shit". Back in the day, we used to create a blueprint before we manufactured the metaphorical plane our software was constructing. We had design reviews, not just code reviews; we thought about shit far more than is commonly done today in the Agile software reality we live in.

I also find it ironic that many of the developers/managers I have worked with who most ardently supported TDD also tended to ardently support Agile/scrum, which I find to be polar opposites. Agile development, to me, is the "fuck it, we're doing it live" methodology, where planning and other sorts of "thinking about shit" are dogmatically attacked ("don't go chasing waterfalls, take a bite and make some progress", etc.), only to be invariably followed 6-12 months later by a retrospective bullet point that reads something like "X didn't take Y into consideration, causing considerable delay/trouble/bad-shit with Z".

[+] DerpyBaby123|6 years ago|reply
>when the tests all pass, you’re done

>Every TDD advocate I have ever met has repeated this verbatim, with the same hollow-eyed conviction.

My experience has been much different, in that I've never heard this mantra. I have heard for years 'Red, green, refactor'.
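For anyone unfamiliar, the 'Red, green, refactor' cycle can be sketched like this (a minimal illustration with made-up function names, not anything from the article):

```python
# Red: write a small failing test first.
# Green: write the simplest code that makes it pass.
# Refactor: clean up while the tests stay green.

def slugify(title):
    # "Green" step: simplest implementation that satisfies the tests below.
    return title.strip().lower().replace(" ", "-")

def test_lowercases_and_hyphenates():
    # This test existed (and failed) before slugify() did.
    assert slugify("Hello World") == "hello-world"

def test_strips_surrounding_whitespace():
    # Next "red" step: one new failing case, then make it pass, then refactor.
    assert slugify("  Hello World  ") == "hello-world"

test_lowercases_and_hyphenates()
test_strips_surrounding_whitespace()
print("all green")
```

The point of the mantra is the tight loop, not "write all the tests up front".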

I question what it is the author is railing against, as it doesn't seem to be the TDD that I'm familiar with.

[+] Driky|6 years ago|reply
I would laugh if it weren't so crazy that someone who doesn't know how to do TDD is writing an article to say it sucks. EDIT: after reading the blog post a second time, I suspect the author doesn't know how to write more classical unit tests either.
[+] meowface|6 years ago|reply
I completely agree with the author, and have seen more and more anti-TDD sentiment in recent years. Certainly nowhere near all, but some TDD adherents seem to have a bit of a cargo cult mentality regarding it.

At the end of the day, I think whatever you're personally most productive with is what you should use. If it's TDD, use it. If it isn't, don't. Kind of like the saying "the best diet is the one you stick to". Maybe you could be more productive switching to it, or away from it, so it's worth evaluating alternatives carefully, but in general I think whatever works for you is fine to use.

[+] gmiller123456|6 years ago|reply
What is it the author said that makes you think he didn't know how to do TDD? Or write unit tests?
[+] bbody|6 years ago|reply
I think TDD is overhyped; many people seem to treat it as a silver bullet. That being said, I'm not sure it's fair to say it is fundamentally wrong. As with many "silver bullets", it has its place. I've found it particularly useful when I'm writing a complicated function: it forces me to focus on inputs and expected outputs, and I code to that specification. As for a changing specification, that is a problem regardless of when or by whom the tests are written; it is a part of life.
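A sketch of what "focus on inputs and expected outputs, then code to that specification" can look like (hypothetical function, not from the comment): the assertions are written down first and act as the spec the implementation has to satisfy.

```python
import re

def parse_duration(text):
    """Parse strings like '1h30m' or '45m' into total minutes."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", text)
    if match is None or text == "":
        raise ValueError(f"bad duration: {text!r}")
    hours = int(match.group(1) or 0)
    minutes = int(match.group(2) or 0)
    return hours * 60 + minutes

# The input/output "specification", written before the implementation:
assert parse_duration("1h30m") == 90
assert parse_duration("45m") == 45
assert parse_duration("2h") == 120
```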
[+] Udik|6 years ago|reply
Of course, if both your input and output assertions are written in stone from the beginning, and you're writing a single piece of code transforming an input into the output, then why not. But this is hardly the general case.

The general case is more that you'll discover both your requirements and your solution while coding, many times over. Writing a test that is tightly coupled to a solution you might discard an hour, a day, or a week later is pretty pointless.

On the other hand, I can understand that it's a good practice, while coding, to keep asking yourself "how will I test this piece of code?", as it enforces a decent architecture of well-isolated parts.
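A small example of how that question shapes design (illustrative names, my own): code that reaches for the wall clock internally is hard to test, so asking "how will I test this?" pushes the dependency out to a parameter.

```python
import time

def is_expired(created_at, ttl_seconds, now=None):
    # Injecting `now` isolates the pure expiry logic from the wall clock;
    # production callers omit it, tests pass a fixed value.
    if now is None:
        now = time.time()
    return now - created_at > ttl_seconds

# Deterministic tests, no real clock involved:
assert is_expired(created_at=100, ttl_seconds=50, now=200) is True
assert is_expired(created_at=100, ttl_seconds=50, now=120) is False
```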

[+] al2o3cr|6 years ago|reply

    With this approach I write the tests after the odyssey of
    discovery, so the tests are only written to the final design
Or if your manager tells you there's another DOUBLE SUPER IMPORTANT TOP PRIORITY thing to do, the tests are written never.

Strict TDD is a technological solution to a management problem.

[+] mv1|6 years ago|reply
More generally, waiting until the end to write tests is a great way to get poor code coverage and test cases that are very hard to debug. Writing unit tests as you go along is the way to go. If you must, reserve system testing until the end.
[+] coorasse2|6 years ago|reply
This article is so full of bullshit that listing all the wrong things that Chris Fox wrote would make an even longer article. And no, this time I'll not start making such a list, because it would be a complete waste of time. This guy is completely ignorant and a very bad developer. Read books before you start writing such shit.
[+] jdlshore|6 years ago|reply
Back in 2005, Microsoft published an article about TDD that was wrong. Not just a little bit wrong, completely and utterly wrong. I wrote about it at the time:

https://www.jamesshore.com/Blog/Microsoft-Gets-TDD-Completel...

The authors of that article described TDD the same way the OP's polemic does: 1) write your tests 2) implement the tests.

But that's not how TDD works.

Every complaint the author has stems from this misunderstanding.

If you're interested in how TDD and related practices actually work, my talk from last month's Pacific Northwest Software Quality Conference has been getting a lot of praise on Twitter. The whole thing's worth watching, but the TDD-specific part starts at 15:21.

Whole video: https://www.youtube.com/watch?v=_Dv4M39Arec

TDD part: https://youtu.be/_Dv4M39Arec?t=921

[+] zestyping|6 years ago|reply
I watched the video segment. I really appreciated the presentation style and visuals—the explanation of your procedure is very clear.

But I'm having trouble understanding how this works in the real world.

In your example, the thing that took 62 seconds to build and test four times is "invoke an empty constructor in another file". That is the sort of thing that I think of as a single task, perhaps taking 10 to 15 seconds. Dividing it into four tiny tasks would only generate work for me; testing it four times would provide no benefit because the task is so simple. The example feels to me like a toy example.

I'm having difficulty seeing how to extend this technique to non-trivial tasks. The moment I do "real work" (e.g. match a string against a regular expression), writing a series of tests that verifies enough cases to establish correctness does not take 10 seconds; it can take 2 or 5 or 20 minutes.

And that's where the author's complaint starts to make sense. It may be that when I write the code, the requirements I have in mind are underspecified or incorrect (e.g. I don't yet know whether I need whitespace to be significant because I haven't designed the rest of the program yet, so I plan to write the regular expression without allowing extra whitespace).

This is where I get stuck. In situations like this:

(a) If I write tests that verify only the requirements that I am absolutely certain will not change, then I risk ending up with a program that has lots of incomplete tests and bugs going undetected.

(b) If I write a test that completely verifies the behaviour of the code I'm about to write, then I risk ending up with tests that overconstrain or incorrectly constrain the code, so I get the problem the author described: as I'm building the rest of the program, I realize that I need to make adjustments (e.g. it becomes clear that I should ignore extra whitespace), which means I now need to go back and change the test as well as my code, and repeat.

It's not possible for the requirements to always be 100% complete and perfectly correct in my mind in advance. The type of situation the author is describing happens all the time because the process of constructing the program is a significant part of how the requirements become clear. This is what the author is getting at, I think.
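To make scenario (b) concrete (a contrived example of mine, echoing the whitespace case above): the first-cut code and its test jointly pin down a whitespace decision that hasn't actually been made yet, so both must change together later.

```python
import re

# First cut: no extra whitespace allowed, because the rest of the
# program doesn't exist yet and the requirement is still open.
KEY_VALUE = re.compile(r"(\w+)=(\w+)")

def parse_pair(text):
    match = KEY_VALUE.fullmatch(text)
    if match is None:
        raise ValueError(f"bad pair: {text!r}")
    return match.group(1), match.group(2)

# This early test over-constrains the behaviour: once it becomes clear
# that "a = 1" should also parse, the regex AND this test both have to
# be revisited, which is exactly the churn described above.
assert parse_pair("a=1") == ("a", "1")
```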

Have I deeply misunderstood TDD?

[+] dhagz|6 years ago|reply
The only reason I like writing tests before code-complete is that I'm then less likely to write my tests to the code. But really, that just amounts to defining the functionality of the app beforehand, by way of unit/system tests rather than some design document.
[+] agsilvio|6 years ago|reply
I think TDD is fundamentally appropriate (and a blessing). I see it as generating proofs for claims that your software does X,Y,Z. This is invaluable to me and has given me confidence in rollouts to production.