Personally, I completely disagree with this. I've never found myself randomly changing code in a desperate attempt to get a test to pass.
Maybe it's because I'd been coding for years before I ever tried TDD, but when a test fails, I logically debug the code the same way I would if I wasn't using TDD.
As far as I'm concerned, having tests just flags possible errors much quicker, and also gives me more peace of mind that my code isn't gonna be riddled with hidden bugs.
An often touted "benefit" of TDD is that "addictive" feeling when you write tests and see them pass. "you feel like you have done a lot because you have a lot of code"; "you feel a great deal of accomplishment". Quite a few pages talk about it when you search for "tdd addictive".
I also disagree with this article. I think the point of TDD is not to blindly mash at the keyboard to make tests pass; TDD encourages you to go back and refactor code and tests after you get them passing. TDD gives you confidence that the functionality you developed yesterday doesn't break when you add more functionality today.
TDD isn't a be-all and end-all; it's just one more tool in a developer's toolbox that allows us to be better at our jobs. If you solely rely on TDD or [insert newest popular development technique] you are going to have a bad time.
One place that I've found TDD to be insanely helpful is exposing the interaction between pieces of the system before building it out in code. I tend to write my tests from the bottom up: I write the assertion first, and then build up the stuff I need to test that assertion. This makes it easier to see what's needed to test the functionality and whether or not the test looks good.
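A hypothetical sketch of that bottom-up style (the cart example and all names are mine, not from the thread): the assertion is written first, and the setup is discovered by working backwards from it.

```python
# Toy example of assertion-first test writing. The assert line was
# written first; make_cart/add_item were then written to satisfy it.

def make_cart():
    # Minimal setup the assertion turned out to need.
    return {"items": [], "total": 0}

def add_item(cart, name, price):
    cart["items"].append(name)
    cart["total"] += price

def test_total_updates_when_item_added():
    cart = make_cart()          # 3. ...which needs a cart to act on
    add_item(cart, "book", 10)  # 2. ...which needs an action...
    assert cart["total"] == 10  # 1. written first: the behaviour I want

test_total_updates_when_item_added()
```

Writing the assert before the setup makes it obvious which collaborators the test actually forces into existence.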
Gary Bernhardt does a really good job at explaining his philosophy on TDD, which I really agree with, http://destroyallsoftware.com
The other thing to consider is that if you don't know what changed between a code change and a test run, you aren't running your tests often enough. When you make a change, you test that change. That change should be small enough that it makes it clear where the problem lies.
I believe it boils down to a matter of discipline.
If you say that you never randomly modify code in order to make it work (according to the tests), then that is great, and that is how every developer should work. However, my experience is that people who lack discipline (or face a looming deadline) tend to not take their time to properly reason about code and instead rely on the test suite to tell them whether code is right or not.
The author made one slight mistake: he wrote "there is a tendency to mindlessly modify code" instead of "I have a tendency to mindlessly modify code".
Also, it's not like we haven't seen this kind of behavior for decades before the invention of TDD.
This is just another example of a craftsman blaming his tools. TDD is not a silver bullet, but no method or tool can serve as an excuse for mindlessly poking around until it works. This isn't limited to programming either.
If only it were in the past. I've seen this behavior with coworkers: changing random bits of the code without any coherent system to speak of, rerunning the application from scratch, and manually testing whether it works now.
He also completely overlooked the refactoring part of the equation: write test(s), write code that passes all the tests, then refactor the code until it's shiny enough and still passes the tests.
I also have the same problem the author has sometimes: if I know that there are a lot of tests covering a particular piece of code, I tend to be less diligent when making modifications.
I don't recall ever reading that just because you have tests, you should no longer understand the processes by which your code functions. Was this something that they've seen happen, or experienced personally?
I have occasionally been tempted into the mindless code modifying that I described. That is in the past though!
More importantly, I have also seen this happen in professional environments. A very large test suite is very useful, but it is absolutely not a catch-all safety net.
TDD is good for verifying that your code handles the set of requirements given by the customer - including any edge cases that matter to them. I probably agree that 100% test passes doesn't equal no bugs.
Nonetheless, it's still useful! You can still write test-driven code and use your brain - it is only slightly easier to be lazy (and specifically, lazy in a way you're not supposed to care about, yet).
In the end, crash reports from production use will reveal any bugs that matter in the system (if any), and you can write new tests for those extra cases and make the code pass again. Combined with the rest of Agile (sorry), i.e. fast release cycles and so on, this isn't a road block.
> I probably agree that 100% test passes doesn't equal no bugs.
TDD never promised that, and practitioners of TDD understand that 100% coverage doesn't mean you won't have bugs. This doesn't invalidate TDD or testing (as you are obviously aware of =)).
I think the problem mostly stems from the "do the simplest thing that could possibly work"[1] methodology that some practitioners of TDD advocate over thinking about the problem and solving it properly.
The problem isn't the advice, it's the misunderstanding of that advice. Thinking about a problem should happen, and when you sit down to code, you should already know what needs to happen. TDD doesn't propose to replace planning and thought.
I've always viewed TDD as a process that works for some people. It's always important to remember that people learn, develop and think differently. If TDD works for you, great. But do not force it upon other people, as it may not work for them.
(This isn't to say that unit tests are bad, but rather writing tests first may not benefit all people)
This sounds a bit like "we don't need no stinking testing", but I know the author is trying to hit at a deeper point. I only wish he had done better.
One of the problems here is language: TDD as a general concept can cover everything from high-level behavioral testing to a method-by-method way to design your program. There's a big difference between those two!
In general, of course, programming is balancing what the program is supposed to do with how the program is constructed. That's true whether you have TDD in the mix or not.
I'm inclined to agree that it is hard to create an algorithm using TDD (for example, Dijkstra's algorithm). But "the example" mentioned in the post is not grounded. It would be nice if someone had a real-world example to back up this claim; otherwise it is very easy to argue that the author is not applying TDD correctly.
I find TDD useful in two cases:

1. When I already know what I'm doing and it's just a matter of coding what's already in my mind
2. When I'm writing in a dynamically typed language, since it forces me not to be lazy and to have adequate test coverage, given that I don't have compile-time type safety
I do less of TDD when dealing with a statically typed language and/or when I'm working in an exploratory mode. TDD doesn't help me when I'm just trying out different things to get going.
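The second point above can be illustrated with a small Python sketch (my own toy example): with no compiler checking types, a test is the earliest thing that actually exercises the call.

```python
# Without compile-time type checking, nothing verifies that callers
# pass a list of numbers here -- only actually running the code does.
def total_cents(prices):
    return sum(round(p * 100) for p in prices)

# The test plays the role the type checker would: it exercises the
# call, so a wrong argument type blows up before shipping.
def test_total_cents():
    assert total_cents([1.50, 2.25]) == 375
    assert total_cents([]) == 0

test_total_cents()
```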
The thing that pisses me off is when people don't realize that EVERY technique has caveats and try to promote it as a golden rule - a lot of "agile" consultants preach TDD as the golden grail for writing code without any bugs.
> 1. When I already know what I'm doing and it's just a matter of coding what's already in my mind
A concept often used in TDD is spiking. If you don't know what you're doing, do a quick and dirty untested version until you do know what you're doing. Throw that code away and TDD it with your newfound knowledge.
I've been mixing in TDD and BDD for the last 1.5 years of my 11 year coding career. I can't think of any reason not to test except for laziness and someone's unwillingness to truly use their brain to evaluate its value.
Contrary to this article, one great reason is that TDD/BDD allows me to make refactors and major changes and know whether or not I broke something. I find it passé to have the opinion of this article.
A perfect example for TDD/BDD is a complex REST API with dozens of endpoints, where you're refactoring a piece of the authentication system. How do I know if I broke something or introduced a bug?
My experience is that most developers do not test and this is exactly the kind of way complex bugs get introduced. You actually make the job more difficult on yourself because instead of knowing YOU broke something, a bug gets introduced and you spend more time tracing the cause. I have worked at many places that have this obnoxious cycle of deploying, breaking, deploying, breaking.
It is irritating to see articles like this pop up because it's not like it's a school of thought or a religion. It's a purposeful tool that can and will save you time and effort and probably impose a few good design practices along the way. I'm not saying shoot for 100% coverage, fuck, I'm happy just knowing a few complex pieces are working. And I don't always think it's a good idea to design API's from the tests, especially when you are experimenting and researching.
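The REST API example above can be sketched without any web framework (the token scheme below is a toy of my own, not real auth code): a regression test pins down behaviour so a refactor of the internals fails loudly instead of silently.

```python
# Hypothetical auth helper: issue and validate tokens.
# The "signature" is a toy for illustration only, not real crypto.

def issue_token(user_id, secret="s3cret"):
    sig = hash((user_id, secret)) & 0xFFFF
    return f"{user_id}:{sig}"

def is_valid(token, secret="s3cret"):
    user_id, _, sig = token.partition(":")
    return sig == str(hash((user_id, secret)) & 0xFFFF)

# Regression test: if refactoring the internals changes how tokens
# are issued or checked, this fails immediately, before deployment.
def test_issued_tokens_stay_valid():
    assert is_valid(issue_token("alice"))
    assert not is_valid("alice:not-a-signature")

test_issued_tokens_stay_valid()
```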
Your "perfect example for TDD/BDD" is actually about testing in general, not TDD. You are stating the value of having a test suite when making a large change, not the value of writing tests first.
I think this is a more general problem in programming, namely "Programming by Coincidence" [1]. Some people just try to solve the problem without actually thinking about it, and merely try to match the output specification.
This article misunderstands TDD completely. In TDD, the tests are your specifications. Therefore, any code that passes the tests is formally correct - even though it should always be minimal (YAGNI).
In fact, TDD is not simply "tests first". It is: write ONE test, make it pass with the MINIMUM amount of code, refactor, loop.
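As a rough illustration of that loop (a toy of mine, not from the article): one test, the minimum (even fake) code to pass it, then a second test that forces the real implementation.

```python
# Loop 1, red: write ONE failing test.
def test_two_plus_three():
    assert add(2, 3) == 5

# Loop 1, green: the MINIMUM code that passes -- a hard-coded fake.
def add(a, b):
    return 5

# Loop 2, red: a second test exposes the fake...
def test_one_plus_one():
    assert add(1, 1) == 2

# Loop 2, green/refactor: ...forcing the general implementation.
def add(a, b):
    return a + b

test_two_plus_three()
test_one_plus_one()
```

Each pass through the loop adds only enough code to satisfy the newest test.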
Usually this makes people go for very simple solutions without properly thinking about what the right data structures and algorithms are for the problem at hand.
I'd rather write properly designed code and write the tests afterwards, before delivering the code.
No one is disputing that the code is formally correct. The problem is that the code is generally focused on those specific tests and those tests alone. Meaning the code hasn't been designed or architected with a broader context in mind.
Hence over time the codebase becomes this huge tangled mess of "formally correct solutions".
FTA: Algorithms must be understood before being modified...
I would add to this that algorithms must be understood before being tested, something with which I suspect most TDD proponents would agree, and which would dispense with the need for the rest of the article.
I agree -- I've found myself in that exact case that he described (mindlessly adding and subtracting one on various loop indices until it worked) more than once.
TDD in theory is a great idea. In practice it is dreadful.
Because what has happened is that the obsession with code coverage has meant that developers create a whole raft of tests that serve no real purpose, which due to TDD then gets translated into an unworkable, unwieldy, spaghetti-like mess of code. Throw in IoC and UI testing (e.g. Cucumber) and very quickly the simplest feature takes 5x as long to develop and is borderline unmaintainable.
It just seems like there needs to be a better way to do this.
The thing about practice is that it takes practice. Here are the issues you raised:
- Focus on code coverage
- Testing nothing
- Spaghetti code from doing TDD
- IoC + UI/Cucumber tests take long to write and run
I would have to say I agree, there is a better way to do this. My guess from your last statement is that you are relatively new to software. Don't mistake your team's poor practices for the practices not working. Try to promote better practices.
Tell your team that code coverage only informs you about what isn't being tested. It doesn't help with quality.
Tests, like code, should be deleted if they don't do anything. Strictly adhere to YAGNI.
If TDD is producing spaghetti code, you are doing something very wrong in your tests. The tests should be short and focused, just like your code base. Those tests are hard to write on a messy code base, which forces you to refactor, which leads to clean code. Maybe read up on the SOLID principles and other code quality practices to see what you are missing. Refactoring techniques can be very helpful too. This takes years to get good at.
Cucumber is overused. Read about the testing triangle (http://jonkruger.com/blog/2010/02/08/the-automated-testing-t...). My guess is that your team is focusing on the top levels. Those tests provide little long-term value, fail without good explanations, and can be complicated to write and maintain.
Unit tests are still useful sometimes. Everyone, when they first start out, goes overboard with how many tests they write, and can't tell the difference between what should and shouldn't be tested. The first couple of projects a developer unit tests tend to have so many brittle tests that it slows the entire process down.
What I do now is, well, I'm going to actually test stuff while I'm coding anyway, right? Regardless of whether I'm doing TDD or not. Unit tests give me a useful harness where I can write those tests, instead of hundreds of Console.WriteLines. It's basically not much more effort than Console.WriteLine() style "testing", except you are left with some reusable artifacts at the end that may come in handy later on.
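That trade-off can be sketched in Python rather than C# (the function and names are mine): the check you'd have done with a print statement, kept as a reusable test instead.

```python
# The function being developed.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

# Print-style checking: run once, eyeball the output, then delete it.
# print(parse_version("1.2.3"))

# The same check as a unit test: barely more typing, but it survives
# as a reusable artifact that re-checks the behaviour from now on.
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0") == (10, 0)

test_parse_version()
```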
It sounds like you've had some bad experiences. I'm not sure that you could attribute unwieldy spaghetti code to the use of TDD though. Do you believe the projects you've worked on with TDD would have been in better shape without TDD?
Erwin|13 years ago
The canonical example is the master of XP solving Sudoku in the TDD way: http://xprogramming.com/articles/oksudoku/ (part 1 out of 5) -vs- Peter Norvig: http://norvig.com/sudoku.html
bhaak|13 years ago
I can't describe how shocked I was.
ddfreyne|13 years ago
The reason for writing the article, however, is that I have seen the same mindless behaviour with other people as well.
jfim|13 years ago
[1]http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...
kevingadd|13 years ago
If your goal is to fix this behavior, go for the root causes. TDD isn't a root cause for this particular problem.
reader_1000|13 years ago
[1] http://pragprog.com/the-pragmatic-programmer/extracts/coinci...
rmoriz|13 years ago
http://www.infoq.com/news/2009/03/TDD-Improves-Quality
http://research.microsoft.com/en-us/groups/ese/nagappan_tdd....
damncabbage|13 years ago
(More specifically, read everything from the "The Pragmatics: So when do I not practice TDD?" section onwards.)
anthonyb|13 years ago
Also, don't test stuff that isn't going to break, and avoid writing system and UI tests unless you absolutely have to.