Poll: Do you test your code?
There's always a lot of debate around testing, and I'm interested to see how much people do and how satisfied they are with it.
IF YOU'D LIKE TO ENCOURAGE OTHERS TO ANSWER, PLEASE UPVOTE - TY
callmeed | 14 years ago
What bugs me:
- Testing frameworks and "best practices" change way faster than language frameworks and I simply can't keep up. What rspec version do I use with what version of Rails? Now I have to use Cucumber? I learned some Cucumber ... oh, now Steak is better. [rage comic goes here]
- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ...
- I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite
- More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful
Feel free to try and convince me otherwise, but I don't ever see myself in the "test ALL the things" camp.
jakejake | 14 years ago
I try to start with just one or two tests to actually help do things that are tedious or require multiple steps. It takes some time to automate a good test but once you do it immediately starts saving time because you don't have to run the same sequence a thousand times while developing. You can think of it more like a macro that saves you time.
Once you write the main test it's easy then to run it with all combinations of good and bad input. By doing that you'll often wind up hitting a pretty good percentage of your code.
Then, as bugs are discovered due to unexpected input, you can just keep adding more input cases.
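Something like this table-driven shape works well for that (a sketch using Ruby's stdlib Minitest; `parse_age` is a made-up example, not anything from the thread):

```ruby
require 'minitest/autorun'

# Hypothetical function under test: parses a user-supplied age string,
# rejecting anything that isn't a whole number in a sane range.
def parse_age(input)
  age = begin
    Integer(input.to_s, 10)
  rescue ArgumentError
    raise ArgumentError, "not a whole number: #{input.inspect}"
  end
  raise ArgumentError, "out of range: #{age}" unless (0..150).cover?(age)
  age
end

class ParseAgeTest < Minitest::Test
  GOOD = { "0" => 0, "42" => 42, "150" => 150 }
  BAD  = ["", "abc", "-1", "151", "4.5", nil]

  # One table-driven test covers every known-good input...
  def test_good_inputs
    GOOD.each { |input, expected| assert_equal expected, parse_age(input) }
  end

  # ...and one covers every known-bad input. When production surfaces a
  # new surprise, just append it to BAD.
  def test_bad_inputs
    BAD.each { |input| assert_raises(ArgumentError) { parse_age(input) } }
  end
end
```

Running the whole table is no more work than running one case, which is the "macro that saves you time" effect.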
ef4 | 14 years ago
Smart automated testing just takes all that extra test work you're already doing and saves it as you go along.
No need to try to invent extra things to test. You just test what you would have tested anyway by hand.
pbiggar | 14 years ago
I think this is specific to the Rails community, where every new thing is quickly pronounced "the new right way to do things", and not just in testing.
> I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite
We do the same (tests for interaction with EC2, Github, and a few other providers). It is more expensive, but we find it more worthwhile too. Normally, 3rd party APIs are insufficiently specified, especially for error conditions. So when we have a failure in production, we can easily add tests to make sure we handle that edge case in future.
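One way to keep that overhead down is to hide each 3rd-party call behind a small wrapper and inject a scripted fake in tests, so the error conditions you've seen in production become cheap to reproduce. A sketch (the class names and response shapes here are invented for illustration, not any real client library):

```ruby
require 'minitest/autorun'

# Illustrative wrapper around a 3rd-party API. In production the injected
# client would make real HTTP calls; in tests we inject a fake.
class GithubRepos
  def initialize(client)
    @client = client
  end

  # Returns repo names, or [] when the API errors out or rate-limits us.
  def names(user)
    response = @client.get("/users/#{user}/repos")
    return [] unless response[:status] == 200
    response[:body].map { |repo| repo["name"] }
  end
end

# A fake client scripted with responses we've actually seen in production.
class FakeClient
  def initialize(response)
    @response = response
  end

  def get(_path)
    @response
  end
end

class GithubReposTest < Minitest::Test
  def test_happy_path
    client = FakeClient.new(status: 200, body: [{ "name" => "dotfiles" }])
    assert_equal ["dotfiles"], GithubRepos.new(client).names("someone")
  end

  def test_rate_limit_is_handled
    # An error condition the API docs never spelled out, captured from a
    # real production failure.
    client = FakeClient.new(status: 403, body: { "message" => "rate limited" })
    assert_equal [], GithubRepos.new(client).names("someone")
  end
end
```

The point is that each production surprise becomes a scripted response you can replay forever, without the test suite ever touching the real provider.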
sunir | 14 years ago
Just use Test::Unit and move on with your life. Write some tests. That's what counts.
pbiggar | 14 years ago
It's pretty painful to think "oh, this really needs a test, but I haven't got a test suite set up and, besides, I don't know how to write a test of this kind".
Writing tests for edge cases we see in production is the most valuable thing we do. We use Airbrake to find the bugs, and then we add a test for it, if possible (it's not always possible).
That gives us good confidence that other changes aren't fucking things up. It's also a pretty sane strategy for growing a test suite when you inevitably have some portion of your code which has no tests at all.
cr4zy | 14 years ago
This is why regression tests are my favorite type of test. The need for the test has been confirmed by real world usage and once you create the regression test to fail, fix the bug, and pass the test, you won't have to ever worry about users seeing that bug again :)
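A minimal sketch of that fail-fix-pass cycle, with an invented string-truncation bug standing in for a real-world one (Minitest; all names illustrative):

```ruby
require 'minitest/autorun'

# Invented example: imagine the original (buggy) version sliced by bytes,
# which could cut a multibyte character in half and emit invalid UTF-8.
# This is the fixed version, which slices by characters.
def truncate(text, limit)
  return text if text.length <= limit
  text[0, limit - 1] + "…"
end

class TruncateRegressionTest < Minitest::Test
  # Written to fail against the buggy version; after the fix it passes,
  # and it stays in the suite so the bug can never quietly return.
  def test_never_emits_invalid_utf8
    assert_equal "héllo…", truncate("héllo world", 6)
    assert truncate("héllo world", 6).valid_encoding?
  end

  def test_short_strings_are_untouched
    assert_equal "ok", truncate("ok", 10)
  end
end
```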
MartinCron | 14 years ago
Good for you. Extremists on all sides are usually wrong.
Shoot for "test MOST OF the things" or "test the MOST IMPORTANT things" or even "test just enough things so that you know if change Y totally breaks MOST IMPORTANT feature Z".
smsm42 | 14 years ago
1. Protecting me from stupid mistakes like using the wrong variable in parameters, etc. (Yes, it is embarrassing to have something like this, but I'd rather be embarrassed by a test and fix it before anybody has seen it than be embarrassed by somebody else hitting it when using my code.)
2. Ensuring refactoring and adding new things didn't break anything.
3. After a "hard" bug has been found, ensuring it never reoccurs.
As for dealing with authentication, etc.: that's what unit tests are for, testing the stuff that sits under those layers directly. And I don't think it matters what you use for tests; almost any framework will do fine. It's having tests that matters, not how you run them.
I think you can unit-test javascript too, though I never had to deal with it myself so I don't know how.
tikhonj | 14 years ago
This tool is very widely used in Haskell, but it's been ported to a whole bunch of other languages and could make your testing more thorough. In Haskell it's also easy to use and more fun than normal tests, but I don't know what it would be like in a different language.
nkassis | 14 years ago
I feel that way often too, but I write tests more as a specification for how I want the code to work than as a catch-all for bugs.
> I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite
> More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful
I feel your pain. I currently write code that uses WebGL, and I find that stuff hard to test.
lucian1900 | 14 years ago
About bugs in production, after you find a bug write a test that exercises that bug. Then make the test pass. That way, you're unlikely to ever have a regression on that bug.
For browser-side UI tests, selenium is very useful.
Jach | 14 years ago
The biggest problem is that it encourages carelessness. I want to grow more careful and work with careful people, not the other way around. Tests don't seem to make people better at doing science: people test the happy case and don't try to falsify. Testing doesn't seem to make people better at writing code, and may even be hurtful.
Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down. Sure, I guess your entire project might depend on that one innocent-looking line of code you just changed, but if that's true, you have some serious design problems, and testing is going to make it hard to fix those. Because, thirdly, it hinders design: it's very easy to code yourself into a corner in the name of passing a test suite.
Related to the design issue is a simple fact of laziness. Your code makes a test fail. Is your code wrong? Or is the test wrong? Or are both wrong? If just the code is wrong, the correct action is to fix your code to fit the test. (Which may have serious ramifications anyway.) If just the test is wrong, the correct action is to change the test. (How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.) If both are wrong, you have to change both. Obviously people will be motivated to assume that only one is wrong rather than both because both means more work.
pbiggar | 14 years ago
In my experience, testing frees you from that fear. You have empirical evidence that you haven't broken things.
My company does Continuous Integration as a service. You would be utterly amazed at how often our customers break their code with tiny innocuous changes.
> How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.
Try to think of testing in terms of the value it brings to your business. Adding the first few tests to a module has immense value. Adding tests for the edge cases has some value, but you're probably at break even unless it's breaking in production [1]. Adding tests to test the tests? I would say that is valueless in nearly all cases [2].
[1] Bonus: use Airbrake to find the edge cases that happen in real life, and only add tests for them
[2] If you're writing software for cars, planes, medical stuff or transferring money, there is probably value here.
ericHosick | 14 years ago
Testing at the behavioral level/systems level/UX level is really verifying a lot more than just "is this code right". It provides a way to check correctness on the specifications, correctness on the behavior, complete coverage of expected usage by the end user, and assures that only the code necessary to get the behavior to work is being written (to name a few).
The carelessness I see is developers writing code without fully understanding the needs of the stakeholders. The industry would be in a lot better position if managers/product owners/stakeholders/etc. were expected to provide a good set of behaviors to develop against (using Gherkin or similar tools, for example) before they start pushing developers to "deliver something on time". Note this is at the systems/behavior level and not at the unit level.
Unit level tests provide robustness. Developers can never assure that software has no "bugs".
Behavior level tests assure completeness. Developers can assure they are meeting the requirements. (Developers can't assure they are making what the customer wants, but that is not the responsibility of a developer; it is the responsibility of the product owner/project manager/etc. I'm not saying that a developer can't wear that hat, but a developer not wearing that hat should not be held responsible for failing to provide for the wants of the customer.)
All that being said, I cannot emphasize enough how important I think behavior-level testing is.
My 3 cents.
nahname | 14 years ago
Better to write tests that assert something works as expected. Then focus on what you actually want to do, finally returning to your tests and focusing on the impact of your changes.
If people are writing shitty tests, that is a different problem.
As to your second point, I am fearful of code that does not have tests. I do not know what it does, I have next to no confidence that it does what it is supposed to and no way to validate that I haven't broken it if I change it.
I find the whole pushback for tests automation very odd. Here we are working towards automating some business process, while manually testing that it works. Why wouldn't we automate the testing too? If you are not good enough to automate most of your testing, what business do you have automating something else?
krosaen | 14 years ago
A good reason to write tests beyond QA is to verify your code is at least somewhat modular - being able to get code under test ensures at least one additional use beyond being hooked into your application. For that reason, I would recommend having at least one test for every module in your code. It also makes it easy to write a test to reproduce a bug without having to refactor your code to be testable after the fact.
jmtame | 14 years ago
I think it makes more sense the later-stage your startup is, when you're more certain of what exactly it is you're building.
nagnatron | 14 years ago
It's cool to try to use the API you're building before you build it.
pbiggar | 14 years ago
It won't be great, but it will provide some form of sanity checking when you work on other stuff. Of course, it informs the design, which is a very overlooked feature of testing.
Lastly, it provides a foothold for more tests. When you're working on something hairy, there won't be any obstacle to "well, maybe I'll just add this one more test to save me some time".
geebee | 14 years ago
I'm using rails these days, and I have 100% test coverage on models and controllers (though that really just means that all the model and controller code is executed when I run my tests, these tools can't really tell if you've tested the code intelligently, though I hope I have).
I don't have a full suite of integration tests that validate all of the view logic, though there are some checks. I also have integration tests that validate external dependencies (file storage, database connectivity, etc), though again, there may be some holes.
I picked "all", since that's closest to where I am. But my best choice would be "we maintain a high (95%+) level of testing coverage". I don't think I'm splitting hairs here, because there may be a practical tradeoff between high levels and complete levels of test coverage.
NOTE: "high" levels of testing can mean different things to different people... doesn't have to be 95%, which I would consider to be higher than absolutely necessary. It depends so much on what you're actually testing (anyone who has used a coverage tool knows you can often "trick" the tool into awarding the 100% bar without doing much other than just making sure the tests run the code... which is useful in its way but can let all kinds of errors slip through).
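That coverage-tool "trick" is easy to demonstrate: a test that merely executes the code earns full line coverage without checking anything. A small sketch (Minitest; `apply_discount` is a made-up example):

```ruby
require 'minitest/autorun'

# Made-up function for illustration.
def apply_discount(price, percent)
  price - price * percent / 100.0
end

class CoverageOnlyTest < Minitest::Test
  # This "test" executes every line of apply_discount, so a line-coverage
  # tool reports 100% -- yet it would still pass if the math were wrong.
  def test_runs_the_code
    apply_discount(100, 20)
  end

  # An assertion is what actually pins the behavior down.
  def test_twenty_percent_off
    assert_equal 80.0, apply_discount(100, 20)
  end
end
```

Both tests contribute identically to the coverage number; only the second one would catch a broken formula.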
pbiggar | 14 years ago
Each test has the opportunity cost of writing some part of a new feature for your customers. But so does every minute spent fixing bugs that more testing would have caught at a fraction of the cost.
d-roo | 14 years ago
This realization only made all the other arguments for testing that much stronger.
kevinherron | 14 years ago
Article about the group that writes the space shuttle software, sort of relevant?: http://www.fastcompany.com/magazine/06/writestuff.html
shrub | 14 years ago
No matter how many times I explain or quote higher or tell them the feature creep is becoming unreasonable (oh by the way, we have 18 products with complicated interactions, not the 3 we asked for on the quote, but we expect to still pay the same), such that I can't possibly write it all and test it all, they just don't listen and they leave me holding the bag. So, while I'd like to do testing, just getting the thing kind-of working isn't in the budget, never mind getting it working well.
Sorry for the rant and... come to think of it, it may be time for a new job.
pbiggar | 14 years ago
That said, I subscribe to the philosophy that testing is only there to support the business, not an end in itself. We often prototype features with no testing at all, because they get rewritten 3 times anyway. Often, writing the tests is what highlights flaws in our logic, so without them we would often be flying blind.
Testing slows down coding by about 135% (yes, more than twice as slow), but makes that time back in spades when you have to work on the same code again, or when changing lower layers (models, libraries, etc).
stdbrouw | 14 years ago
When I write a software package/library, I'll usually test the hell out of it for the very same reason so many others have given: if you're testing in a REPL anyway, why not just turn those snippets into unit tests? Hardly any effort.
But I usually don't bother with too much automated testing for websites or web apps, because (1) it's more difficult to actually catch the errors you care about, have good test coverage and keep tests up to date than it is for back-end stuff and (2) I actually like clicking through my app for a while after I've implemented a new feature or changed an existing one.
Manually testing a web app allows you to catch many different kinds of mistakes at the same time. Almost like an artist looking at an unfinished painting. Does the UI look off? Does X get annoying after doing it ten times in a row? Does everything flow nicely? What is this page missing? Did that customer's feature request you got three days ago actually make sense? Questions you should be asking anyway, even with automated tests. And basic functionality is tested because the underlying packages are tested.
... but then again, if I was writing a website backed by a RESTful API, testing that API is as easy as doing a couple of HTTP requests and checking the responses, so you'd be stupid not to go for that quick win.
So my answer is "We have a test suite that tests all functionality" and "Tests? We don't need no stinking tests." at the same time.
peteretep | 14 years ago
I am seriously considering putting together a "Software Engineering for Small Teams" course or set of articles. With a little bit of expertise, you can inject testing in to most projects, use the minimum of Agile that'll help, and generally massively raise your game - and by that I mean code faster, better, and more reliably, with considerably less stress.
(edited: turns out I forgot which year we're in :-P)
mtrimpe | 14 years ago
I used to always write proper full-fledged tests. Then I started my startup, building a product in the few hours left after a demanding high-stress job and a tumultuous private life.
Within a few weeks, I stopped writing tests. Within a few more weeks, I turned off the test suite.
I wrote the product, got it working, received market feedback, realized my model was all wrong, rewrote the entire domain model and UI multiple times all to finally realize that my component boundaries were all wrong and intuitively understanding where they should've been.
Now I feel confident about an architecture that will stay stable for 12+ months and each new component I write is properly tested.
In the meanwhile my lack of tests is starting to bite me very slowly, but I find that I'm just slowly replacing all 'bad parts' with properly tested components with clearly defined boundaries, rather than changing existing code.
And in the end I'm really happy that I decided not to test as much. It has its place, but when your time is really precious and you're trying to mold your software to fit the market's needs, it just isn't worth it.
I don't know how many others are in a similar situation but, for me, sometimes it just ain't f*ing worth it.
trustfundbaby | 14 years ago
With a small codebase that you know every inch of, it's easy to test most of your interactions before you push something live. But when you get just one order of magnitude bigger, you start seeing how easy it is to write code in one section of your app, test it rigorously, and still not catch some subtle breakage in another (seemingly unrelated) section of your app.
In production software, especially if you have paying clients, this is simply unacceptable; which is why I've recently been boning up on BDD, TDD, and continuous integration and am trying very hard to slowly integrate them into my development process.
To one of the comments above: in my experience, automated testing should actually make you bolder with code, not more fearful. We have this codebase where I work that is a frickin' mammoth of interrelated modules, and it's so scary to go in there and add or change something, because I just know something else is going to break and I'm going to be stuck fixing it for days after I made the first edit.
This is the other reason I started exploring automated tests ... because I realized that if I had a test suite that could catch regressions when I refactor code, then I could actually spend more time whipping old code into shape instead of patching it up until such a time when I'd be able to just rewrite the whole thing.
netzpirat | 14 years ago
It took me a lot of effort to learn it properly: I have read many books about testing, read the tests of plenty of open source software to see how others do it, and wrote thousands of wrong tests until I got to a stage where I can say I have mastered testing.
I was always fascinated by test-driven development but, to be honest, it does not work for me and I seldom do it. In most cases I write the new functionality first, then describe its behavior, and finally do some refactoring until the code quality meets my needs. When you can refactor a class without breaking a single test, you know you've done it right.
It's important that you find your way and don't try to follow the rules from others. Take your time, mastering software testing is a complex discipline and it won't happen overnight.
Even with a high level of test coverage, I always encounter new bugs when using the software. But after fixing it and adding some tests, I know at least that I will not see the exact same bug again.
I believe that writing tests speeds up my development. This may seem illogical at first, but without the tests my development would slow down with increasing complexity (Lehman's law), and instead of adding new functionality I'd find myself fixing old stuff. So testing allows me to manage a large and complex codebase, and it allows me to do complicated architectural refactoring knowing that everything important still works as expected.
snambi | 14 years ago
[1] Initial stage, where we are trying to make things work. At this stage the code base is very small (< 1,000 lines). This is like prototyping: it works with limited functionality. No tests are needed at this time.
[2] Heavy development phase. At this stage we have proved the concept and are adding a lot of new features, some of which we have identified as must-haves. The code is also getting re-factored/re-designed based on what we learn. At this stage, we add tests for the must-have functionality, so we can ensure that important features are not broken by newer code.
[3] Mature phase. The code is mature and most of the features are working fine. The code base may be large (100,000+ lines). At this stage re-factoring/re-designing is not easy; mostly incremental changes are happening. By this point we should have upwards of 70% code coverage (typically the test code will be larger than the production code at that level). But it is very important to have the tests, since they ensure that all features are exercised even when a minor code change is made.
IanMechura | 14 years ago
Perhaps this is because I am in the enterprise development world as opposed to the start-up world.
The cost and frustration involved in delivering a critical bug into a QA or production environment is much higher than the cost and frustration of writing and maintaining tests.
Every action in business has a cost associated with it. The more people involved (customers, UAT, Managers, etc.) the higher the cost. The sooner you can discover the bugs and fix them the less people are impacted the lower the cost.
This is how you make yourself more valuable as a developer and justify your high salary/rate: by ingraining habits into your daily routine that reduce costs for the business.
By costs I also mean non-monetary ones, like the personal cost of asking a VP to sign off on an off-cycle production release due to a bug that could have been identified by a test prior to the integration build.