item 3702827

Poll: Do you test your code?

611 points | petenixey | 14 years ago | reply

Do you have tests that run every time you push and ensure that the functionality on your site works?

There's always a lot of debate around testing and I'm interested to see how much people do and how satisfied they are with it

IF YOU'D LIKE TO ENCOURAGE OTHERS TO ANSWER, PLEASE UPVOTE - TY

339 comments

[+] callmeed|14 years ago|reply
I answered "a few critical things" ... but, for the most part, testing is tedious, frustrating, and a time-sink for me. I recently paid someone $100+ an hour for some remote TDD coaching. It's helping a bit but hasn't really changed my attitude towards testing (yet).

What bugs me:

- Testing frameworks and "best practices" change way faster than language frameworks and I simply can't keep up. What rspec version do I use with what version of Rails? Now I have to use Cucumber? I learned some Cucumber ... oh, now Steak is better. [rage comic goes here]

- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ...

- I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite

- More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful

Feel free to try and convince me otherwise, but I don't ever see myself in the "test ALL the things" camp.

[+] jakejake|14 years ago|reply
My approach to testing is not to be obsessed with the latest, greatest framework or 100% code coverage.

I try to start with just one or two tests to actually help do things that are tedious or require multiple steps. It takes some time to automate a good test but once you do it immediately starts saving time because you don't have to run the same sequence a thousand times while developing. You can think of it more like a macro that saves you time.

Once you write the main test, it's then easy to run it with all combinations of good and bad input. By doing that you'll often wind up hitting a pretty good percentage of your code.

Then as bugs are discovered due to unexpected input you can just keep adding more input situations.
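The "one main test, many inputs" approach above can be sketched as a table-driven check in plain Ruby (the `parse_quantity` function and its cases are made up for illustration):

```ruby
# A hypothetical function under test: parses a quantity string into an
# Integer, returning nil for bad input.
def parse_quantity(str)
  return nil unless str.is_a?(String) && str =~ /\A\d+\z/
  str.to_i
end

# One table of good and bad inputs. As new edge cases turn up in
# production, you append rows here instead of writing a whole new test.
CASES = {
  "42"   => 42,   # plain good input
  "007"  => 7,    # leading zeros
  ""     => nil,  # empty string
  "12ab" => nil,  # trailing garbage
  nil    => nil,  # not even a string
}

CASES.each do |input, expected|
  actual = parse_quantity(input)
  unless actual == expected
    raise "parse_quantity(#{input.inspect}) => #{actual.inspect}, expected #{expected.inspect}"
  end
end
puts "all #{CASES.size} cases pass"
```

The loop is the "macro": one run exercises every recorded input sequence at once.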

[+] ef4|14 years ago|reply
Look at it this way: you must be testing code as you write it anyway. There's really no other sane way to do it. You make a change, you load the page and see that your change worked, or you call your new function from an interactive interpreter.

Smart automated testing just takes all that extra test work you're already doing and saves it as you go along.

No need to try to invent extra things to test. You just test what you would have tested anyway by hand.

[+] pbiggar|14 years ago|reply
> - Testing frameworks and "best practices" change way faster than language frameworks and I simply can't keep up. What rspec version do I use with what version of Rails? Now I have to use Cucumber? I learned some Cucumber ... oh, now Steak is better. [rage comic goes here]

I think this is only in the Rails community, where every new thing is quickly pronounced "the new right way to do things", and not just in testing.

> I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite

We do the same (tests for interaction with EC2, Github, and a few other providers). It is more expensive, but we find it more worthwhile too. Normally, 3rd party APIs are insufficiently specified, especially for error conditions. So when we have a failure in production, we can easily add tests to make sure we handle that edge case in future.

[+] sunir|14 years ago|reply
People write and play with test frameworks because they are procrastinating from writing actual tests. Think about it.

Just use Test::Unit and move on with your life. Write some tests. That's what counts.
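In the "just write some tests" spirit, a minimal Test::Unit example might look like this (the `slugify` function is hypothetical; the shape is the standard xUnit one):

```ruby
require "test/unit"

# A hypothetical function under test: turns a title into a URL slug.
def slugify(title)
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

class SlugifyTest < Test::Unit::TestCase
  def test_basic
    assert_equal "hello-world", slugify("Hello, World!")
  end

  def test_collapses_whitespace
    assert_equal "a-b", slugify("  a   b  ")
  end
end
```

No framework shopping required: subclass TestCase, name methods `test_*`, run the file.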

[+] pbiggar|14 years ago|reply
I strongly feel you should try to add one test in each category. That adds a sanity check and lowers the cost of adding more tests when you really need them.

It's pretty painful to think "oh, this really needs a test, but I haven't got a test suite set up and besides, I don't know how to write a test of this kind".

Writing tests for edge cases we see in production is the most valuable thing we do. We use Airbrake to find the bugs, and then we add a test for it, if possible (it's not always possible).

That gives us good confidence that other changes aren't fucking things up. It's also a pretty sane strategy for growing a test suite when you inevitably have some portion of your code which has no tests at all.

[+] cr4zy|14 years ago|reply
"- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ..."

This is why regression tests are my favorite type of test. The need for the test has been confirmed by real world usage and once you create the regression test to fail, fix the bug, and pass the test, you won't have to ever worry about users seeing that bug again :)
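The fail-fix-pass cycle described above can be sketched in a few lines of Ruby (the bug and the `apply_refund` function are invented for illustration):

```ruby
# Regression-test workflow sketch:
# 1. A user reports that balances go negative when a refund exceeds them.
# 2. Write a test that reproduces the bug and watch it fail.
# 3. Fix the code; the test passes and stays in the suite forever.

def apply_refund(balance, refund)
  # The fix: clamp at zero instead of letting the balance go negative.
  [balance - refund, 0].max
end

# The regression test, kept around so the bug can never silently return.
raise "regression: balance went negative" unless apply_refund(10, 25) == 0
raise unless apply_refund(10, 5) == 5
puts "regression covered"
```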

[+] MartinCron|14 years ago|reply
but I don't ever see myself in the "test ALL the things" camp

Good for you. Extremists on all sides are usually wrong.

Shoot for "test MOST OF the things" or "test the MOST IMPORTANT things" or even "test just enough things so that you know if change Y totally breaks MOST IMPORTANT feature Z".

[+] smsm42|14 years ago|reply
My experience shows that tests are not very useful in protecting against "hard" mistakes (like unusual combinations of inputs, missing condition-branch coverage, etc.) because even with 100% code coverage you don't actually cover 100% of input/state combinations. And the things you didn't think of in development are usually the things you didn't think of in tests either. Tests have, however, always been amazingly helpful for me in:

1. Protecting me from stupid mistakes like using the wrong variable in parameters, etc. (yes, it is embarrassing to have something like this, but I'd rather be embarrassed by a test and fix it before anybody sees it than be embarrassed by somebody else hitting it when using my code).

2. Ensuring refactoring and adding new things didn't break anything.

3. After a "hard" bug has been found, ensuring it never reoccurs.

As for dealing with authentication, etc. - that's what unit tests are for: testing the stuff under those layers directly. And I don't see that it matters what you use for tests - almost any framework will do fine; it's having tests that matters, not how you run them.

I think you can unit-test javascript too, though I never had to deal with it myself so I don't know how.

[+] tikhonj|14 years ago|reply
You should check out QuickCheck for catching edge cases you did not think of. The idea behind QuickCheck is simple--you specify invariants in your code (called "properties") and the framework tests them with random inputs.

This tool is very widely used in Haskell, but it's been ported to a whole bunch of other languages and could make your testing more thorough. In Haskell it's also easy to use and more fun than normal tests, but I don't know what it would be like in a different language.
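The core QuickCheck idea fits in a few lines of plain Ruby, without any port installed (the `my_sort` function and its properties are illustrative):

```ruby
# Property-based testing in miniature: state invariants ("properties")
# and hammer them with random inputs instead of hand-picking cases.
def my_sort(arr)
  arr.sort
end

100.times do
  input = Array.new(rand(0..20)) { rand(-1000..1000) }
  sorted = my_sort(input)

  # Property 1: sorting is idempotent.
  raise "not idempotent for #{input.inspect}" unless my_sort(sorted) == sorted
  # Property 2: output is a permutation of the input (no lost/added elements).
  raise "lost elements for #{input.inspect}" unless sorted.tally == input.tally
  # Property 3: output is actually ordered.
  raise "not ordered for #{input.inspect}" unless sorted.each_cons(2).all? { |a, b| a <= b }
end
puts "properties held for 100 random inputs"
```

Real QuickCheck implementations add input shrinking (minimizing a failing case), which is where much of the value lies.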

[+] netzpirat|14 years ago|reply
At first I was doubtful about testing my JS code, but nowadays I enjoy it much more than testing the Rails backend. I use my own gem guard-jasmine that runs the specs headless on PhantomJS and it's a real joy! My whole spec suite with over 1000 specs runs in under 3 seconds. I use SinonJS for faking AJAX calls to the backend, but that's just a small subset of all specs since most stuff isn't interacting with the backend.
[+] joske2|14 years ago|reply
The point of testing/TDD for me is not (just) about preventing bugs, it is more about having quick feedback. Running a test is faster than waiting until it is deployed and manually clicking around in an application. It is kind of comparable to using a REPL.
[+] nkassis|14 years ago|reply
"- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ..."

I feel that way often too, but I write tests more as a specification for how I want the code to work than as a catch-all for bugs.

"- I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite - More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful"

I feel your pain. I'm currently writing code that uses WebGL, and I find that stuff hard to test.

[+] diminoten|14 years ago|reply
How do you feel about regression testing? Maybe instead of "writing" tests for potential bugs, you write tests for bugs you've found already.
[+] lucian1900|14 years ago|reply
I've found tests very useful for refactoring. I can pretty much go wild, as long as the tests pass at the end.

About bugs in production, after you find a bug write a test that exercises that bug. Then make the test pass. That way, you're unlikely to ever have a regression on that bug.

For browser-side UI tests, selenium is very useful.

[+] harel|14 years ago|reply
I don't want to convince you - just strengthen your point.
[+] Jach|14 years ago|reply
I test things that seem like they're important to test. I also do a lot of manual checking which boils down to "does it work?" When the manual checking is too tedious I'll write code to help. I don't do unit tests (but I don't think most people who think they're doing unit tests are, either). In general I have three big problems with the philosophy of testing, especially test-first. (Though I don't feel incredibly strongly about these--software is a big field of possibilities, to suggest One Way is the Only Way is pretty crazy.)

The biggest is that it encourages carelessness. I want to grow more careful and work with careful people, not the other way around. Tests don't seem to make people better at doing science--that is, people test the happy case and don't try to falsify. Testing doesn't seem to make people better at writing code, and may even be hurtful. Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down. Sure, I guess your entire project might depend on that one innocent-looking line of code you just changed, but if that's true, you have some serious design problems and testing is going to make it hard to fix those. Which leads to the third problem: testing hinders design, and it's very easy to code yourself into a corner in the name of passing a test suite.

Related to the design issue is a simple fact of laziness. Your code makes a test fail. Is your code wrong? Or is the test wrong? Or are both wrong? If just the code is wrong, the correct action is to fix your code to fit the test. (Which may have serious ramifications anyway.) If just the test is wrong, the correct action is to change the test. (How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.) If both are wrong, you have to change both. Obviously people will be motivated to assume that only one is wrong rather than both because both means more work.

[+] pbiggar|14 years ago|reply
> Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down

In my experience, testing frees you from that fear. You have empirical evidence that you haven't broken things.

My company does Continuous Integration as a service. You would be utterly amazed at how often our customers break their code with tiny innocuous changes.

> How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.

Try to think of testing in terms of the value it brings to your business. Adding the first few tests to a module has immense value. Adding tests for the edge cases has some value, but you're probably at break even unless it's breaking in production [1]. Adding tests to test the tests? I would say that is valueless in nearly all cases [2].

[1] Bonus: use Airbrake to find the edge cases that happen in real life, and only add tests for them

[2] If you're writing software for cars, planes, medical stuff or transferring money, there is probably value here.

[+] ericHosick|14 years ago|reply
Asking if the tests are correct is really asking if the requirements are correct. If this happens a lot it means developers are writing code before they really understand the requirements. If developers have to re-write behavioral level tests a lot, it probably means the product owner/project manager/managers/stake holders/etc. are changing the requirements. A lot of pain should be felt gathering and verifying what the customer wants before a single line of code is written. Really, code is bad and as little of it should be written as possible. Developers should yell loudly when they have to re-write behavioral level tests.

Testing at the behavioral level/systems level/UX level is really verifying a lot more than just "is this code right". It provides a way to check correctness on the specifications, correctness on the behavior, complete coverage of expected usage by the end user, and assures that only the code necessary to get the behavior to work is being written (to name a few).

The carelessness I see is developers writing code without fully understanding the needs of the stakeholders. The industry would be in a lot better position if managers/product owners/stakeholders/etc. were expected to provide a good set of behaviors to develop against (for example, with Gherkin or similar tools) before they start pushing developers to "deliver something on time". Note this is at the systems/behavior level and not at the unit level.

Unit level tests provide robustness. Developers can never assure that software has no "bugs".

Behavior level tests assure completeness. Developers can assure they are meeting the requirements. (Developers can't assure they are making what the customer wants, but that is not the responsibility of a developer. That is the responsibility of the product owner/project manager/etc. I'm not saying that a developer can't wear that hat, but a developer not wearing that hat should not be held responsible for failing to provide for the wants of the customer.)

All that being said, I cannot emphasize enough how important I think behavior level testing is.

My 3 cents.

[+] nahname|14 years ago|reply
What one person calls carelessness, another would call freeing up time to consider other things, such as whether the code actually does what it needs to. We are limited beings and can only keep so much in our heads at one time. If I have to remember how everything works at some level and then want to tackle how to clean it up (refactoring) or add something new without breaking it, that is a tremendous amount of state I am managing in my brain.

Better to write tests to assert something works as expected. Then focus on what you actually want to do, finally returning to your tests and focusing on the impact of your changes.

If people are writing shitty tests, that is a different problem.

As to your second point, I am fearful of code that does not have tests. I do not know what it does, I have next to no confidence that it does what it is supposed to and no way to validate that I haven't broken it if I change it.

I find the whole pushback for tests automation very odd. Here we are working towards automating some business process, while manually testing that it works. Why wouldn't we automate the testing too? If you are not good enough to automate most of your testing, what business do you have automating something else?

[+] krosaen|14 years ago|reply
I'm pretty much the one man code shop for our startup and I still write a lot of tests. The way I think of it is this: if something is tricky enough that I need to verify it in the repl, may as well capture that validation in an automated test. The trickier, more painful tests to setup are integration tests that make sure everything is hooked up correctly, from the datastore layer to the handler to the template arguments etc. I went through the pain to set this up so that we at least have smoke tests, e.g every page is visited with some data populated to make sure nothing blows up.

A good reason to write tests beyond QA is to verify your code is at least somewhat modular - being able to get code under test ensures at least one additional use beyond being hooked into your application. For that reason, I would recommend having at least one test for every module in your code. It also makes it easy to write a test to reproduce a bug without having to refactor your code to be testable after the fact.

[+] jmtame|14 years ago|reply
Early on, I asked most YC founders I met whether they did testing in the early days, and almost all of them said "no". I've also not written tests in the past simply because it's a time investment--why test if you could be working on something entirely different in a few weeks? Code can be very volatile in an early stage startup.

I think it makes more sense the later-stage your startup is, when you're more certain of what exactly it is you're building.

[+] nagnatron|14 years ago|reply
I honestly use tests more as a design tool than for testing functionality. After that you end up with a kind of a regression test suite.

It's cool to try to use the API you're building before you build it.

[+] pbiggar|14 years ago|reply
Even when you're prototyping, I find it useful to write one test. The gains from the first test are the biggest - pretty low investment, with reasonable returns.

It won't be great, but it will provide some form of sanity checking when you work on other stuff. Of course, it informs the design, which is a very overlooked feature of testing.

Lastly, it provides a foothold for more tests. When you're working on something hairy, there won't be any obstacle to "well, maybe I'll just add this one more test to save me some time".

[+] mattbriggs|14 years ago|reply
If you are throwing your code away every few weeks, it is probably wasted time. If your codebase is in a lot of flux, it will save you a ton of time, since a good test suite tells you what breaks every time you change something.
[+] obiefernandez|14 years ago|reply
These options are flawed. I am somewhere in the middle of the first two: mostly integration tests, with critical domain logic unit tested. Certainly not 100% of the app's functionality; closer to 80%.
[+] geebee|14 years ago|reply
I agree. This poll forces me to choose between a test suite that tests "all functionality" and "a few critical things". I think a lot of people who value high levels of testing coverage still fall somewhat short of all functionality, but are way above "a few critical things".

I'm using rails these days, and I have 100% test coverage on models and controllers (though that really just means that all the model and controller code is executed when I run my tests, these tools can't really tell if you've tested the code intelligently, though I hope I have).

I don't have a full suite of integration tests that validate all of the view logic, though there are some checks. I also have integration tests that validate external dependencies (file storage, database connectivity, etc), though again, there may be some holes.

I picked "all", since that's closest to where I am. But my best choice would be "we maintain a high (95%+) level of testing coverage". I don't think I'm splitting hairs here, because there may be a practical tradeoff between high levels and complete levels of test coverage.

NOTE: "high" levels of testing can mean different things to different people... doesn't have to be 95%, which I would consider to be higher than absolutely necessary. It depends so much on what you're actually testing (anyone who has used a coverage tool knows you can often "trick" the tool into awarding the 100% bar without doing much other than just making sure the tests run the code... which is useful in its way but can let all kinds of errors slip through).
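The "tricking the coverage tool" point above is easy to demonstrate (the `discount` function is hypothetical):

```ruby
# A hypothetical function under test.
def discount(price, rate)
  price - price * rate  # a bug here (say, + instead of -) would go unnoticed below
end

# A "coverage-only" test: it executes every line of discount() (100%
# coverage as far as the tool can tell) but asserts nothing.
discount(100, 0.2)

# A real test pins the behavior down, not just the execution.
raise unless discount(100, 0.2) == 80.0
puts "behavior verified, not just executed"
```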

[+] joefiorini|14 years ago|reply
I had the same thought, Obie. I find high-level integration tests provide most of the value for me, with unit testing when I need help with designing code. Having a decent suite of high-level tests saves me from having to smoke test the entire app every time I make sweeping changes. If the suite is passing, I know the features are working, at least in the basic cases I was testing for. I still have to do some level of manual testing, but it's nowhere near as much as I did before I became more obsessed with testing.
[+] trustfundbaby|14 years ago|reply
Agreed. It would have been great if manual testing were included. We have full-time QA people who actually write very detailed test plans based on project specs/requirements, and have time included in all our projects for testing and bug fixing at the end.
[+] shin_lao|14 years ago|reply
Never forget you write software, not tests. Tests are here to increase quality, they have no raison d'être by themselves.
[+] pbiggar|14 years ago|reply
I think you can go one step further. Never forget you're serving your customers; your software has no other raison d'être. You only write software to provide value to them, so think of testing the same way.

Each test has the opportunity cost of writing some part of a new feature for your customers. But so does every minute spent fixing bugs that would have been caught with more testing, at a fraction of the cost.

[+] brown9-2|14 years ago|reply
I suppose, but what is the value of untested code? This sounds like an excuse for coding without testing.
[+] latch|14 years ago|reply
tests are a lot more about design and refactoring than they are about quality.
[+] d-roo|14 years ago|reply
I was in the 'testing is too much overhead' crowd for years until one day I finally got it. I realized that as I code, I'm always testing. Who doesn't make a change and then test it? So, you consider writing a test too much overhead? How much overhead is it to manually test? How much overhead is it to fill out that registration form you're testing? Maybe there are two or three steps to it. How much time does that take each and every time you test? Being one that enjoys automating repetitive tasks, writing that test _once_ suddenly became a no-brainer.

This realization only made all the other arguments for testing that much stronger.

[+] shrub|14 years ago|reply
Unfortunately our sales people are obsessed with agreeing to whatever customers dictate in order to make a sale. The customer wants a full featured, fully customized, fully automated E-commerce solution and they want it for a flat $5000? Sold. Customer says "What is this 'testing' sh*t on the quote? It should just work the first time, or do you only have a Jr developer on staff who needs everything double checked for them? We can go some place more professional" and sales person replies "Oh yeah, that - you're right. Our developer is a wizard and I forgot to take that off."

No matter how many times I explain or quote higher or tell them the feature creep is becoming unreasonable (oh by the way, we have 18 products with complicated interactions, not the 3 we asked for on the quote, but we expect to still pay the same), such that I can't possibly write it all and test it all, they just don't listen and they leave me holding the bag. So, while I'd like to do testing, just getting the thing kind-of working isn't in the budget, never mind getting it working well.

Sorry for the rant and... come to think of it, it may be time for a new job.

[+] pbiggar|14 years ago|reply
We actually made a company to do other people's testing: http://CircleCI.com. Really easy Continuous Integration for web apps. Email [email protected] for a beta invite.

That said, I subscribe to the philosophy that testing is only there to support the business, not an end in itself. We often prototype features with no testing at all, because they get rewritten 3 times anyway. Often, writing the tests is what highlights flaws in our logic, so without it we would often be flying blind.

Testing slows down coding by about 135% (yes, more than twice as slow), but makes that time back in spades when you have to work on the same code again, or when changing lower layers (models, libraries, etc).

[+] stdbrouw|14 years ago|reply
I think the response anyone is likely to give to this poll depends a lot on the kind of work they do.

When I write a software package/library, I'll usually test the hell out of it for the very same reason so many others have given: if you're testing in a REPL anyway, why not just turn those snippets into unit tests? Hardly any effort.

But I usually don't bother with too much automated testing for websites or web apps, because (1) it's more difficult to actually catch the errors you care about, have good test coverage and keep tests up to date than it is for back-end stuff and (2) I actually like clicking through my app for a while after I've implemented a new feature or changed an existing one.

Manually testing a web app allows you to catch many different kinds of mistakes at the same time. Almost like an artist looking at an unfinished painting. Does the UI look off? Does X get annoying after doing it ten times in a row? Does everything flow nicely? What is this page missing? Did that customer's feature request you got three days ago actually make sense? Questions you should be asking anyway, even with automated tests. And basic functionality is tested because the underlying packages are tested.

... but then again, if I was writing a website backed by a RESTful API, testing that API is as easy as doing a couple of HTTP requests and checking the responses, so you'd be stupid not to go for that quick win.

So my answer is "We have a test suite that tests all functionality" and "Tests? We don't need no stinking tests." at the same time.

[+] peteretep|14 years ago|reply
People ... don't have tests? o_O In 2012?

I am seriously considering putting together a "Software Engineering for Small Teams" course or set of articles. With a little bit of expertise, you can inject testing in to most projects, use the minimum of Agile that'll help, and generally massively raise your game - and by that I mean code faster, better, and more reliably, with considerably less stress.

(edited: turns out I forgot which year we're in :-P)

[+] mtrimpe|14 years ago|reply
I think it all depends.

I used to always write proper full-fledged tests. Then I started my startup, building a product in the few hours left after a demanding high-stress job and a tumultuous private life.

Within a few weeks, I stopped writing tests. Within a few more weeks, I turned off the test suite.

I wrote the product, got it working, received market feedback, realized my model was all wrong, and rewrote the entire domain model and UI multiple times, all to finally realize that my component boundaries were all wrong and to understand intuitively where they should've been.

Now I feel confident about an architecture that will stay stable for 12+ months and each new component I write is properly tested.

In the meantime my lack of tests is starting to bite me, very slowly, but I find that I'm just gradually replacing all the 'bad parts' with properly tested components with clearly defined boundaries, rather than changing existing code.

And in the end I'm really happy that I decided not to test as much. It has its place, but when your time is really precious and you're trying to mold your software to fit the market needs, it just isn't worth it.

I don't know how many others are in a similar situation but, for me, sometimes it just ain't f*ing worth it.

[+] Mc_Big_G|14 years ago|reply
The day I changed one line of code and 100+ tests failed was the day I really got it.
[+] boyter|14 years ago|reply
Although that could just be a sign of brittle tests....
[+] trustfundbaby|14 years ago|reply
I've never done automated testing, but as I've grown as a developer and started dealing with more complicated codebases, I have come to see the importance of testing in a huge way.

With a small codebase that you know every inch of, it's easy to test most of your interactions before you push something live, but when you get just one order of magnitude higher you start seeing how easy it is to write code in one section of your app, test it rigorously, but not catch some subtle breakage in another (seemingly unrelated) section of your app.

In production software, especially if you have paying clients, this is simply unacceptable; which is why I've recently been boning up on BDD, TDD, and continuous integration and am trying very hard to slowly integrate them into my development process.

To one of the comments above: in my experience, automated testing should actually make you bolder with code, not more fearful. We have this codebase where I work that is a frickin' mammoth of interrelated modules, and it's so scary to go in there and add or change something, because I just know something else is going to break and I'm going to be stuck fixing it for days after I made the first edit.

This is the other reason I started exploring automated tests ... because I realized that if I had a test suite that could catch regressions when I refactor code, then I could actually spend more time whipping old code into shape instead of patching it up until such a time when I'd be able to just rewrite the whole thing.

[+] rhizome|14 years ago|reply
Trapping regressions is a HUGE driver for testing for me.
[+] netzpirat|14 years ago|reply
I do test almost everything in my apps and I can't imagine writing my software without it nowadays. I test my Ruby code in the backend, the CoffeeScript code in the frontend, and I have integration tests to verify that the whole stack works fine.

It took me a lot of effort to learn it properly. I have read many books about testing, have read the tests of plenty of open source software to see how others do it, and I wrote thousands of wrong tests until I got to a stage where I can say I have mastered testing.

I was always fascinated by test driven development, but to be honest, it does not work for me and I seldom do it. Normally I write the new functionality first, then I describe its behavior, and finally do some refactoring until the code quality meets my needs. When you can refactor a class without breaking a single test, you know you've done it right.

It's important that you find your way and don't try to follow the rules from others. Take your time, mastering software testing is a complex discipline and it won't happen overnight.

Even with a high level of test coverage, I always encounter new bugs when using the software. But after fixing them and adding some tests, I at least know that I will not see the exact same bug again.

I believe that writing tests speeds up my development. This may seem illogical at first, but without the tests my development would slow down with increasing complexity (Lehman's law), and instead of adding new functionality I'd find myself fixing old stuff. So testing allows me to manage a large and complex codebase, and it allows me to do a complicated architectural refactoring knowing that everything important still works as expected.

[+] snambi|14 years ago|reply
I write test cases based on where the project is at that point in time. Here are the three stages that can help you decide how many tests need to be there.

[1] Initial stage, where we are trying to make things work. At this stage the code base is very small (under 1,000 lines). This is like prototyping. It works with limited functionality. No tests are needed at this time.

[2] Heavy development phase. At this stage, we have proved the concept. Now we are adding a lot of new features. We have identified some features as must-haves. Also, code is getting re-factored/re-designed based on what we learn. At this stage, we add tests for the must-have functionality. Thus, we can ensure that important features are not broken by newer code.

[3] Mature phase. The code is mature. Most of the features are working fine. The code base may be large (100,000+ lines). At this stage re-factoring/re-designing is not easy; mostly incremental changes are happening. At this point, we should have upwards of 70% code coverage. Typically, at 70%+ coverage the test code will be larger than the application code. But it is very important to have tests, since they ensure that all features are tested even when a minor code change is made.

[+] tfb|14 years ago|reply
Where's the option for "We thoroughly and immediately test every change (and all affected processes) ourselves to also ensure UX is top notch"?
[+] Nogwater|14 years ago|reply
Honest question: Do you believe that test == automated test?
[+] IanMechura|14 years ago|reply
WOW! I must say I am actually surprised at how many people have replied that they do little or no testing.

Perhaps this is because I am in the enterprise development world as opposed to the start-up world.

The cost and frustration involved in delivering a critical bug into a QA or production environment is much higher than the cost and frustration of writing and maintaining tests.

Every action in business has a cost associated with it. The more people involved (customers, UAT, managers, etc.), the higher the cost. The sooner you can discover and fix bugs, the fewer people are impacted and the lower the cost.

This is how you make yourself more valuable as a developer and justify your high salary/rate: by ingraining habits into your daily routine that reduce costs for the business.

By costs I also mean non-monetary ones, like the personal cost involved in asking a VP to sign off on an off-cycle production release due to a bug that could have been identified by a test prior to the integration build.

[+] darinrogers|14 years ago|reply
In my experience, on projects with often-run automated unit test suites with good coverage, development goes faster. Part of this might be because for code to be highly testable, it usually also has to be well-designed and architecturally sound.