drharris | 12 years ago
Specifically, my software deals with hardware devices. Do I simulate those devices in code (and if so, do I need tests to test my device simulator)? Or do I somehow gather many MB of data and keep it stored somehow for testing? I'm thinking these are simple questions for a testing veteran, but nobody I work with is that. And getting permission to spend time learning is not easy in a bad economy. :)
genericsteele | 12 years ago
2) is the entire reason why I'm writing the book. Building a testing habit isn't as simple as following some basic tutorials. It's a fundamental shift in how you think about writing code, and it can't be summed up in a 5-minute blog post, like you say.
To address your software, the answer is a little situational. For code that depends on device data, you simulate only the minimum device data your code needs to work. If a method only needs a device id, you provide only a device id. If a method generates a report, you provide all the data that report needs.
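A minimal sketch of that idea in Python, using a stub in place of real hardware. Everything here (`generate_report`, the field names, the ids) is invented for illustration; the point is just that the test provides only the two attributes the code actually reads, rather than a full device simulator or a multi-MB capture file:

```python
from unittest.mock import Mock

# Hypothetical function under test: it only touches the device's
# id and its readings, so that's all a test should have to provide.
def generate_report(device):
    return {
        "device_id": device.device_id,
        "total_readings": len(device.readings),
        "max_reading": max(device.readings),
    }

def test_generate_report():
    # Stub out only the attributes the report reads.
    device = Mock()
    device.device_id = "dev-42"
    device.readings = [1.0, 3.5, 2.2]

    report = generate_report(device)
    assert report == {
        "device_id": "dev-42",
        "total_readings": 3,
        "max_reading": 3.5,
    }
```

If a stub like this gets painful to set up, that's usually a sign the code under test depends on more of the device than it needs to.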
Another approach would be to try to group the test data together into common traits. I don't know enough about your software to come up with some examples, but you likely don't need to collect test data for every single device, but instead data that is representative of every single device.
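One way the "common traits" idea often plays out is a small table of representative samples, one per trait, that every test loops over. This is a hypothetical sketch (the trait names, `count_readings`, and the data are all made up), not something specific to your devices:

```python
# One small, hand-picked sample per device trait, instead of
# captured data from every single device.
REPRESENTATIVE_DEVICES = {
    "low_power": {"device_id": "lp-1", "readings": [0.1, 0.2]},
    "high_rate": {"device_id": "hr-1", "readings": [5.0] * 100},
    "no_data":   {"device_id": "nd-1", "readings": []},
}

# Invented function under test.
def count_readings(device_data):
    return len(device_data["readings"])

def test_all_traits():
    expected = {"low_power": 2, "high_rate": 100, "no_data": 0}
    for trait, data in REPRESENTATIVE_DEVICES.items():
        assert count_readings(data) == expected[trait]
```

When a new device turns out to behave differently from every existing trait, you add one more representative entry rather than another pile of raw captures.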
If you want to find me on the twitter (@genericsteele), we could keep this conversation going. I'm interested in how you see the world of testing, and just this thread has helped me think of new perspectives. I would love to figure out how you could overcome the obstacles your work is throwing at you.