Pretty interesting that according to him, Airbnb didn't really have a functioning testing infrastructure only a year ago. So you really can hit a billion dollars in valuation without testing :)
Google didn't really have a culture of testing until around 2005-2006. At that point they were public and had a $100B+ market cap. I'm sure there are other public companies out there who don't do automated testing at all.
The technical infrastructure and code quality of a company are secondary factors in its success. The biggest influence they have is in the ability to attract and retain brilliant engineers, because most brilliant engineers don't want to work in a place where they're just treading water with bugfixes (which is the situation you eventually get into without automated testing). But the mechanism is that brilliant engineers write features that your competitors can't match, not that testing itself will make you successful.
You don't really have a test culture unless you're 'allowed' to take time to debug errors that happen inside the test suite rather than the product itself. That's the difference between writing tests and really having a TDD culture.
I mean those issues where the test suite requires maintenance but the actual code base or product is "working".
Everyone has had them. I guess that's what the article means by 'great pain'.
Recently I've heard a few non-engineers use "continuous integration" as a way of charging clients more, per the usual buzzword rules.
Interesting that you guys went with Solano. We used them back when they were TDDium and found the experience to be very bad: notable downtime, a poor interface, and a crappy configuration experience (getting environment variables into it was very annoying). We've been happy with Codeship ever since.
What reasons did you have for choosing Solano? How has your experience been?
As someone running a company with around 50 people and a quickly growing codebase, introducing testing as "a bar so low you can trip over it" is an amazing way to articulate exactly how I feel about this.
At first we did this by introducing tests, runnable locally, on our most complex and commonly used code. Moving on to pull requests and a more robust CI setup to enable more regular deploys is currently the task at hand.
Like testing, PRs are one of those things that seems like it will slow you down, but once you learn how to use them they can actually increase velocity (among many other benefits). It's been awesome to watch how good people have gotten at collaborating/communicating via PRs at Airbnb.
Does anybody have recommendations for where/how to start learning best practices for TDD?
As (nominally) top nerd at a tiny startup (2 engineers), I feel like I should set a precedent sooner rather than later for testing. This is currently not possible since I don't know anything about it, so any resources would be appreciated :)
Edit: Primarily looking for resources involving Node.js and client-side testing of a jQuery-based website.
I've been setting up client-side testing with grunt+browserify+karma+jasmine lately. It's pretty freaking powerful. But none of the usual suspects for mocking http.get requests work well with it (sinon fails to execute in a browserify test environment; nock expects to be running in a Node environment with the ClientRequest object available).
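A fallback when the off-the-shelf mocking libraries won't load in your environment is plain dependency injection, which works in any runner. This is only a sketch under assumed names (`fetchUser` and `makeFakeGet` are hypothetical), not a sinon/nock replacement:

```javascript
// The code under test takes its HTTP function as a parameter,
// so tests can pass in a hand-rolled fake instead of patching globals.

// Production code receives the transport as an argument.
function fetchUser(get, id, callback) {
  get('/users/' + id, function (err, body) {
    if (err) return callback(err);
    callback(null, JSON.parse(body));
  });
}

// Test helper: a fake `get` that records calls and returns canned data.
function makeFakeGet(cannedBody) {
  function fakeGet(url, cb) {
    fakeGet.calls.push(url);
    cb(null, cannedBody);
  }
  fakeGet.calls = [];
  return fakeGet;
}

// Usage in any runner (jasmine, tape, plain node):
var fakeGet = makeFakeGet('{"id": 7, "name": "ada"}');
fetchUser(fakeGet, 7, function (err, user) {
  console.log(fakeGet.calls[0]); // "/users/7"
  console.log(user.name);        // "ada"
});
```

In production you'd partially apply the real transport (e.g. `$.get` or `http.get`) once at wiring time, so call sites don't change.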
I wonder if the guys are doing code reviews for each PR along with making sure build is green. In our team, we've been doing code reviews for about three years now and can't imagine our workflow without them.
Yes, we're absolutely doing code reviews for each PR. Should have mentioned it in the post. Our general policy is to have engineers merge their own PRs, but only after at least 1-2 people have reviewed them (and obviously more for sensitive changes). The dialog that takes place in PRs helps enforce (and sometimes define) our style standards, teach engineers the idioms of languages they may be new to, and ensure that we're always moving our codebase in the right direction. (They're also a great place to teach people how to write cleaner and less brittle tests!)
Ironically, AngelList posted a slideshow the other day about how they don't use tests because they increase development time and make it hard to be agile. They instead iterate quickly, pushing out new versions and fixing rapidly as things come up.
If you can test everything manually with confidence that every incremental change breaks nothing you've ever thought about prior to the present, I suggest you work on more complicated things. Otherwise, you're mistaken.
The irony is that AngelList allegedly generates funding for real engineering teams.
Wow. I'm surprised at how big they were able to scale, while still pushing most commits directly to master and having a test suite that took 1hr to run.
Still not disabled! Although at this point we're so habituated to PRs as a team that in practice it never happens. We did finally disable force pushes to master, though. Don't miss those one bit.
logicallee|12 years ago
(might be a true statement.)
mseebach|12 years ago
Get, read, and understand Kent Beck's "Test-Driven Development: By Example". The rest will follow naturally.
2mur|12 years ago
[1] https://github.com/substack/tape
[2] http://www.catonmat.net/blog/writing-javascript-tests-with-t...
jimejim|12 years ago
http://www.sustainabletdd.com/
One aha moment for me that's talked about here is to treat your tests as a form of documentation and specification of how to use the system. They talk about how you should even write some basic tests to confirm enumerations and constants in the system, as a way to be clear about their use.
You don't always have to be that thorough, but the mental shift from test to specification was helpful for me.
Edit: Unfortunately, if you download this podcast it's a bit out of order, so you'll want to read the blog and use it as a guide for the order in which to listen. They're in the process of writing a book about TDD, and this blog is part of that process.
davidjnelson|12 years ago
Misko Hevery's Testability Explorer blog (he also happens to be the creator of AngularJS!) is also a great resource: http://misko.hevery.com/category/testability-explorer/