I'm more interested in knowing the actual technical details of the build and testing process than in the "Task-List" approach to software development, because that part can be handled by any software project management tool (even JIRA).
What's missing in this article is the whole Continuous Delivery technical aspect of it.
What do you guys use to build the NodeJS app?
What do you guys use to test the NodeJS app?
What do you guys use to check the code coverage of the NodeJS app?
What do you guys use to test the front-end?
What is the automated testing strategy?
How do you store build artifacts, handle schema migrations (if using an RDBMS) or different model versions, and how do you roll back (what's the rollback strategy)?
I hope Doug writes a more technical post about the multi-client build/release process. We have unit tests for the whole API and very few automated tests for the front-end. Client builds are stored in S3. We use MongoDB and will do backfills if necessary (pretty rare). Rolling back the client is just pointing the stable branch to another build.
I've sent this to a couple of people around our multinational already. While I get what you are interested in, I have to commend them for this excellently written article. Would that our actuarial models were maintained with anything approximating this level of sophistication.
Fog Creek are the _kings and queens_ of dogfooding. Spolsky, you sure have nurtured a group of very loyal team players. I applaud you all. It must be really nice to work at a place where the love of the process and the product are both so strong.
In my opinion if there's one thing a reader should take away from this it should be that Single Page Apps and separation of server and client are The. Best. Thing. Ever. From the start, design your system this way.
While I don't think client-side apps are the only way of doing things, I do think they will become much more popular once browsers get a little better.
I love the separation of concerns that is possible with JS apps - you can have one team working on the API and one on the interface and the only place they really need to communicate is in the API documentation. Once it's all done, you've already got a fully functional and secure API (because it wasn't an afterthought) that can be used for other clients.
What does the article say to advance "single page apps" as the "best thing ever" over other application models that use a clean interface and separation of concerns between client and service?
>In my opinion if there's one thing a reader should take away from this it should be that Single Page Apps and separation of server and client are The. Best. Thing. Ever
Those are two orthogonal things. All my sites have a complete separation of presentation from code and access a nice API to get data. Including the ones that are purely HTML and have no javascript at all.
Single page apps are good for things that are actually apps. Except that I want to leave the app open, and several other apps, and not have it interfere with my normal browsing. Until browsers realize this, it is actually pretty irritating to use browser apps.
Everything I've ever heard about Fog Creek indicates that they take their people seriously. Which probably makes them more likely to take their work at Fog Creek seriously.
Funny how that works, huh? A lesson many, many other companies could profit from.
> The Trello API is well-written, has no bugs, and is totally rock solid. Or at least it doesn't change very often. That means we can put out new clients all the time without having to update the API. In fact, we can have multiple versions of the website out at any given time.
A very counter-intuitive result: most people would not expect a stable API to be the thing that lets you iterate quickly!
I've used Trello & FogBugz over the years and we've even modeled some of our software after some of the practices they've written about. Amazing stuff!
On the Trello Android team we have a similar workflow to the web client and server developers. We merge into a shared development branch when we have a complete feature or bugfix. With git's --no-ff this allows us to see when a new feature was implemented, and when bugs pop up we have a clear list of intersections where they could have been introduced. Our workflow is roughly based on an excellent post by Vincent Driessen, http://nvie.com/posts/a-successful-git-branching-model/.
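The --no-ff part of that workflow can be sketched as follows; the repository setup and branch names here are made up for illustration.

```shell
# Minimal sketch of the --no-ff workflow described above (hypothetical
# branch names). --no-ff forces a merge commit even when a fast-forward
# would be possible, so the history records exactly when a feature landed.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial commit on the shared branch"

git checkout -q -b feature/example-card      # start the feature branch
git commit -q --allow-empty -m "implement the feature"

git checkout -q -                            # back to the shared branch
git merge --no-ff --no-edit -q feature/example-card

# The resulting merge commit has two parents, marking the intersection
# where the feature was introduced.
git rev-parse HEAD^2 >/dev/null && echo "merge commit recorded"
```

Without --no-ff, git would fast-forward here and the feature's boundary would disappear from the history.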
With Subversion, you have a working copy with changes and then you do "svn update", and Subversion merges the upstream changes into your working copy. (But you don't normally call this a merge.)
With Git, you have a working _repository_ with changes and then you do "git pull", and Git merges the upstream changes into your repository.
From the user's perspective, it looks about the same. But the Git merges are safer than the Subversion updates, because if the Subversion update messes up your working copy, you're stuck. But with Git, you always have (1) the commits you made locally, (2) the commits you just pulled from upstream, (3) the merge that was done by "git pull". And (1) and (2) can't get messed up, only (3) can get messed up. But you still have both your version (1) and their version (2) to go back to, so you have lots of chances to fix it.
Think of Git branches and the corresponding merges as being like Subversion updates with backups of the previous local working copy.
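That safety net can be shown concretely: after a pull, ORIG_HEAD still points at your pre-merge state (1), so a bad merge (3) can simply be thrown away. A self-contained sketch, with made-up repository and commit names:

```shell
# Sketch of recovering from a bad "git pull" merge, per the explanation
# above. Repo and commit names are hypothetical.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# An "upstream" repo and a local clone that then diverge.
git init -q upstream
git -C upstream config user.email up@example.com
git -C upstream config user.name up
git -C upstream commit -q --allow-empty -m "base"
git clone -q upstream local && cd local
git config user.email me@example.com && git config user.name me

git commit -q --allow-empty -m "my local commit"                       # (1)
git -C ../upstream commit -q --allow-empty -m "their upstream commit"  # (2)

git pull -q --no-rebase --no-edit origin                               # (3) the merge
mine=$(git rev-parse HEAD^1)      # my pre-merge version, still reachable

# Suppose the merge (3) turned out badly: undo it and go back to (1).
git reset -q --hard ORIG_HEAD
test "$(git rev-parse HEAD)" = "$mine" && echo "local commits recovered"
```

Nothing from (1) or (2) is lost in the reset; only the merge commit itself is discarded.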
That's how we work on Brackets[1], and I think a lot of projects work that way. It depends what kind of version control system you use. It's pretty easy to do this with the "GitHub flow".
Yes, more or less a new branch for each fix. But with git and Kiln, it's really not a drag at all; the merges are generally really easy and mostly performed by our release manager when the card makes it to 'Ready to Merge'.
The alternative, having everyone work on the master branch, is a bigger drag, because it introduces all sorts of coordination problems like "is the master branch in a state to be deployed? are your changes done?".
I recently switched to this approach and it works very well. No more "I want to push out this quick fix, but I have to wrap up this other thing first so I don't break the build".
With GitHub it is especially nice. I push to a remote branch, which I can see when I go to the project page. When I click merge, it shows whether or not the Travis CI build passed. If it looks good, I click to auto-merge and it's all set. It's possible to automate that part away as well and have the whole thing merge and deploy on push if the build passes, but I'm not quite ready to take that plunge yet (too easy to botch a production release, IMHO).
Yep. My team follows a similar workflow with Github - every feature, bugfix, etc is developed on a branch off of master, then merged back into master via pull request when it is ready for deploying.
It's a nice workflow, changes are very visible to the entire team and well summarized (by the commit history and any comments/discussion on the pull request itself). Making a new branch is a one-line operation (two if you count hooking it up to the remote), so no, I've never personally felt that was a drag. Sometimes it feels a bit silly to create a branch for a one or two line fix, but the visibility to the team is worth it.
I agree. I would pick one "dirty" branch with continuous builds and tests any time over several branches. I have worked with several branches too, and it is always messy: lots of confusion and too much overhead (including the release manager role).
I love trello... but I don't like the branching model... :)
That's how my team at work does things as well. It actually speeds things along much more quickly than doing a bunch of fixes in one branch, because if one of your fixes is held up by QA, all the others can still be merged independently.
What I'm interested in is the mechanics behind how they know where to send a user based on their channel (beta/stable/alpha). We wanted to do something like this, but we couldn't figure out how to route users to the right app server using either AWS ELBs or nginx proxying ... admittedly we didn't really spend a lot of time thinking about it though.
We do that decision-making inside the web servers. It only affects the client you get, so when someone requests a page we do a lookup on the logged-in member to decide which channel they get. API reqs don't care what channel you're on. No need for any fancy nginx proxying/etc.
I'd love to hear—maybe I missed another blog post—why they went with the single release manager, where only one person can merge and deploy. What happens if Doug is sick or on vacation? Or even just in a meeting? What is a typical amount of time for a change to sit in "ready for merge" or merged but not deployed?
The primary benefit is that it creates a single point of communication for the developers, the designers, and the QA. It also helps with prioritizing changes that could be conflicting and, in the same vein, helps prevent bad releases or merges. And, of course, anyone can do this job. If I'm not available, someone picks up the slack. I just volunteered because I was interested and available.
All that said, I think that letting every developer deploy would not be a bad idea at all. The problem is that our team is too big to do that without creating more robust deployment tools and too small to dedicate enough time to doing so. My hope is that one day we can get there, though.
This task used to be more distributed over the server-side devs, but having a single release manager who does this most of the time and other people who do it occasionally and/or when the release manager isn't working seems to work well.
I don't really see why a developer couldn't also merge changes into "the official Build repo". Or is the "release manager" just a term for the person deciding what gets released when?
AWS. The move came about when Hurricane Sandy took down the data center's backup generator fuel supply and much of the team spent days bucket-brigading diesel fuel up 17 flights of stairs.
To add to what bobby said, changes to the API are far more likely to be additions than actual behavior changes (except for bug fixes, which of course all clients handle fine). This makes it easy; old clients just ignore new routes/arguments.
If there are necessary API changes, we just need to push updates to the API ahead of time. The API needs to be stable and backwards compatible anyway for the mobile apps (and all third party apps).
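A tiny sketch of why additive changes are backwards compatible (the response shapes here are made up, not Trello's actual API): an old client reads only the fields it knows about, so a new field passes through unnoticed.

```javascript
// Hypothetical response shapes, before and after an additive API change.
const v1Response = { id: 'card1', name: 'Fix login bug' };
const v2Response = { id: 'card1', name: 'Fix login bug', labels: ['bug'] }; // new field

// An "old" client that only picks out the fields it was written against.
function renderCard({ id, name }) {
  return `${id}: ${name}`;
}

// The old client renders both responses identically; the new field is ignored.
console.log(renderCard(v1Response)); // card1: Fix login bug
console.log(renderCard(v2Response)); // card1: Fix login bug
```

Removing or renaming a field would break this client, which is why behavior changes need the advance server push described above.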
Does anyone know how the server determines which channel the client should be using? Are they doing this check at the apache/nginx level, or on the server right after the user is authenticated, and before the client code is sent?
Because we are only delivering multiple clients (not multiple servers), we only have one server version running at a time. Then, after authentication, we decide which client version you will receive based on your chosen channel and, when there are multiple distributions within the channel, a hash of your member id.
In Trello's case they have a browser app, iOS, and Android. Having one well-designed API makes these easier to build and maintain. It appears to be working pretty well for them. I use Trello in the browser, on an iPhone, and on an iPad, and they all work together very seamlessly.
disclaimer: my entire life is run via trello.
Good post, and an entertaining read.
[1]: https://github.com/adobe/brackets
Need it for compliance documentation.
The "no bugs" claim indicates to me this wasn't written by a technical person. Or at least, not a very technical one.