Ugh. Pull requests again? OK, look, unless your team is so large that you can't effectively communicate otherwise, pull requests are just friction. Use feature branches, then merge them when they are ready. Pretending that your two-person team is a huge open source project with many distributed part-time and full-time developers is cargo cult at its worst.
I couldn't disagree with this more. Pull requests provide a quick and relatively painless code review tool. Unless you are suggesting that code review is a bad practice, in which case I'm pretty well dumbfounded, as I think it's a requirement for intelligent modern development.
We are a trusting team and don't do pull requests with our closed source repos either.
To avert chaos it's much better to have an orderly branching model in place, e.g. feature branches! My fellow partners made us go all "git flow" last year and the result has been a really tidy and satisfying workflow:
http://nvie.com/posts/a-successful-git-branching-model/
Atlassian's SourceTree also helps us with maintaining "the flow":
http://www.sourcetreeapp.com
https://www.atlassian.com/git/workflows
Git extensions and a screencast on how to set it up on OS X:
https://github.com/nvie/gitflow
http://build-podcast.com/git-flow/
Or don't use branches at all. Develop on trunk/master with continuous integration. It reduces work-in-progress, exposes conflicts sooner and has a host of other knock-on benefits.
Meh, it doesn't have to be friction. If the goal is to ensure your other team member sees the changes you're making before they're merged, a pull request both accomplishes that and gives you both a place to put some notes related to that process.
I think you're ascribing too much ceremony to this. It's useful to have something written down for any discussion about these changes, and a pull request is as good of a place as any for that. Plus it ensures everything is all set up to merge once you're both satisfied with the changes.
While I'm often lazy about this, I think there are advantages to doing this once your team grows beyond a solo developer. Encouraging your fellow developers to read (and at least understand, if not internalize) your code has some bus-proofing to it, and I don't think it's that much friction if done well.
The "12 Factor App" manifesto http://12factor.net/ is in the same vein as this, perhaps a bit more in-depth.
I like the idea of these guides generally. I think it'd be more valuable if they linked to example production code that followed the principles however, since there's no substitute for the real thing.
Come to think of it, I've never seen anybody write up any kind of index of (for example) Github projects that exemplify good design patterns for people to look at. Or some kind of recommended code reference list.
I like that idea too, but I suspect that there are pretty much zero non-trivial projects that actually follow best practices like these consistently. Maybe I'm wrong.
I disagree with "Keep the master branch deployable at all times." Both my CSS framework Min (http://minfwk.com) and several other popular GitHub projects use the strategy of 'only use one branch, and use tags to mark stable versions'.
Min's only branch (gh-pages, so we can serve the site with GitHub) is usually "unstable" (in the sense that a CSS framework can be unstable.) If someone wants a stable release, that's what Git's tag system is for.
I think the idea is sound, but the reasoning isn't. Keeping master deployable at all times forces breaking changes off into feature branches so unrelated pieces of functionality can't interfere with each other's release schedules.
Keeping master deployable for bug fixes doesn't make sense to me, though. Are you going to deploy new features sitting in master because something else had a bug? Just go look up the last release tag, make your bug fixes there, then merge them into master.
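The "fix from the last release tag" flow described above can be run end to end in a throwaway repo. All the names below (v1.2.0, hotfix/1.2.1) are illustrative, not from the thread:

```shell
#!/bin/sh
set -e
# Build a toy repo with one release tag and an unfinished feature on master.
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "v1" > app.txt
git add app.txt
git commit -qm "release 1.2.0"
git branch -M master
git tag v1.2.0                            # last stable release
echo "wip" >> app.txt
git commit -qam "unfinished feature on master"

# Branch from the release tag, not from master, so the wip feature
# never enters the patched release.
git checkout -q -b hotfix/1.2.1 v1.2.0
echo "fix" > fix.txt
git add fix.txt
git commit -qm "bug fix"
git tag v1.2.1                            # ship the patched release
git checkout -q master
git merge -q --no-ff hotfix/1.2.1 -m "merge hotfix back into master"
```

The merge at the end is the step that keeps master from drifting away from what was actually shipped.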
I've been bitten too many times by unstable features holding up releases on master (both my features and other team members'). Feature branches do a wonderful job of solving this, and the required merge back into master gives you a nudge to prevent scope creep.
A more appropriate title could be "list of must read articles for modern web development".
I wouldn't call the part about configurations (in the post and in the 12 factor app reference) a best practice. Using environment variables is a hack that has negative side effects including security side effects.
This statement is just silly: "A good goal is that the application can be open sourced at any time without compromising any credentials." It's silly because the use of environment variables doesn't prevent anybody from putting them in a shell script that gets committed to git...
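True; whichever mechanism you pick, keeping the real values out of the repo takes the same discipline. One common pattern is to commit a template with placeholder values and ignore the real file — a minimal sketch, with filenames and values that are purely illustrative:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
# Template with dummy values: safe to commit and to open source.
printf 'DB_PASSWORD=changeme\n' > .env.example
# Real credentials: stays on the machine it was created on.
printf 'DB_PASSWORD=s3cret\n' > .env
# Make sure the real file can never be committed by accident.
printf '.env\n' > .gitignore
git add .   # stages .env.example and .gitignore, skips the ignored .env
```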
I think they made a mistake by making the "gets configuration parameters from the environment" specific to a UNIX/system environment. You can accomplish the same effect in a much more elegant manner using a tool like etcd or consul.
But the important part is really that the deployable unit pulls its configuration from the environment where it's deployed. There are a ton of ways to accomplish this...environment variables are one way, etcd/consul is another, you can use something language-specific like JNDI or you can even use a file with configs in a well-known location, but you really need to be deploying the same artifact to QA, E2E testing, production and whatever other environments you might have.
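A minimal sketch of that idea with plain environment variables (the variable names and defaults are assumptions, not from the article): the same script runs unchanged in every environment, and each environment supplies its own values.

```shell
#!/bin/sh
# ${VAR:=default} keeps the deploy environment's value when it is set,
# and falls back to a development default otherwise.
: "${DATABASE_URL:=postgres://localhost/devdb}"
: "${LOG_LEVEL:=info}"
echo "db=$DATABASE_URL log=$LOG_LEVEL"
```

In production you would export DATABASE_URL before starting the process; the artifact itself never changes between QA, E2E and production.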
You're reversing the "goal" and the "practice". It's not that environment variables exist so you can "open source without compromising credentials".
The goal is "be able to open source at any time without compromising any credentials"; one method (the practice) is to use environment variables. Environment variables being a poor choice, in your opinion and somewhat in mine, doesn't make the goal poor. It's a laudable goal.
First of all, use git for version control. It is modern, works great and the majority of developers are either comfortable in using it or want to start using it.
It's a pain in the arse to learn, and overcomplicated for 90% of the cases it is used in. How many people actually need distributed version control?
(I use Git. It is powerful and useful, but the justification for using it in the article isn't really great. No mention of Mercurial or other version control systems).
The fact of the matter is that love it or hate it, Git is now the most widely used VCS on the market in corporate settings (see e.g. http://www.itjobswatch.co.uk/). More importantly though, it is also the de facto standard for version control -- the lingua franca of source code management, communication and collaboration. Subversion and TFS still have significant market share, but everything else has pretty much fallen off a cliff in the past couple of years.
Also, many of Git's most powerful features have nothing to do with being distributed. Easy branching and merging, private versioning, bisect, rewritable changesets, stash, and the ability to cherry-pick individual changes within a file to commit, back out or stash could all theoretically be built into a centralised system.
The benefit of standardizing on a tool that is "good enough" is worth it imo. Learning git is easier than learning svn, darcs, hg, perforce and git and juggling between them when you want to submit patches to open source projects or get new clients.
The world's inconsistent internet connections are a great reason to require distributed version control. I'm often trying to work from a location without a reliable connection to the internet, be it a plane, train or bus, a coffee shop with spotty internet or a meetup. Requiring a connection to a server kills productivity in those situations.
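That's the point of the distributed model: everyday history operations need no server at all. This runs entirely offline (the repo contents are illustrative); you push the accumulated commits whenever a connection comes back:

```shell
#!/bin/sh
set -e
# A local repo is a full repository: commits, branches and history all
# work with no network access whatsoever.
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "one" > notes.txt
git add notes.txt
git commit -qm "commit on the train"
echo "two" >> notes.txt
git commit -qam "another commit, still offline"
```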
Perhaps there is something to be said for Mercurial, but unless you were previously on Subversion it's certainly not that it's easy to learn.
All of this really covers the primary thing I preach when working with new devs. "You shouldn't be the only one who knows how to set this up and run it."
One thing that might be missing is good code documentation. Using DocBlock for PHP, for example. Ideally this is maintained in code, but that can get bloaty. At the very least, a git repo of markdown files. Even the built-in github wikis would suffice.
>Serve these through a CDN that is optimised for serving static files to ensure high transfer speeds and therefore increased user happiness.
I wasn't aware this was that common, or even always considered the best practice. Is this really the best practice for any website? How exactly is a CDN more optimized than nginx on a dedicated server with a 1 Gbps connection?
I personally don't find much use for a CDN when dealing with a typical website's CSS, JS, images until you have very large traffic. You can concat and minify your CSS and JS files, and add a cache control header to all static content. That should be good enough until you get very large traffic.
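A minimal nginx sketch of that cache-control approach (the path and lifetime are assumptions, not from the article):

```nginx
# Long-lived caching for static assets; use fingerprinted filenames
# (e.g. app.3f2a1b.css) so a new deploy busts the cache naturally.
location /assets/ {
    expires 1y;
    add_header Cache-Control "public";
}
```

With headers like these, repeat visitors fetch the files from their browser cache rather than your server, which covers most of what a CDN buys you at small scale.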
But like anything, it depends on the use case. If the product is primarily a file server like Dropbox, then maybe using a CDN early makes sense. A smallish number of users spread across the world might be another use case.
The very nature of the kind of content you're serving over a CDN seems to indicate that unless you're extremely high traffic, the benefits are few: most of your clients are going to grab the static content from you once at a cost of a few hundred extra ms, and then rarely need it again.
I think having CDN as a basic requirement for all projects is... ill-considered. Remember, using a CDN is giving away data about your users' browsing practices without their consent.
Honest question: what are the benefits of using environment variables over an actual configuration file (that is obviously not added to version control)?
Unless your environment demands it, it doesn't matter. In fact, it can be a bit of a pain in the ass to implement on your own, if you are not using Heroku or some such. The main point there is to not put secrets into your git repo. How you accomplish that for the most part doesn't matter.
But most importantly, it lends itself to dynamic configuration when using etcd or zookeeper.
No idea how this is getting so many points. There's absolutely nothing of any practical interest here. Use version control and documentation... End of post. I'd say these are more industry standard practices than leading edge.
There will always be a new freshman class. Every year, someone joins HN (or the community at large) knowing next to nothing. We should welcome those people, not make them feel bad for being new.
Because the basics have to be pointed out again ... and again.
I think the upvotes don't primarily mean "omg I never thought to do this" but rather "will someone please staple this to the foreheads of all the clueless devs I've had to clean up after."
> I'd say these are more industry standard practices than leading edge.
Yeah, that's sort of what "best practices" are all about. They are for teaching outsiders what insiders have learned through practice. The intended audience is everyone who doesn't already know this.
If you consider the article of no practical interest for a seasoned insider, that's about the highest praise you can give.
It's best practices, not new practices, and a very clear, concise write-up at that. I wish I'd read this 6 months ago, rather than learning it the hard way.
>First of all, use git for version control. It is modern, works great and the majority of developers are either comfortable in using it or want to start using it.
Not to mention those points were mainly subjective. Don't get me wrong, I like and use Git myself but statements like "Use this, most people use it so you must use it too!" are just horrible.
True, although you'd be surprised to see how many people still fail to provide proper docs, or to use git for source control, in 2014. I agree with others in here: maybe the title is just misleading more than anything else.