GitLab CEO here. I agree that GitFlow is needlessly complex and that there should be one main branch. The author advises merging in feature branches by rebasing them on master. I think it is harmful to rewrite history: you will lose cherry-picks, references in issues, and CI results for those commits if you give them a new identifier.
The power of git is the ability to work in parallel without getting in each other's way. No longer having a linear history is an acceptable consequence. In reality the history was never linear to begin with. It is better to have a messy but realistic history if you want to trace back what happened and who did and tested what at what time. I prefer my code to be clean and my history to be correct.
I prefer to start with GitHub flow (merging feature branches without rebasing) and describe how to deal with different environments and release branches in GitLab Flow: https://about.gitlab.com/2014/09/29/gitlab-flow/
The obsession of git users with rewriting history has always puzzled me. I like that the feature exists, because it is very occasionally useful, but it's one of those things you should almost never use.
The whole point of history is to have a record of what happened. If you're going around and changing it, then you no longer have a record of what happened, but a record of what you kind of wish had actually happened.
How are you going to find out when a bug was introduced, or see the context in which a particular bit of code was written, when you may have erased what actually happened and replaced it with a whitewashed version? What is the point of having commits in your repository which represent a state that the code was never actually in?
It always feels to me like people just being image-conscious. Some programmers really want to come across as careful, conscientious, thoughtful programmers, but can't actually accomplish it, so instead they do the usual mess, try to clean it up, then go back and make it look like the code was always clean. It doesn't actually help anything, it just makes them look better. The stuff about nonlinear history being harder to read is just rationalization.
I'm starting to think that part of the problem we are facing at the moment is that feature branching itself can be exceptionally harmful.
I see statements like "The power of git is the ability to work in parallel without getting in each other's way" and get really worried about what people are trying to achieve. I want my team's code to be continuously integrated so that problems are identified early, not at some arbitrary point down the line when two features are finished but conflict with each other when both are merged. We seem to be reversing all the good work the continuous integration movement gave us; constant integration makes integration issues smaller and easier to fix.
I personally prefer to use toggling and techniques like branch by abstraction to enable a single mainline approach. Martin Fowler has a very good article on it here http://martinfowler.com/bliki/FeatureBranch.html
If you're using CI testing results and tying them to particular commits, you end up with the same problem whether you merge or rebase.
If you test a commit and it passes, and then merge that commit into master, the merge may have changed the code that the commit modified, or something that the commit's code depended on. The green flag you had on the commit is no longer valid because the commit is in a new context now and may not pass.
If you rebase the commit onto master, you're explicitly changing the context of the commit. Yes, you get a different SHA and you're not linked to the original CI result anymore, but that CI result wasn't valid anymore anyway. This is exactly the same situation that the merge put you into, but without the false assurance of the green flag on the original test result.
As many others have noted, rebasing is only recommended on private branches to prepare them for sharing on a public branch. If you're running CI it's probably only on the public branches, so rebasing wouldn't affect that. But if you're running CI on your private branch too, then you're going to want to run it after rebasing onto the public branch and before merging into the public branch. That gives you assurance that your code works on the public branch before you share it. Again, if you're using a merge-based workflow you'd have to do the same testing regardless of your earlier test results.
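The rebase-then-test-then-merge sequence described above can be sketched as runnable commands. This builds a throwaway repo so the whole thing executes anywhere; the branch names (`main`, `feature`) and the test-suite step are illustrative assumptions, not something the commenter prescribes verbatim.

```shell
# Throwaway repo so the sequence is runnable end to end.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -q -b main
echo base > app.txt; git add app.txt; git commit -qm "base"

# A private feature branch diverges while main moves on.
git checkout -q -b feature
echo feature >> app.txt; git commit -q -a -m "feature work"
git checkout -q main
echo mainline > other.txt; git add other.txt; git commit -qm "mainline moves on"

# 1) Replay the private branch on the public tip...
git rebase -q main feature
# 2) ...this is the point to re-run your CI/tests against the new context...
# 3) ...then share it; the merge is now a plain fast-forward.
git checkout -q main
git merge -q --ff-only feature
git log --oneline
```

Because the rebase already moved `feature` onto the tip of `main`, the final merge is a fast-forward and the history stays linear.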
There is nothing wrong with rebasing a feature branch imho. Feature branches should be considered ephemeral. But it probably depends on your team and project size.
> It is better to have a messy but realistic history if you want to trace back what happened and who did and tested what at what time. I prefer my code to be clean and my history to be correct.
^^^ Couldn't agree more.
However, I don't know why people want to avoid rebasing feature branches. Rebasing feature branches means that you only have to resolve the conflict once and have a clean history for your release branch. Granted, it works well in my team where only a single developer owns a given feature branch.
Sytse, do you mind if I ask a slightly OT question?
How does GitLab store the code-review data? Is it stored in the (or a) git repo? Is the feature compatible with rebasing feature branches before merge?
Also, pricing: I only just noticed that your pricing was per YEAR, not per MONTH. Most bootstrap-pricing-page software is priced monthly and the user/year text is lowlighted. This has to be costing you sales.
Looking at the recent history I can see how you'd come to like it. You seem to mostly be doing merges or documentation changes, which probably means you don't have to do a lot of history spelunking to fix bugs caused months ago.
Are you sure your developers feel the same as you do? Are you sure they're willing to be open enough to you about their misgivings?
At our offices we use pull requests and rebase as a combined workflow to get mostly linear history. Before issuing a PR we get the latest master and rebase our branch onto it, so that all commits in our branch end up with approximately the same timestamp, and then we issue the PR. This creates a nicely linear history for the most part.
The only evil is the willingness to force push the updated history over our branches before the PR goes up. But usually no one shares branches. Or the collaborators on a branch are few, and they agree on when to rewrite history.
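When a rebased branch does have to be force-pushed over a shared ref, `--force-with-lease` is a safer default than plain `--force`: it refuses the push if the remote has moved since you last fetched. A minimal sketch (the repo layout and commit messages below are made up for illustration):

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"        # stand-in for the shared remote
git clone -q "$work/origin.git" "$work/clone" 2>/dev/null
cd "$work/clone"
git config user.email dev@example.com
git config user.name dev

echo one > f.txt; git add f.txt; git commit -qm "one"
git push -q origin HEAD

git commit -q --amend -m "one, reworded"     # rewrite local history
# Safe: succeeds only if the remote is still where we last saw it.
git push -q --force-with-lease origin HEAD
git log --oneline
```

If a collaborator had pushed in the meantime, the `--force-with-lease` push would be rejected instead of silently discarding their commits.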
> *In reality the history was never linear to begin with. It is better to have a messy but realistic history if you want to trace back what happened and who did and tested what at what time. I prefer my code to be clean and my history to be correct.*
Basically my beef is the idea that never rebasing is "true" history, and rebasing gives you "corrupt" or "whitewashed" history. In fact, the only thing you have weeks, months, years after pushing code is a logical history. It's not as if git automatically records every keystroke or every thought in someone's head—that would be an overwhelming amount of information and difficult to utilize anyway—instead it's all based on human decisions of what to commit and when. Rebasing doesn't "destroy" history, it's just a decision of where a commit goes that is distinct from the time it was first written, but in fact you lose almost no information—the original date is still there, and from that you can infer more or less where it originally branched from.
"But," you say, "surely complete retention of history is preferable to almost-complete retention?" Well, sure, all else being equal I would agree with that. But here's the crux of the issue: merging also loses information. What happens when you merge two conflicting commits is that bugs are swallowed up in merge commits rather than being pinned down to one changeset. This is true whether it is a literal merge conflict that git detects, or a silent logic error that no one discovers until the program blows up. With two branches merging that are logically incompatible, whose responsibility is it to fix their branch? Well, whoever merges last of course, and where does that fix happen under a never-rebase policy? In that single monstrous merge commit that cannot be reasoned about or bisected.
But if you always rebase before merging to master, then the second integrator has to go back and correct each commit in the context of the new branch. In essence, they have to fix their work for the current state of the world, as if they had written it today. In this way each tweak is made where it is visible and bisectable instead of squashed into an intractable merge commit.
I get that there is some inconvenience around rebasing coordination and tooling integration (although GitHub PRs handle it pretty well), but the idea that the unadulterated original history has significant value is a straw man. If the branch as written was incompatible at the point it got merged, there is no value in retaining the history of that branch in an incompatible state, because you won't be able to use it anyway. In extreme cases you might decide the entire branch is useless and just pitch it away entirely, and certainly no one is arguing to save history that doesn't make it onto master, right?
> ...the history of a project managed using GitFlow for some time invariably starts to resemble a giant ball of spaghetti. Try to find out how the project progressed from something like this...
It's simple. Read backwards down the `develop` branch and read off the `feature/whatever` branches. Just because the graph isn't "pretty" doesn't mean it's useless.
In general, I'm starting to dislike "XXX considered harmful" articles. It seems to me like you can spout any opinion under a title of that format and generate lingering doubt, even if the article itself doesn't hold water. Not to generalize, of course: not all "XXX considered harmful" articles are harmful. They generally make at least some good points. I just think the title format feels kind of clickbaity at this point.
That said, kudos to the author for suggesting an alternative rather than just enumerating the shortcomings of GitFlow.
Articles like this are harmful. There are many of us successfully using gitflow without _any_ of the issues raised in the article.
And yet people who want to argue against the use of it simply because they don't want to learn something new now have a useful link to throw around as "proof" that a very successful strategy is "harmful". I guarantee in the next year I will have to go point by point and refute this damn article to some stubborn team lead or another senior dev.
Nobody should ever write an article outright bashing a strategy that they either don't fully understand or personally have not managed to integrate successfully in their own day-to-day. Bare minimum, if you're going to publish an article critical of a tool, don't name it so aggressively as to sound like it's fact rather than a single personal point of view.
Also this is a problem with Git not GitFlow. In Mercurial, for example, every commit is attached to a branch so it's very easy to follow where the changes happened.
I'm not familiar with the GUI git tools, preferring instead to work on the command line, but does whatever tool the author used to generate the graphs on the page support command-line flags?
Specifically, `git log --first-parent` (with `--oneline --decorate`) would look much better with the documented strategy. Instead of seeing all the commits in the branch, all that's shown is the merge commit. If you used the article's branch names, all you'd see is:
* Merge branch 'feature/SPA-138' into develop
* Merge branch 'feature/SPA-156' into develop
* Merge branch 'feature/SPA-136' into develop
If you used descriptive branch names, that would actually be quite useful: you'd immediately see the features being added without seeing all the gritty details!
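For what it's worth, plain `git log` supports exactly this view. A runnable sketch (scratch repo; the branch names are borrowed from the article's examples):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -q -b develop
echo base > app.txt; git add app.txt; git commit -qm "base"

# Two feature branches, each merged back with a merge commit.
for f in SPA-138 SPA-156; do
  git checkout -q -b "feature/$f" develop
  echo "$f" > "$f.txt"; git add "$f.txt"; git commit -qm "work on $f"
  git checkout -q develop
  git merge -q --no-ff -m "Merge branch 'feature/$f' into develop" "feature/$f"
done

# Follow only the first parent: the branches' gritty details disappear.
git log --first-parent --oneline
```

The first-parent view shows just the merge commits and the base, which is exactly the "one line per feature" summary described above.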
The approach discussed in the article seems to take into account only one possibility: you deploy master in prod, and it's always considered correct.
That works for small projects, but in my experience, when you have a bunch of people (let's say 20) pushing code to a repo, you need several levels of "correctness":
- branches: Work in progress.
- develop: Code ready to share with others. It can break the build (merge conflicts, etc) and it won't be the end of the world.
- master: This shouldn't be broken. It needs to point to a commit that has already been proven not to break the build and to pass all the required tests.
As always, you need to find a balance with these things and adapt to the peculiarities of your code base and team. I really see them as suggestions...
Yeah, we also use something like this for building a website/webapp (for a client) with 5-10 people.
- Feature branch: do whatever you want
- Develop: should be good enough for the client (product owner) to look at
- Release branch: should be good enough to be tested by the test/QA team
- Master: should be good enough for website visitors
Branches are meant to be short-lived and merged (and code reviewed) into develop as soon as possible. We use feature toggles to turn off functionality that ends up in develop but cannot go to production.
Does having a lower level of quality on develop hinder your ability to actually release code?
We have one particular repo at work that is just a pain in the ass to work with (TypeScript analytics code that has to be backwards compatible to about forever), and we've pretty much abandoned the develop branch since releases got held up due to bugs in less important features that had been merged in without comprehensive testing. Pretty much everything now gets tested extensively on a feature branch and then gets merged directly into a release branch.
We might have swung a little too far in the other direction, I'm thinking we want to at least merge well tested bits back onto develop, but at least we can release the features that are actually done and cut those that are still having issues without having to do any major git surgery.
This is how my team works. I always thought GitFlow was way too complex for our use case. The way you describe seems to work for us, mostly. There are still the occasional mistakes and commits to the wrong branch, often accidentally bypassing develop, which I guess supports the article's premise.
I don't agree with master being the production code. It's the default branch when you clone. I like to keep the production branch on a more explicit branch so my team knows they're dealing with release code.
I have never understood why people hate merge commits so much. Their advantages are not insignificant: you know when a feature got merged into master, it's much easier to revert a feature if you have a merge commit for it, much easier to generate a changelog with merge commits, and you have none of the problems that pushing "cleaned up" histories will have: https://www.mail-archive.com/[email protected]...
The main disadvantage, as the article rightly points out, is that it makes it much harder to read the history. But that's easily solved with a simple switch: --no-merges. It works with git-log, gitk, tig, and probably others too. Use --no-merges, and get a nice looking linear history without those pesky merge commits.
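A runnable illustration of that switch (scratch repo; the file and branch names are invented):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -q -b main
echo a > a.txt; git add a.txt; git commit -qm "base"
git checkout -q -b feature
echo b > b.txt; git add b.txt; git commit -qm "feature work"
git checkout -q main
echo c > c.txt; git add c.txt; git commit -qm "mainline work"
git merge -q --no-ff -m "Merge branch 'feature'" feature

git log --oneline               # includes the merge commit
git log --oneline --no-merges   # linear view: merge commits filtered out
```

Same history both times; `--no-merges` just hides the merge commits when you want the linear read.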
You don't need to implement ALL of gitflow - I see it as scalable.
Master should always be latest production code, development branch contains all code pre-release. That's the core.
The other branches let you scale gitflow - if you need to track upcoming release bugfixes etc., you can use a release branch. A team of maybe 6 or 7 would likely start to need a release branch. Feature branches at this point are best left local on the developer's repository. They rebase to fix up commits, and then merge those into develop when they're ready.
If you get into bigger teams - like maybe 6 agile teams working on different larger features, then you can introduce feature branches for the teams to use on sprints to keep the work separate.
The issue with gitflow is the lack of continuous integration, so I personally like to get teams to work only on a develop branch during sprints and use feature toggles to commit work to the develop branch without breaking anything.
As I see it, gitflow and CI are at odds and that's my biggest gripe with integrating lots of feature branching for teams - everyone has to integrate at the end of the day.
So I believe the model can and should be scaled back as far as possible, using only develop and master and release as primary workflow branches, introducing the others when the need arises - doing it just because it says so in the doc isn't the right approach.
I am. I resent such articles unless they come from a very clear eminence who has public and verifiable evidence to support their case.
This seems more of a: "This tool is popular but it doesn't work for me so it's bad".
In fact, as you say, he dislikes the tool (from the get go):
> I remember reading the original GitFlow article back when it first came out. I was deeply unimpressed - I thought it was a weird, over-engineered solution to a non-existent problem. I couldn't see a single benefit of using such a heavy approach. I quickly dismissed the article and continued to use Git the way I always did (I'll describe that way later in the article). Now, after having some hands-on experience with GitFlow, and based on my observations of others using (or, should I say more precisely, trying to use) it, that initial, intuitive dislike has grown into a well-founded, experienced distaste.
Throwing in my two cents. There's no perfect methodology, and teams that communicate and adhere to a set of standards will probably find a good way to work productively with git. They can always be helped by scripts like the gitflow plugin or some other helper if they think the risk of human error is high.
I also have anecdotal experience of working both with and without it, and I'm fine with either, although I do appreciate git flow in any project that starts getting releases, supports bug fixes and hotfixes, and has been living long enough to incorporate orthogonal features at the same time.
At this point, I just assume that "X considered harmful" contains an element of satire. That's not always the case, but I have no problem saying "Satirical 'Considered Harmful' Articles Considered Not Harmful".
He is suggesting to use 90% of what GitFlow suggests (feature/hotfix/release branches) but doesn't like the suggestion of non-fast-forward merge and master/Dev and that makes GitFlow harmful? I don't think I agree.
I think having the Dev branch is useful. Consider this actual scenario at my current workplace.
1. We have 4 developers. Nature of the project is such that we can all work independently on different features/changes.
2. We have Production/QA/Dev environment.
3. When we are working on our individual features, we do the work in individual branches and merge into the Dev branch (which is continuously deployed). This lets us know of potential code conflicts between developers in advance.
4. When a particular feature is 'developer tested', he/she merges it into a rolling release branch (Release-1.1, Release-1.2 etc) and this is continuously deployed to QA environment. Business user does their testing in QA environment and provides sign off.
5. We deploy the artifacts of the signed-off release branch to Production and then merge it into master and tag it.
Without the development branch, the only place to find out code conflicts will be in the release branch. I and others on my team personally prefer the early feedback we can get thanks to the development branch.
Advantages of an explicit merge commit:
1. Creating the merge commit makes it trivial to revert your merge. [Yes, I know it is possible to revert without a merge commit, but it's not exactly a one-step process.]
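For the record, the one-step revert that a merge commit buys you looks like this (scratch repo, invented names). `-m 1` means "revert relative to the first parent", i.e. undo everything the feature branch brought in:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -q -b main
echo base > app.txt; git add app.txt; git commit -qm "base"
git checkout -q -b feature
echo one > feature.txt; git add feature.txt; git commit -qm "feature: part 1"
echo two >> feature.txt; git commit -q -a -m "feature: part 2"
git checkout -q main
git merge -q --no-ff -m "Merge branch 'feature'" feature

# One command undoes the whole feature, however many commits it had.
git revert -m 1 --no-edit HEAD
git log -1 --oneline    # newest commit is the revert
```

Without the merge commit, you would instead have to work out which range of commits belonged to the feature and revert them individually.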
2. Being able to visually see that set of commits belongs to a feature branch. This is more important to me (and my team) than a 'linear history' that the author loves.
We have diverted from GitFlow in only one way, we create all feature/release/bugfix branches from 'master' and not 'develop'.
Now, don't get me wrong, GitFlow is not simple, but it's not as complicated as the author seems to suggest. I think the author would have been better served by a title like 'What I don't like in GitFlow'.
A reason that we are switching from full git flow to a reduced model (basically one master branch + feature branches, occasional hotfix branches) is that git flow isn't compatible with continuous integration and continuous delivery.
The idea of CI is that you integrate all commits, so you must integrate the develop branch - build the software, run the tests, deploy it to a production-like environment, test it there too.
So naturally, most testing happens in that environment; and then you make a production release starting from the master branch, and then install that -- and it's not the same binary that you tested before.
Sure, you could have two test/staging environments, but I don't think I could get anybody to test their features in both environments. That's just not practical.
Is this more a limitation of your CI/deployment tools? What we do (not saying it's the true way) is have TeamCity automatically build both master (production code + hotfixes) and development branches. Our deployment tool (Octopus) can just grab a particular release (e.g. 1.1.500=master 1.1.501=development) and send it to the server for testing. Hotfixes would be committed to master and tested with a build from there.
I guess this does open up the possibility that merging master (with hotfixes) back into development could cause a regression, but we certainly try to keep hotfixes minimal and simple.
Now database changes...that's the real pain point. Both master and development need their own DB to apply change scripts to. Otherwise, deltas from development make testing master an issue.
1. Merging vs rebasing
Open source projects should stick with merging over cherry-picking and rebasing, especially if you want others to contribute. Unless you feel fine doing all of the rebasing and cherry-picking for them. Otherwise, good luck gathering a large enough pool of people to contribute. Simplicity always wins here.
2. GitFlow vs X
Once again do what is good for your company and the people around you. If you have a lot of developers having multiple branches is actually /beneficial/ as Master is considered ALWAYS working. Develop branch contains things that are ready to ship, and only things that are READY TO SHIP. So if your feature isn't ready yet, it can't go to develop, and it won't hit master. Your features are done in other branches.
3. Rewriting history
Never do this. Seriously, it will come to bite you in the ass.
> I thought it was a weird, over-engineered solution to a non-existent problem.
To be fair, it's a cookie-cutter approach that resonates with people unfamiliar with git but not ready/willing to invest the time to understand it deeply. That is understandable; a lot of people come from other systems and just need to get going right away, and git's poor reputation for command-line consistency etc. is well-earned.
His point is that this cookie-cutter approach is more complex than the development model he presents (which is in fact pretty common among open source software). You don't need to understand git deeply to realize that master is stable and merges in finished and cleaned-up features.
The only thing GitFlow has going for it is a clearly written article, with pictures, that explains how it works. That's it; the freedom of git and being able to define what works for you is too much for people, and they think they need to turn development back into Subversion-style or desktop-release-style workflows.
I agree with you, with one addition in GitFlow's favor: it standardizes how your team works. When you have multiple team members collaborating on a project, a poor standard is better than none at all.
GitFlow is also, in my opinion, a bad flow, as it does end up as merge-commit spaghetti over time.
Merge commits are great. They are there to group a list of commits into a logical set. This logical set could represent one "feature", but not necessarily. It is up to you to decide whether commits A B C D should or shouldn't be grouped by a merge commit. Merge commits also make regression searches (i.e. git bisect) a lot faster. And to top it off, they will make your history extremely readable, but that is granted you merge correctly... and that is where git rebase and git merge --no-ff come into play.
At my company, every developer must rebase their topic branch on top of the master branch before merging. Once the topic branch is rebased, the merge is done with --no-ff. With this extremely easy flow, you end up with a linear history, made of a master branch going straight up and, only every once in a while, a merge commit.
Following the simple rule "commit, commit, commit..., rebase, merge --no-ff" avoided the merge spaghetti a lot of people complain about. Although, I have to admit our repository is small (6583 commits to date).
This works even when multiple devs work on the same branch: they must get in touch on a regular basis, rebase the branch they are working on, and force push it. Rewriting the history of topic branches is only bad if it is not agreed on. As long as it is done in a controlled manner, nothing's wrong with it.
Another rule we follow is to always "git pull --rebase" (or set git config branch.autosetuprebase always).
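The "commit, commit, rebase, merge --no-ff" recipe, spelled out as runnable commands (throwaway repo; the branch names are placeholders, not the commenter's actual ones):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -q -b master
echo base > app.txt; git add app.txt; git commit -qm "base"

# Work happens on a topic branch while master moves on.
git checkout -q -b topic
echo t1 > t.txt; git add t.txt; git commit -qm "topic: step 1"
echo t2 >> t.txt; git commit -q -a -m "topic: step 2"
git checkout -q master
echo other > other.txt; git add other.txt; git commit -qm "unrelated master work"

git rebase -q master topic              # replay the topic on master's tip first
git checkout -q master
git merge -q --no-ff -m "Merge branch 'topic'" topic   # then a deliberate merge bubble

git log --oneline --graph               # straight master line, one small bubble
```

Because the branch was rebased first, the merge commit groups the topic's commits without ever criss-crossing the master line.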
Our approach might not, however, scale for larger teams or open source projects.
I have had this ideological debate about "fast forwarding" more times than I can count. I agree with the author that "no-ff" is silly. I've been working on professional software teams for over a decade. When I encountered fast forwarding/rebasing it was absolutely a breath of fresh air. I've been using git now for 5 years and I have not encountered a single instance where using either of these tools has presented any sort of problem. I can't remember a week where having a concise, readable history hasn't proven its value. I also can't remember a single time I've said "Man, I'm glad I had all these merge commits around, they really saved my proverbial bacon."
From what I can tell, no-ff exists to satisfy the aesthetic preference of your local team pedant. It gives them something to do between harping on whether your behavior is in the correct "domain", deciding if list comprehensions are truly "pythonic", and spending that extra month perfecting the event sourcing engine to revolutionize the "contact us" section of your site.
no-ff is useful when you need to revert the whole merge - the merge commit delimits the boundaries of what used to be a separate branch. If you fast-forward, then you have to use your human brain to figure out how many commits to revert and when you're reacting to "augh I just broke master by adding these 12 commits" you might make a mistake.
The time it takes to carefully rebase a branch onto another, and to compress commits for a feature into one, is still much longer than the time it takes for my eyes to pass over so-called "empty" merge commits.
If I want to look at when a feature entered a branch, I can look at its merge commit. And the feature branches are there to show how a feature was built; bugs could be the result of a design decision that happened in one of the midway commits.
I looked at OP's example pic in the blog, and I read all of his words, but I wasn't sold. His picture looks like a normal git history to me. It requires almost no effort to find what I'm looking for.
And that's not even touching his rage against the idea of a canonical release branch (master). But that's for another day.
It's not confusing, maybe it's not suited for your particular case, as any other tool. There's no magic tool/process/etc that does it all for everybody.
GitFlow has been working great for us. A team of 15 developers, working with feature branches; we have CircleCI configured to automatically deploy the "develop" branch to our QA environment, and our "master" branch to the production environment.
The "hotfix" and "release" branches have proven useful to us too; we just need to have effective communication within our team, so that everybody rebases their feature branch after a merge into our main branches.
I have come to actually like the two permanent branches approach. I know that for any repository that follows this model that:
* "master" is the current stable release
* "develop" is the current "mostly stable" development version
The first time you clone a repository this is an extremely helpful convention to quickly get your head around the state of things.
If you're doing it right (and don't use --no-ff, which I agree is unreasonable), I can't think of a scenario where this causes extra merge commits. Merges to master should always be fast forward merges.
Advising rebasing over explicit merges is dangerous and foolish. Rebasing does have its place, but you really need to know what you're doing.
Also, I don't see his point about that messy history. I can see exactly what happened in that history (though the branch names could have been more informative). With multiple people working on the same project, feature branches will save your sanity when you need to do a release, and one feature turns out to not be ready.
masklinn:
* most git UIs don't provide branch filtering or --left-only (which hides "accessory"/"temporary" merged branches unless explicitly required)
* developers won't necessarily care for correct merge order, breaking "left-only" providing a mainline view
The end result is that, especially for largish projects, merge-based workflows lead to completely unreadable logs.
[+] [-] DougWebb|10 years ago|reply
If you test a commit and it passes, and then merge that commit into master, the merge may have changed the code that the commit modified, or something that the commit's code depended on. The green flag you had on the commit is no longer valid because the commit is in a new context now and may not pass.
If you rebase the commit onto master, you're explicitly changing the context of the commit. Yes, you get a different SHA and you're not linked to the original CI result anymore, but that CI result wasn't valid anymore anyway. This is exactly the same situation that the merge put you into, but without the false assurance of the green flag on the original test result.
As many others have noted, rebasing is only recommended on private branches to prepare them for sharing on a public branch. If you're running CI it's probably only on the public branches, so rebasing wouldn't affect that. But if you're running CI on your private branch too, then you're going to want to run it after rebasing onto the public branch and before merging into the public branch. That gives you assurance that your code works on the public branch before you share it. Again, if you're using a merge-based workflow you'd have to do the same testing regardless of your earlier test results.
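To make that concrete, here's a throwaway-repo sketch of the sequence described above (branch names, commit messages, and the CI step are all invented for illustration):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master
git config user.email dev@example.com && git config user.name dev

echo base > app && git add app && git commit -qm "base"

git checkout -qb my-feature              # private branch
echo feature >> app && git commit -qam "feature work"

git checkout -q master                   # public branch moved on meanwhile
echo fix > hotfix && git add hotfix && git commit -qm "hotfix on master"

git checkout -q my-feature
git rebase -q master                     # new SHAs, new context
# ...re-run your CI here, before sharing: the old green flag no longer applies

git checkout -q master
git merge -q --ff-only my-feature        # fast-forward; history stays linear
git log --oneline --graph
```

The `--ff-only` at the end is just a guard: it refuses the merge unless the rebase actually put the branch ahead of master.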
[+] [-] brento|10 years ago|reply
[+] [-] solutionyogi|10 years ago|reply
^^^ Couldn't agree more.
However, I don't know why people want to avoid rebasing feature branches. Rebasing feature branches means that you only have to resolve the conflict once and have a clean history for your release branch. Granted, it works well in my team where only a single developer owns a given feature branch.
[+] [-] radicalbyte|10 years ago|reply
How does GitLab store the code-review data? Is it stored in the (or a) git repo? Is the feature compatible with rebasing feature branches before merge?
Also, pricing: I only just noticed that your pricing was per YEAR, not per MONTH. Most bootstrap-pricing-page software is priced monthly and the user/year text is lowlighted. This has to be costing you sales.
[+] [-] Mithaldu|10 years ago|reply
Are you sure your developers feel the same as you do? Are you sure they're willing to be open enough to you about their misgivings?
[+] [-] mrbobdobbs|10 years ago|reply
The only evil is a willingness to force-push the updated history over our branches before the PR goes up. But usually no one shares branches. Or the collaborators on a branch are few and they agree on when to rewrite history.
[+] [-] dasil003|10 years ago|reply
I disagree very very strongly with this. I wrote about this years ago at http://www.darwinweb.net/articles/the-case-for-git-rebase, but it's due for an update to clarify my thinking.
Basically my beef is the idea that never rebasing is "true" history, and rebasing gives you "corrupt" or "whitewashed" history. In fact, the only thing you have weeks, months, years after pushing code is a logical history. It's not as if git automatically records every keystroke or every thought in someone's head—that would be an overwhelming amount of information and difficult to utilize anyway—instead it's all based on human decisions of what to commit and when. Rebasing doesn't "destroy" history, it's just a decision of where a commit goes that is distinct from the time it was first written, but in fact you lose almost no information—the original date is still there, and from that you can infer more or less where it originally branched from.
"But," you say, "surely complete retention of history is preferable to almost-complete retention?". Well, sure, all else being equal I would agree with that. But here's the crux of the issue: merging also loses information. What happens when you merge two conflicting commits is that bugs are swallowed up in merge commits rather than being pinned down to one changeset. This is true whether it is a literal merge conflict that git detects, or a silent logic error that no one discovers until the program blows up. With two branches merging that are logically incompatible, whose responsibility is it to fix their branch? Well, whoever merges last of course, and where does that fix happen under a never-rebase policy? In that single monstrous merge commit that can not be reasoned about or bisected.
But if you always rebase before merging to master, then the second integrator has to go back and correct each commit in the context of the new branch. In essence, they have to fix their work for the current state of the world, as if they had written it today. In this way each tweak is made where it is visible and bisectable instead of squashed into an intractable merge commit.
I get that there is some inconvenience around rebasing coordination and tooling integration (although GitHub PRs handle it pretty well), but the idea that the unadulterated original history has significant value is a straw man. If the branch as written was incompatible at the point it got merged, there is no value in retaining the history of that branch in an incompatible state because you won't be able to use it anyway. In extreme cases you might decide the entire branch is useless and just pitch it away entirely, and certainly no one is arguing to save history that doesn't make it onto master right?
[+] [-] ryanthejuggler|10 years ago|reply
It's simple. Read backwards down the `develop` branch and read off the `feature/whatever` branches. Just because the graph isn't "pretty" doesn't mean it's useless.
In general, I'm starting to dislike "XXX considered harmful" articles. It seems to me like you can spout any opinion under a title of that format and generate lingering doubt, even if the article itself doesn't hold water. Not to generalize, of course--not all "XXX considered harmful" articles are harmful. They generally make at least some good points. I just think the title format feels kind of clickbaity at this point.
That said, kudos to the author for suggesting an alternative rather than just enumerating the shortcomings of GitFlow.
[+] [-] detaro|10 years ago|reply
http://meyerweb.com/eric/comment/chech.html
[+] [-] developer1|10 years ago|reply
And yet people who want to argue against the use of it simply because they don't want to learn something new now have a useful link to throw around as "proof" that a very successful strategy is "harmful". I guarantee in the next year I will have to go point by point and refute this damn article to some stubborn team lead or another senior dev.
Nobody should ever write an article outright bashing a strategy that they either don't fully understand or personally have not managed to integrate successfully in their own day-to-day. Bare minimum, if you're going to publish an article critical of a tool, don't name it so aggressively as to sound like it's fact rather than a single personal point of view.
[+] [-] giulianob|10 years ago|reply
[+] [-] fragmede|10 years ago|reply
Specifically, git log --first-parent (--oneline --decorate) would look much better with the documented strategy. Instead of seeing all the commits in the branch, all that's shown is the merge commit. If you used the article's branch names, all you'd see is:
* Merge branch 'feature/SPA-138' into develop
* Merge branch 'feature/SPA-156' into develop
* Merge branch 'feature/SPA-136' into develop
If you actually used descriptive branch names, that would seem to be quite useful - you'd immediately see the features being added without seeing all the gritty details!
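Here's a toy repo that reproduces this, reusing the article's branch names (everything else is invented): with --no-ff merges and correct merge order, --first-parent collapses the log to one line per merged branch.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b develop
git config user.email dev@example.com && git config user.name dev
echo 0 > app && git add app && git commit -qm "initial commit"

for feature in feature/SPA-138 feature/SPA-156; do
  git checkout -qb "$feature"
  echo "$feature" >> app && git commit -qam "gritty details of $feature"
  git checkout -q develop
  git merge -q --no-ff "$feature" -m "Merge branch '$feature' into develop"
done

git log --first-parent --oneline   # only the merge commits (plus the root)
```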
[+] [-] jaimebuelta|10 years ago|reply
- branches: Work in progress.
- develop: Code ready to share with others. It can break the build (merge conflicts, etc) and it won't be the end of the world.
- master: This shouldn't be broken. It needs to point to a commit that has already been proven not to break the build and to pass all the required tests.
As always, you need to find a balance with these things and adapt to the peculiarities of your code base and team. I really see them as suggestions...
[+] [-] Alexandervn|10 years ago|reply
- Feature branch: do whatever you want
- Develop: should be good enough for the client (product owner) to look at
- Release branch: should be good enough to be tested by the test/QA team
- Master: should be good enough for website visitors
Branches are meant to be short-lived and merged (and code reviewed) into develop as soon as possible. We use feature toggles to turn off functionalities that end up in develop but cannot go to production.
[+] [-] BillinghamJ|10 years ago|reply
For example:
Team is working on feature A and feature B. Each feature is developed on its own branch.
Feature A is ready for testing/integration, it is merged into develop.
Feature B is ready, it is merged into develop.
Now here is the problem:
Feature B is ready for release, but feature A is not. It is now not possible to merge develop into master without including both features.
The solution I use is to have master and branches. That's it. Master represents currently-running production code. Branches contain everything else.
This also happens to be how GitHub works. This article explains how it works with various levels of deployment before production - http://githubengineering.com/deploying-branches-to-github-co...
[+] [-] Eridrus|10 years ago|reply
We have one particular repo at work that is just a pain in the ass to work with (Typescript analytics code that has to be backwards compatible to about forever), and we've pretty much abandoned the develop branch since releases got held up due to bugs in less important features that had been merged in without comprehensive testing. Pretty much everything now gets tested extensively on a feature branch and then gets merged directly into a release branch.
We might have swung a little too far in the other direction, I'm thinking we want to at least merge well tested bits back onto develop, but at least we can release the features that are actually done and cut those that are still having issues without having to do any major git surgery.
[+] [-] the_af|10 years ago|reply
[+] [-] mmgutz|10 years ago|reply
[+] [-] chaitanya|10 years ago|reply
The main disadvantage, as the article rightly points out, is that it makes it much harder to read the history. But that's easily solved with a simple switch: --no-merges. It works with git-log, gitk, tig, and probably others too. Use --no-merges, and get a nice looking linear history without those pesky merge commits.
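A quick throwaway-repo sketch of the effect (names invented): the default log shows the merge commit, --no-merges hides it.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo a > file && git add file && git commit -qm "base"

git checkout -qb feature
echo b >> file && git commit -qam "feature work"
git checkout -q master
echo c > other && git add other && git commit -qm "mainline work"
git merge -q --no-ff feature -m "Merge branch 'feature'"

git log --oneline               # includes the merge commit
git log --oneline --no-merges   # linear view, merge commits hidden
```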
[+] [-] ninjakeyboard|10 years ago|reply
Master should always be latest production code, development branch contains all code pre-release. That's the core.
The other branches let you scale gitflow - if you need to track upcoming release bugfixes etc., you can use a release branch. A team of maybe 6 or 7 would likely start to need a release branch. Feature branches at this point are best left local on the developer's repository. They rebase to fix up commits, and then merge those into develop when they're ready.
If you get into bigger teams - like maybe 6 agile teams working on different larger features, then you can introduce feature branches for the teams to use on sprints to keep the work separate.
The issue with gitflow is the lack of continuous integration, so I personally like to get teams to work only on a develop branch during sprints and use feature toggles to commit work to the develop branch without breaking anything.
As I see it, gitflow and CI are at odds and that's my biggest gripe with integrating lots of feature branching for teams - everyone has to integrate at the end of the day.
So I believe the model can and should be scaled back as far as possible, using only develop and master and release as primary workflow branches, introducing the others when the need arises - doing it just because it says so in the doc isn't the right approach.
[+] [-] LukeB_UK|10 years ago|reply
[+] [-] mayoff|10 years ago|reply
[+] [-] isaacremuant|10 years ago|reply
This seems more of a: "This tool is popular but it doesn't work for me so it's bad".
In fact, as you say, he dislikes the tool (from the get go):
> I remember reading the original GitFlow article back when it first came out. I was deeply unimpressed - I thought it was a weird, over-engineered solution to a non-existent problem. I couldn't see a single benefit of using such a heavy approach. I quickly dismissed the article and continued to use Git the way I always did (I'll describe that way later in the article). Now, after having some hands-on experience with GitFlow, and based on my observations of others using (or, should I say more precisely, trying to use) it, that initial, intuitive dislike has grown into a well-founded, experienced distaste.
Throwing in my two cents: there's no perfect methodology, and teams that communicate and adhere to a set of standards will probably find a good way to work productively with git. They can always be helped by scripts like the gitflow plugin or some other helper if they think the possibility of human error is big.
I also have anecdotal experience of working both with and without it, and I'm fine with either, although I do appreciate git flow in any project that starts getting releases, supports bug fixes and hotfixes, and has lived long enough to incorporate orthogonal features at the same time.
[+] [-] cleaver|10 years ago|reply
[+] [-] pskocik|10 years ago|reply
[+] [-] solutionyogi|10 years ago|reply
He is suggesting to use 90% of what GitFlow suggests (feature/hotfix/release branches) but doesn't like the suggestion of non-fast-forward merge and master/Dev and that makes GitFlow harmful? I don't think I agree.
I think having the Dev branch is useful. Consider this actual scenario at my current workplace.
1. We have 4 developers. Nature of the project is such that we can all work independently on different features/changes.
2. We have Production/QA/Dev environment.
3. When we are working on our individual features, we do the work in individual branches and merge into the Dev branch (which is continuously deployed). This lets us know of potential code conflicts between developers in advance.
4. When a particular feature is 'developer tested', he/she merges it into a rolling release branch (Release-1.1, Release-1.2 etc) and this is continuously deployed to QA environment. Business user does their testing in QA environment and provides sign off.
5. We deploy the artifacts of the signed-off release branch to Production and then merge it into master and tag it.
Without the development branch, the only place to find out code conflicts will be in the release branch. I and others on my team personally prefer the early feedback we can get thanks to the development branch.
Advantages of an explicit merge commit:
1. Creating the merge commit makes it trivial to revert your merge. [Yes, I know it is possible to revert the changes without a merge commit, but it's not exactly a one-step process.]
2. Being able to visually see that set of commits belongs to a feature branch. This is more important to me (and my team) than a 'linear history' that the author loves.
We have deviated from GitFlow in only one way: we create all feature/release/bugfix branches from 'master' and not 'develop'.
Now, don't get me wrong, GitFlow is not simple, but it's not as complicated as the author seems to suggest. I think the author would have been better served by a title like 'What I don't like in GitFlow'.
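Point 1 above can be sketched in a throwaway repo (names invented): with an explicit merge commit, backing out the whole feature is a single revert against the first parent.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo stable > app && git add app && git commit -qm "stable release"

git checkout -qb feature
echo broken > app && git commit -qam "feature that turns out to be broken"
git checkout -q master
git merge -q --no-ff feature -m "Merge branch 'feature'"

# -m 1 picks the first parent (master's side) as the state to return to
git revert -m 1 --no-edit HEAD
cat app   # "stable" again
```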
[+] [-] perlgeek|10 years ago|reply
The idea of CI is that you integrate all commits, so you must integrate the develop branch - build the software, run the tests, deploy it to a production-like environment, test it there too.
So naturally, most testing happens in that environment; and then you make a production release starting from the master branch, and then install that -- and it's not the same binary that you tested before.
Sure, you could have two test/staging environments, but I don't think I could get anybody to test their features in both environments. That's just not practical.
[+] [-] briHass|10 years ago|reply
I guess this does open up the possibility that merging master (with hotfixes) back into development could cause a regression, but we certainly try to keep hotfixes minimal and simple.
Now database changes...that's the real pain point. Both master and development need their own DB to apply changes scripts to. Otherwise, deltas from development make testing master an issue.
[+] [-] nijiko|10 years ago|reply
1. Merging vs Rebasing
Open source projects should stick with Merging over cherry-picking and rebasing especially if you want others to contribute. Unless you feel fine doing all of the rebasing and cherry-picking for them. Otherwise, good luck gathering a large enough pool of people to contribute. Simplicity always wins here.
2. GitFlow vs X
Once again do what is good for your company and the people around you. If you have a lot of developers having multiple branches is actually /beneficial/ as Master is considered ALWAYS working. Develop branch contains things that are ready to ship, and only things that are READY TO SHIP. So if your feature isn't ready yet, it can't go to develop, and it won't hit master. Your features are done in other branches.
3. Rewriting history
Never do this. Seriously, it will come to bite you in the ass.
4. Have fun.
Arguing is for people who don't get shit done.
[+] [-] jacobparker|10 years ago|reply
To be fair, it's a cookie-cutter approach that resonates with people unfamiliar with git but not ready/willing to invest the time to understand it deeply. That is understandable; a lot of people come from other systems and just need to get going right away, and git's poor reputation for command-line consistency etc. is well-earned.
(To be clear, I am not a fan of git flow.)
If anyone is interested in truly understanding git, start here: http://ftp.newartisans.com/pub/git.from.bottom.up.pdf
[+] [-] matthiasv|10 years ago|reply
[+] [-] omouse|10 years ago|reply
[+] [-] ryanthejuggler|10 years ago|reply
[+] [-] gouggoug|10 years ago|reply
Merge commits are great. They are here to group a list of commits into a logical set. This logical set could represent one "feature", but not necessarily. It is up to you to decide whether commits A B C D should or shouldn't be grouped by a merge commit. Merge commits also make regression searches (i.e. git bisect) a lot faster. And to top it off, they will make your history extremely readable, provided you merge correctly... and that is where git rebase and git merge --no-ff come into play.
At my company, every developer must rebase their topical branch on top of the master branch before merging. Once the topical branch is rebased, the merge is done with a --no-ff. With this extremely easy flow, you end up with a linear history, made of a master branch going straight up with a merge commit only every once in a while.
Our commit history looks like this:
Following the simple rule "commit, commit, commit..., rebase, merge --no-ff" has avoided the merge spaghetti a lot of people complain about, although I have to admit our repository is small (6583 commits to date). This works even when multiple devs work on the same branch: they must get in touch on a regular basis, rebase the branch they are working on and force push it. Rewriting history of topical branches is only bad if it is not agreed on. As long as it is done in a controlled manner nothing's wrong with it.
Another rule we follow is to always "git pull --rebase" (or set git config branch.autosetuprebase always).
Our approach might not, however, scale for larger teams or open source projects.
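The "commit, commit, rebase, merge --no-ff" rule above, as a runnable throwaway-repo sketch (branch and file names invented):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo base > app && git add app && git commit -qm "base"

git checkout -qb topic
echo one >> app && git commit -qam "commit 1"
echo two >> app && git commit -qam "commit 2"

git checkout -q master                    # master moved on in the meantime
echo other > lib && git add lib && git commit -qm "unrelated master work"

git checkout -q topic
git rebase -q master                      # replay the branch on top of master
git checkout -q master
git merge -q --no-ff topic -m "Merge branch 'topic'"

git log --oneline --graph   # straight mainline with one merge bubble
```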
[+] [-] nsfyn55|10 years ago|reply
From what I can tell no-ff exists to satisfy the aesthetic preference of your local team pedant. It gives them something to do between harping on whether your behavior is in the correct "domain", deciding if a list comprehension is truly "pythonic", and spending that extra month perfecting the event sourcing engine to revolutionize the "contact us" section of your site.
[+] [-] JanezStupar|10 years ago|reply
> From what I can tell no-ff exists to satisfy the aesthetic preference of your local team pedant.

Incredible irony.
[+] [-] jes5199|10 years ago|reply
[+] [-] Mithaldu|10 years ago|reply
[+] [-] orbitur|10 years ago|reply
The time it takes to carefully rebase a branch onto another, and to compress commits for a feature into one, is still much longer than the time it takes for my eyes to pass over so-called "empty" merge commits.
If I want to look at when a feature entered a branch, I can look at its merge commit. And the feature branches are there to show how a feature was built; bugs could be the result of a design decision that happened in one of the midway commits.
I looked at OP's example pic in the blog, and I read all of his words, but I wasn't sold. His picture looks like a normal git history to me. It requires almost no effort to find what I'm looking for.
And that's not even touching his rage against the idea of a canonical release branch (master). But that's for another day.
[+] [-] rattray|10 years ago|reply
[1] http://scottchacon.com/2011/08/31/github-flow.html
[2] https://guides.github.com/introduction/flow/
[+] [-] talles|10 years ago|reply
[+] [-] CrystalGamma|10 years ago|reply
And I must say I agree with all you say in that post.
[+] [-] marvel_boy|10 years ago|reply
[+] [-] juandazapata|10 years ago|reply
GitFlow has been working great for us. A team of 15 developers, working with feature branches, we have our CircleCI configured to automatically deploy the "develop" branch to our "QA environment", and our "master" branch to "production" environment.
The "hotfix" and "release" are proven to be useful to us too; we just need to have effective communication with our team, so everybody rebase their feature branch after a merge in our main branches.
[+] [-] menssen|10 years ago|reply
* "master" is the current stable release
* "develop" is the current "mostly stable" development version
The first time you clone a repository this is an extremely helpful convention to quickly get your head around the state of things.
If you're doing it right (and don't use --no-ff, which I agree is unreasonable), I can't think of a scenario where this causes extra merge commits. Merges to master should always be fast forward merges.
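As a sketch of that last point (throwaway repo, names invented): if master never gets its own commits, promoting develop is a pure fast-forward, and --ff-only makes git fail loudly if that ever stops being true.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
echo v1 > app && git add app && git commit -qm "1.0 release"

git checkout -qb develop
echo v2 > app && git commit -qam "development work"

git checkout -q master
git merge -q --ff-only develop   # no merge commit is created
git log --oneline                # master now simply points at develop's tip
```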
[+] [-] mcv|10 years ago|reply
Also, I don't see his point about that messy history. I can see exactly what happened in that history (though the branch names could have been more informative). With multiple people working on the same project, feature branches will save your sanity when you need to do a release, and one feature turns out to not be ready.