Very often on HN I see links describing the latest and greatest GitLab feature. It really seems like they've been working hard and doing a good job, and they seem to have a clear vision (a "master plan", in their terms). Where is GitHub in all of this? Aren't they concerned about being out-innovated?
GitHub is so ubiquitous that they can't afford to move fast and break things the way GitLab can. Another team at work considered moving to GitLab to use all these new features, found bugs everywhere, and eventually went back to GitHub.
The HN community at some point decided to collectively turn on GitHub, because of some incident where (choose your own adventure: someone at GitHub was accidentally trying to help someone who wasn't white/male/heterosexual OR they're too busy waging identity wars to actually work OR someone with the right intentions definitely went too far, actual damage was limited, but it's the type of issue that people remember forever)
I think the collective criticism has long since become unfair. Two reasons:
– The OSS community is currently working incredibly well. It's innovative, productive, inclusive, and the quality is excellent. GitHub is responsible for a large chunk of this development. Go check out SourceForge to get an idea of how it used to be. Seriously – it's like David Foster Wallace's fish/water metaphor: we don't know how good we have it, because there's no point of reference to compare it to.
It's also quite obvious that GitLab profits from the conceptual work done at GitHub. I'm not advocating that there should be any protection for software concepts, screen layouts etc. But I like to at least acknowledge where good ideas originated, and GitHub has had quite a few (once again: it's easy to forget after getting used to it).
I also found GL dog-slow when I tried it. That has probably changed; otherwise I couldn't understand anybody using it.
Super quick one. One of the things I'm working on at work is a staging-environment-per-branch system. We use k8s too. The biggest consideration here is individual environment variables per branch.
E.g. for testing we have a separate db to work on, so we have test data and a db where we can perform migrations if needed.
Another example would be adding a new feature that requires a different env variable for another API key.
Curious how, or whether, these features support that. Super sorry if it's in the docs – since you were doing an AMA I thought I might ask here :)
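For what it's worth, here's a rough sketch of how a per-branch review environment with its own variables might look in a `.gitlab-ci.yml`. This is an assumption-laden illustration, not GitLab's documented answer: the job name, the deploy helper, and the `REVIEW_DB_URL`/`DEFAULT_DB_URL` variables are made up, and the predefined branch-name variable differs by GitLab version.

```yaml
# Hypothetical sketch only: job name, deploy script, and the REVIEW_DB_URL /
# DEFAULT_DB_URL variables are made up. The predefined branch-name variable
# also varies by GitLab version (e.g. CI_BUILD_REF_NAME in older releases).
deploy_review:
  stage: deploy
  script:
    # Pick per-branch settings; REVIEW_DB_URL etc. would be configured as
    # CI variables in the project settings.
    - export DATABASE_URL="${REVIEW_DB_URL:-$DEFAULT_DB_URL}"
    - ./deploy.sh "$CI_BUILD_REF_NAME"   # made-up helper script
  environment: review/$CI_BUILD_REF_NAME
  except:
    - master
```

The idea is that each branch gets its own named environment, and branch-specific secrets (a test db, an alternate API key) come in through CI variables rather than being baked into the repo.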
This video is really neat and it seems like the promised plan is almost done.
Can you clarify at all what the plan is for supporting other container management systems? I looked at OpenShift and it doesn't look like something you can run on your own hardware. Maybe I'm misunderstanding this, though.
--
EDIT: I had spent my time looking at openshift.com and didn't notice that there is an open source version located at openshift.org. Sorry for the confusion.
--
I'm interested in using all the functionality shown in the video, but for a small team and on our own hardware. Maybe it doesn't make sense that way, any clarification would help!
Thanks for your kind words. We would love to support other container management systems. From the OP: "We believe container schedulers such as Kubernetes are the future of application lifecycle management and are working on Mesosphere support. We would love it if people would contribute support for other container schedulers such as Docker Swarm and for other Kubernetes providers such as Tectonic."
In the demo we use our own OpenShift Origin installation on our own cloud servers. For more information, please see https://www.openshift.org/
> I looked at Openshift and it doesn't look like something you can run on your own hardware.
I work on a competing platform, Cloud Foundry, but I'm pretty darn sure OpenShift can be installed on your own hardware. Red Hat know a thing or two about Linux, after all.
Speaking of Red Hat, they have a dog in this fight with Fabric8.
Disclosure: I work for Pivotal; we're the majority donors of engineering to Cloud Foundry.
There's also a Helm chart for GitLab. While it's still in the process of working towards a merge, it will make for a very easy installation process: https://github.com/gtaylor/charts/blob/gitlab-ce/stable/gitl...
This is probably most interesting to those who don't want to use OpenShift. There's some more fleshing out to do re: CI runners, but the basics are there.
What's the minimum monthly cost for running this setup? Can it be done on a single server? If the minimum setup requires a lot of redundancy, then I think this will still be a blocker for many side projects and small startups.
I can't comment on the monthly cost. You should be able to run everything on a single server, but I suspect that if everything is under load, it would need to be a decent server.
> Does gitlab.com offer hosting for containers?

You can use our Docker registry on GitLab.com for free.
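For anyone curious what using the GitLab.com registry looks like in practice, it's roughly the standard Docker workflow pointed at `registry.gitlab.com` (the `mygroup/myapp` path and the tag below are made-up placeholders, not a real project):

```shell
# Hypothetical example: "mygroup/myapp" and the tag are placeholders for
# your own GitLab.com group/project path and branch name.
docker login registry.gitlab.com
docker build -t registry.gitlab.com/mygroup/myapp:my-branch .
docker push registry.gitlab.com/mygroup/myapp:my-branch
```

This requires a live Docker daemon and GitLab.com credentials, so treat it as an illustration of the shape of the commands rather than a copy-paste recipe.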
I wasn't sure what to think about Cycle Analytics and the metrics it provides ("time from thought to issue", "time from issue to code", "time spent reviewing"). IMHO these numbers may be too synthetic to give meaningful information. Plus, once we have this kind of dashboard, there is a risk of starting to optimize for the numbers (instead of optimizing the reality underneath).
That said, I can see from this demo how these metrics could be useful for tracking when reviews are taking too much time (needs more people? a better distribution of reviewers?) – or when maintenance tasks are slowing down new features (needs better tests? more maintainers?). I guess I'll have to try this out :)
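Under the hood, numbers like "time from issue to code" boil down to simple duration statistics over timestamps. A toy sketch (the function name and the data are made up for illustration; this is not GitLab's implementation):

```python
from datetime import datetime
from statistics import median

# Toy sketch of a cycle-analytics-style metric: median hours from issue
# creation to first commit, computed from (issue_created, first_commit)
# timestamp pairs. All data below is made up.
def median_issue_to_code_hours(pairs):
    durations = [
        (commit - issue).total_seconds() / 3600
        for issue, commit in pairs
    ]
    return median(durations)

pairs = [
    (datetime(2016, 8, 1, 9), datetime(2016, 8, 1, 17)),  # 8 hours
    (datetime(2016, 8, 2, 9), datetime(2016, 8, 3, 9)),   # 24 hours
    (datetime(2016, 8, 3, 9), datetime(2016, 8, 3, 13)),  # 4 hours
]
print(median_issue_to_code_hours(pairs))  # median of [8, 24, 4] -> 8.0
```

A median rather than a mean is the natural choice here: one pathological outlier issue shouldn't swamp the dashboard, which also makes the number somewhat harder to game.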
Do you mean that the numbers are too coarse to indicate a specific problem?
Of course there is the problem of gaming the numbers. But we do think that, compared to many other ways of measuring productivity (for example, the number of issues solved), this is relatively robust against manipulation. Getting something out sooner is better most of the time.
Thanks for the thoughts, and I hope you try it out soon.
At Pivotal some of the PMs have kicked around "Time to Value", which is the gap between an entry in Pivotal Tracker and a buck being turned on that feature.
Then there's Time to Customer Value, which is the time it takes before a customer using the feature turns a buck on it.
> there is a risk of starting to optimize for the numbers
There absolutely is. You can only use your own judgment of the balance of risk between flying blind and becoming obsessed with instrumentation.
Nothing, we just had to pick a quick way to install it and OpenShift looked nice. All the features are intended to work with any Kubernetes installation down the road.
> As a sysadmin though, I do wonder how maintainable and especially upgradable that thing is, without breaking half the world.

We release all of GitLab's components at the same time, every month on the 22nd, guaranteeing that they work together. Updating GitLab is no more work than 'apt-get install'.
Dev -> QA -> Ops

or

Idea -> Design -> Build -> Test -> Deploy -> Maintain
For example, Infrastructure Operations used to be responsible for providing a Test environment; now developers can test without Ops, because tools like GitLab automatically provision test environments on demand (and then destroy them when they are no longer needed). You still need Ops to take care of the CI system, but you don't need as many personnel.
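That provision-on-demand, destroy-when-done flow can be sketched in CI config along these lines (all job names and scripts below are hypothetical, and automatic teardown via `on_stop` depends on the GitLab version):

```yaml
# Hypothetical sketch: the environment is provisioned per branch and torn
# down again when the stop job runs (or the branch goes away). The
# provision_env.sh / destroy_env.sh helpers are made up.
deploy_test_env:
  stage: deploy
  script:
    - ./provision_env.sh "$CI_BUILD_REF_NAME"
  environment:
    name: test/$CI_BUILD_REF_NAME
    on_stop: destroy_test_env

destroy_test_env:
  stage: deploy
  script:
    - ./destroy_env.sh "$CI_BUILD_REF_NAME"
  when: manual
  environment:
    name: test/$CI_BUILD_REF_NAME
    action: stop
```

The point is that the teardown is part of the pipeline definition itself, so nobody in Ops has to remember to clean up stale test environments by hand.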
I'm sorry to hear that. I should have probably included a drawing. Can others maybe paraphrase what they think it means or does it not make sense to anyone?
How is the terminal being done? Websockets?