
Red Hat is buying Ansible

336 points | dlapiduz | 10 years ago | venturebeat.com

182 comments


mattzito|10 years ago

I think the price point here is about a couple of things:

- Chef and Puppet are too expensive for most companies to acquire, and have too much operational cost for too little revenue

- Ansible got a strong following in the SMB space, Red Hat probably thinks they can move that upmarket some

- Ansible's agentless configuration management has potentially strong applicability in a container world (why do I need a chunky agent to configure resources on my docker image? What if, for some reason, I need to effect change on running docker images? - I realize this is a bit of an anti-pattern for docker, but it was something I heard a lot from big enterprises)

$100m still sounds very high, kudos to the ansible folks who have come a long way in the last few years.

EDIT: one more piece I didn't think of here - the openstack side of things is an area where Red Hat has made big long-term bets for the future of the company, and it probably helps to justify the price in terms of backstopping their openstack support.
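To illustrate the agentless point above: ad-hoc commands like these need nothing but SSH on the target, no daemon pre-installed (the group and file names here are made up):

    ansible webservers -i inventory.ini -m ping
    ansible webservers -i inventory.ini -m copy -a "src=app.conf dest=/etc/app.conf mode=0644"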

superuser2|10 years ago

I really don't understand why an organization would want to use Docker (besides buzzword compliance) if they were planning on mutating running containers. What's the advantage?

nailer|10 years ago

Ansible's Python as well, so will integrate well with the rest of RHEL, whereas Puppet was nearly the only Ruby tool in the Unix sysadmin community.

Not saying that one is better than the other, just than there's more Python out there in sysadminland.

mugsie|10 years ago

Yeah, OpenStack support will be a big thing for them I think.

Ansible is growing its OpenStack support, and they might see an opportunity for the RDO product.

walrus|10 years ago

SMB = small business?

samstave|10 years ago

Redhat and Mirantis are now direct competitors in the Openstack world. Redhat buying ansible, among the great points you made, will further solidify their position in the Openstack world against Mirantis going forward...

leg100|10 years ago

This is clearly much more about Tower, consultancy, etc. than their main product. But their YAML-encoded language is an abomination: masquerading as 'declarative' and easy to read, yet piling on loops, conditional statements, and an unintuitive inheritance tree of global and local variables.

bryanlarsen|10 years ago

You missed the quoting mess, adding their own compact list and map grammar, the convention of having a comment on every line, and there's more I can't think of right now.

I hate ansible, it's just better than any of the alternatives for different reasons. Luckily we're moving away from needing any of them. Scripting an image build is a lot easier than updating a machine using CI: you start from a blank slate every time, and an out of date script isn't the catastrophe it is with CI since you have the images saved.

dorfsmay|10 years ago

I think when ansible started it wasn't obvious that logic (loops, conditional etc...) would be needed eventually. By the time it became obvious it was going to be required, it was too late to change.

Using jinja2 for markup compounded the issue in my opinion, as it has no loops and logic is less than obvious (compared to mako for example).

Still I find its agentless model, the idempotent model, being able to use it on machines where you don't have root access etc... gives it a place that nobody had fulfilled.

anton_gogolev|10 years ago

Spot on. I'm constantly amazed at how many projects use _serialization formats_ as a "programming language". LiquiBase, NAnt, MSBuild, Ansible.

bbrazil|10 years ago

Syntax and semantics are separate, not having to learn a new syntax is handy.

Syntactically the problem I run into is that it's got its own DSL in task definitions, so it can be hard to keep in mind what's YAML and what's the DSL.

Semantically loops and conditions are essential features so I don't have a problem with that. The inheritance could use some clarification, I was hit last year by a regression that remains unresolved.
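To illustrate the YAML-vs-DSL point (task and variable names made up): to the YAML parser the whole argument below is just one string; the key=value pair and the {{ }} template only mean something once Ansible parses its own mini-language out of it:

```yaml
- name: hypothetical task
  command: creates={{ marker_file }} /usr/local/bin/setup.sh
```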

Schiphol|10 years ago

I found this sentence funny "Representatives of Red Hat and Ansible did not immediately respond to requests for comment". I take it to mean: "we wanted to run the story as quickly as possible; still it would have been nice to get superquick comments by RH or Ansible; tough luck, though."

dspillett|10 years ago

To me "did not immediately respond to requests for comment" smacks of neediness and self importance on the reporter's part (answer me now, you fools, don't you know who I am and what power I behold?!) and the people that would respond to such comments being in the middle of dealing with something more important at the time (perhaps answering a queue of queries that came in first, or queries from people who are more important to their world view). If I were RedHat or Ansible and read that sentence the reporter and/or outlet would be added to a "never respond to these people for at least 24 hours" list...

verytrivial|10 years ago

And there's the ambiguity of what they actually tried. Maybe they pulled open their desk drawer and whispered the question into it.

nzoschke|10 years ago

Ansible is best of breed. But didn't Red Hat hear? Immutable infrastructure is the future!

http://michaeldehaan.net/post/118717252307/immutable-infrast...

mmahemoff|10 years ago

Yes it is, but the future is not evenly distributed, to paraphrase William Gibson. For many enterprises, even Ansible's current model is already way out there in the distant future.

Also, I think Ansible's idempotent model actually works nicely with immutable infrastructure. Why? For development of your stack. While messing around with it, you probably don't want to rebuild the whole thing from scratch. Of course you can play funny games with caching of remote packages and so on, but that's getting into Ansible territory anyway.

So I think a good model for immutable infrastructure is to use a tool like Ansible to develop the stack, then in production you would use the same tool to spin up immutable instances.

awjr|10 years ago

I was using ansible with packer https://www.packer.io/ to build AMIs (Amazon Machine Images). I'm spending a lot more time with docker these days though.

I see Ansible as primarily an orchestration tool.

rdeboo|10 years ago

I can see how that would work for stateless services. Just build a new image and discard the old one.

But what do you do when you want to change your MySQL config file? Create a new image and somehow transfer the data? Or are the datastores somehow externalized? Then how do you synchronize shutting down the old image, then starting the new updated one, preventing them from accessing the store at the same time?

The linked article kind of waves these issues away ('externalize state in Cassandra or RDS'). Then am I supposed to use two mechanisms/tools to run my infrastructure? Docker for stateless servers and something like Ansible for stateful servers?

vasco|10 years ago

We're using it for immutable infrastructure where we build images with ansible and deploy those images. It's basically the same as a dockerfile and ultimately instead of a container you use a right sized machine. I don't really get the need to containerise everything unless you are buying big metal and deploying on top of that.

listic|10 years ago

What's RDS? Seriously.

KarlPlatt|10 years ago

My experience with Ansible has not been so pleasant. The performance especially is a showstopper: in my environment it takes 20 minutes for 12 servers to be set up with some Redis and Elasticsearch stuff. There are quite a few become_user directives, sure, but 20 minutes for this kind of thing is just not acceptable. After all, application settings need to be tuned and iterated over, too.

My idea was to develop the infrastructure with Ansible, e.g. no ssh to change some httpd settings at all. Everything via Ansible. It worked very well as long as the playbooks and number of servers was very small.

bovermyer|10 years ago

This has been my experience as well. Even using a small subset of a playbook via tags can take a long time, especially if you're doing a run in serial. One of our deployments that only affects six servers takes fifteen minutes.

This can be mitigated somewhat by putting Ansible on the target machine, downloading all the necessary files to that machine, and then running Ansible locally... but that seems awfully fragile to me.

I am much more interested in Salt's ZeroMQ path these days. It seems to scale better, at least on paper and in my few small tests.

pm90|10 years ago

If you're using Ansible for orchestration, you could try using the cloud's orchestration service instead. e.g. Rackspace Cloud Orchestration, AWS Cloudformation etc. In this specific case, you can use the orchestration api to spin up and manage the servers, and use ansible to manage the software (although there is a way to manage software as well [0]; I'm just not familiar enough with it to suggest it)

Disclaimer: I work in the Cloud Orchestration team at Rackspace.

[0]: https://github.com/openstack/heat-templates/tree/master/hot/...

crdoconnor|10 years ago

> After all, application settings need to be tuned and iterated over, too.

That's why it has tags. So you can run just the settings states rather than running the whole 20 minute thing over and over again.

justingood|10 years ago

Ansible 2.0 should have some new strategies to speed things up, depending on your requirements: https://docs.ansible.com/ansible/playbooks_strategies.html It will be interesting to see how performance is after it's released.

We eventually settled on having Ansible build an AMI for us that can then be spun up as part of a CloudFormation template (also initiated by Ansible).

We've actually been moving further and further away from having Ansible handle the configuration management side of things, and deal with Orchestration primarily.

ptio|10 years ago

Ansible creator, founder and CTO Michael DeHaan previously worked at Red Hat where he helped build Cobbler.

srvg|10 years ago

Also, Michael DeHaan stepped down and left the company early 2015.

srvg|10 years ago

FYI- A lot of Ansible employees have a Red Hat past. The lead developer James Cammarata has, and is also the current Cobbler maintainer.

xorcist|10 years ago

Interesting! Ansible is great technology. Not as mature as Puppet or Chef, but it's getting there. However, Red Hat is currently heavily pushing (what I understand to be) their own fork of Puppet inside Satellite 6. So quite a few RHEL customers in the process of rolling out the latest Satellite are probably going to want to hedge their investment in it. Perhaps there is some Red Hatter here who could comment?

chr15p|10 years ago

It's not a fork of Puppet: Satellite ships with its own copy of Puppet (3.6, iirc), which it integrates to provide the configuration management side of the product, but it's stock, unmodified Puppet.

In fact the Puppet side of Satellite is built around Foreman (http://theforeman.org/), which is an open source project that isn't Red Hat controlled, so even if Red Hat wanted to move 100% to Ansible it would be very hard work for little gain. It would also be a really bad commercial idea: Puppet is by far the market leader, and most of their customers buy Satellite precisely because it integrates with their existing Puppet manifests.

So I expect Puppet to stay as Red Hat's go-to configuration management tool, and Ansible to be used more for its ad-hoc remote execution capabilities, where Puppet is nowhere near as good. RH already uses Ansible in the installer for OpenShift, for example, because it can set up multiple boxes without needing an agent pre-installed.

atsaloli|10 years ago

Speaking of mature, CFEngine has been around since 1993 and is now in its third generation. I just wish they would do a little marketing.

thejerz|10 years ago

Ansible is a fantastic tool. I put it up there with Rails, Backbone, and jQuery. The shadow of Puppet and Chef is large, but many are starting to see the light.

I hope that Redhat will accelerate the growth of this very well engineered platform.

Congrats to the Ansible team!

carlsborg|10 years ago

Congrats mpdehaan2. Good to see good engineering getting rewarded. Testament to a great project you conceived and started.

geerlingguy|10 years ago

Supposedly a > $100mm deal. Both companies are already headquartered in N.C., and Ansible has a ton of momentum in the RHEL and OpenStack arenas, so it would make sense to pull the project into the fold.

One thing I wonder is how much the project's priorities would shift away from (if at all) anything non-RHEL-centric.

rodgerd|10 years ago

As a Red Hat customer, it'll be interesting to see how it affects the complete fucking shambles that has been the Satellite 6 rollout, which was supposed to be full Foreman/Puppet integration for provisioning and config management.

Apart from the fact that it's been a shambles, Red Hat have been solidly pushing customers down the puppet route. I expect there will be some grumpy meetings in the next few weeks.

creshal|10 years ago

> One thing I wonder is how much the project's priorities would shift away from (if at all) anything non-RHEL-centric.

Going by projects like NetworkManager it'll work okay, but you'll need to be a paying RedHat customer to get any useful documentation.

akurilin|10 years ago

Hopefully not too much, super heavy Ansible + Ubuntu user here.

pdeva1|10 years ago

Wonder what caused such a high valuation. They definitely didn't seem to have enough revenues to justify it.

devnonymous|10 years ago

I would imagine it is not so much about Ansible's general valuation in the industry but about its value for Red Hat (a.k.a -- Red Hat is not buying Ansible for its revenues but for its technology).

stock_toaster|10 years ago

My guess is ansible tower, along with the dev team and existing customer base for paid services.

devit|10 years ago

Every time I use some "configuration management" tool I wonder whether it's really better than just using shell.

Basically you lose a lot of time searching the web for how to do things that you already know how to do in shell, but the benefits are not so clear.

pilif|10 years ago

I thought so too for a long time. Until that time when I upgraded the RAID10 on our database servers from a 4-drive to an 8-drive configuration (which requires rebuilding the whole array if you want the performance benefits). Getting the intricate configuration of the two machines (postgres streaming replication works, but has a lot of moving parts to keep in mind) back without having to remember any details was absolutely priceless.

Completely wiping and reinstalling the main database servers (one after another of course) during the day while the system was in active use and completing the process with zero user intervention, that felt amazing.

Since then, whenever I had to reinstall a machine for one reason or another, I always appreciated the immense speed-up I gained by not having to ever manually re-do the configuration.

Better yet: All the years of growing the configuration, all the small insights learned over time, all the small fixes to the configuration: All are preserved and readily available. Even better: By using git, I can even go back in time and learn why I did what and when.

"Why am I using TCP for NFS? Oh right - that was back in december of 2012 when we were using UDP and we ran into that kernel deadlock" - that's next to impossible to do when you're configuring servers manually.

objectified|10 years ago

Well, one of the main advantages of using a configuration management tool is that the configurations you're writing are actually repeatable, and these tools tend to provide you with a lot of modules that take this in regard for you. If you were to use pure shell, you'd have to take a lot of things in account just to take care of this aspect alone. Also, these tools provide abstractions that make it easier to execute things as a unit (such as adding a user and a number of things having to do with it) without having to think about all the details. Often, they can be used on multiple platforms in the same way, too. So yeah, I do think configuration management tools solve real problems.

crucialfelix|10 years ago

there is always script:

    - script: /some/local/script.sh --some-arguments 1234

Originally, one of the selling points of ansible was that you could just include a shell script and run it.

I've also often found much of my time wasted trying to get ansible to do something simple.

mugsie|10 years ago

Just being able to have your tool know the list of servers, and their roles makes it worth it.

I did a fair bit of work based on the OpenStack tripleO project, which suffered from the OpenStack NIH syndrome. They could not agree on a CM tool, and wrote it in bash. Never, ever, ever again. Trying to cluster RabbitMQ / Percona across 3 different machines, via bash is an abomination, whereas in Ansible / Salt etc. it is pretty easy :)

togusa|10 years ago

That's ok if you have a known good baseline configuration. In that case it's no different to say a Dockerfile.

However the config management stuff seems to come to light when you've got a mess on your hands and need to rationalise it and make it consistent.

I'm slightly leaning towards the "rebuild with known good baseline" state of affairs these days however even as a long time Ansible user. Rather than upgrade stuff, I build something new alongside and then do a switcheroo nearly every time.

One day, hopefully containers will allow us to have consistent state everywhere.

hedwall|10 years ago

How many machines and how many types of machines do you deal with?

wtbob|10 years ago

I think that the big problem with shell is that it doesn't really offer the right abstractions for a lot of this: one doesn't (normally) want to run:

    if [ ! -d /opt/foothing ]
      then rm -f /opt/foothing && mkdir /opt/foothing
    fi
    cd /opt/foothing
    tar xf /tmp/instpkg.tar.gz
    sed -e s/QQQbarvalQQQ/$BAR_SETTING/ -i /opt/foothing/config
    …

Normally, one just wants to install & configure foothing. Abstracting that away in shell is possible but a pain: it doesn't really have a rich language for composing paths and other variable values; quoting is a right royal pain; by the time one's written a fully-working shell script (note that the snippet above has no error-handling, breaks if /opt doesn't exist, breaks if $BAR_SETTING contains whitespace and doesn't enable one to override the foothing installation location), it's nearly impossible to read & understand.
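For comparison, a sketch of the same snippet with those holes plugged — foothing and BAR_SETTING are the made-up names from above, and the installation prefix is now a parameter — mostly to show how much ceremony that takes:

```shell
set -eu  # stop on the first failed command or unset variable

install_foothing() {
    # usage: install_foothing PREFIX BAR_SETTING
    prefix=$1
    bar_setting=$2

    [ -d "$prefix" ] || { echo "install_foothing: no such prefix: $prefix" >&2; return 1; }

    # rebuild the target from scratch; -r so a pre-existing directory also goes
    rm -rf "$prefix/foothing"
    mkdir "$prefix/foothing"

    # printf instead of an unquoted sed: whitespace in $bar_setting survives
    printf 'barval = %s\n' "$bar_setting" > "$prefix/foothing/config"
}
```

And it still doesn't unpack the tarball, roll back on partial failure, or record what it installed — which is exactly the point.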

The Right Answer would involve a language which enabled one to create one's own syntactic abstractions in order to satisfy the general and specific needs of software installation. As an example, it'd be nice to have a WITH-INSTALLATION-DIRECTORY construct, which ensures that a directory exists, ensures that it's owned by the appropriate user, ensures that no other package already claims it (except that a previous version of the currently-being-installed package is okay), registers the directory and everything created in it during WITH-INSTALLATION-DIRECTORY as belonging to the currently-being-installed package, handles errors in a well-defined and useful manner for calling code, and so on and on and on.

And of course even that isn't high-level enough: If I'm installing bazit, which depends on foothing and quuxstuff, then I'll want to call something which ensures they exist. Or maybe there's an optional 'dependency,' and I want to do certain things if they exist and certain if not.

And maybe it's not low-level enough either. What if I want to override one particular sort of installation behaviour, but not the rest? What if I want to install a package in my own account, as myself? Wouldn't it be cool if I could set a few variables and the package manager Just Worked™?

As another user indicated, what all these tools really need is to be Lisp: versionable data which is code. As Shivers's work on scsh demonstrated, a Lisp-like language can be very pleasant to write POSIX applications in. Macros enable one to create useful syntactic constructs which make meaning, rather than details, clear. Dynamic variables (as in Common Lisp) easily enable customisation based on the call stack. CL's condition and restart systems are the gold standard for error signalling and recovery.

Florin_Andrei|10 years ago

The benefits become very clear as soon as you need to manage more than 10 entities (instances, VMs, etc.) in a consistent, reproducible, clean manner.

srvg|10 years ago

Wondering if RH will let Tower become Open Source.

chr15p|10 years ago

Red Hat has a history of buying closed source software and releasing it as Open Source (KVM, Gluster, Cloudforms etc.), so I would expect Tower to be open sourced. Assuming Ansible has the rights to all the code and doesn't license it from someone else, of course.

KrestenKjaer|10 years ago

They have a strong tradition of open-sourcing their products, so they most probably will.

poooogles|10 years ago

This does make me wonder how it'll impact their eventual move to Python3. They've been hesitant to move due to a lot of their customer base being on RHEL5/CentOS5, I can't imagine that this move will help matters.

qznc|10 years ago

I always wonder why cf-engine is so unpopular on HN. It has some nice advantages like no dependency on ssh or a scripting language. It is not as simple to get started, though.

atsaloli|10 years ago

I'm working on a CFEngine Tutorial to help people get started. I was inspired by Michael Hartl's "Learn Enough Tutorial Writing To Be Dangerous" talk at LA Ruby Conf to finally turn my CFEngine course materials into a book. It'll be my first commercial product so I'm excited!

Edit: added link to mhartl's tutorial: http://www.learnenough.com/tutorial-writing

betaby|10 years ago

As a former user of puppet and chef, I would say cfengine has a higher entrance barrier. It pays off the effort, though.

ybx|10 years ago

Is SSH really a dependency you have to worry about though? Basically every server out there runs SSH.

maweki|10 years ago

I think that this will fit nicely with the Cockpit project which should "revolutionize" remote administration (it isn't bad). So now Red Hat wants to add something for wholesome orchestration, which was really needed in that space.

homulilly|10 years ago

I like what I've seen of ansible but a lot of their modules are a complete mess. I've run into problems with both their AWS and Docker modules and ended up resorting to a series of tasks running shell commands because it was more reliable and didn't require me to install a specific version of some python library on every single machine.

lvandeyar|10 years ago

I hope they don't kill the free version of Ansible!

MrOwen|10 years ago

Has Red Hat ever done this with anything? I think a lot of their products exist as open-source versions. Satellite -> Katello, OpenShift is open-source, CloudForms -> ManageIQ, Red Hat Identity Management -> FreeIPA, RHEL -> CentOS. I suspect the list goes on and I have a hunch they will open-source Tower in the near future.

WestCoastJustin|10 years ago

sbierwagen|10 years ago

Wow, what's with all the spammy replies to this comment?

dethos|10 years ago

Thanks for the videos, I'm starting to learn it right now and they couldn't have come in a better time.

dadoprso|10 years ago

Solid site, added it to my Feedly.

kangman|10 years ago

yeah really helpful. I paid for these lessons and Justin was kind enough to refund me the cost when he moved on to Docker.

surapaneni|10 years ago

Awesome. Will watch this weekend. Thanks.

occsceo|10 years ago

thanks for putting these together

mianos|10 years ago

If ansible is worth $100m, what is saltstack worth?

hyperliner|10 years ago

Probably depends on who needs it. With almost no revenues, then it makes monetization only possible through enriching some platform. Maybe other major distro vendors will look at Chef, Puppet and Salt now and find them more expensive.

Here is the financial disclosure from RedHat. NOTICE THE FIRST SENTENCE.

The acquisition is expected to have no material impact to Red Hat's revenue for the third and fourth quarters of its fiscal year ending Feb. 29, 2016 (“fiscal 2016”). Management expects that non-GAAP operating expenses for fiscal 2016 will increase by approximately $2.0 million, or ($0.01) per share, in the third quarter and approximately $4.0 million, or ($0.02) per share, in the fourth quarter as a result of the transaction. Red Hat calculates non-GAAP operating expense by subtracting from GAAP operating expense the estimated impact of non-cash share-based compensation expense, which for fiscal 2016 is expected to increase by approximately $1 million for each of the third and fourth quarters, and amortization of intangible assets, which for fiscal 2016 is expected to increase by approximately $1 million for each of the third and fourth quarters, in addition to transaction costs related to business combinations, which are expected to increase by approximately $1 million in the third quarter. Management expects GAAP operating expense to increase for fiscal 2016 by approximately $5 million, or ($0.02) per share, in the third quarter and approximately $6 million, or ($0.02) per share, in the fourth quarter as a result of the transaction. Excluding the operating expense impact as noted above to GAAP and non-GAAP operating margin and GAAP and non-GAAP earnings per share, Red Hat is otherwise re-affirming its fiscal 2016 third quarter and full year guidance provided in its Sept. 21, 2015, earnings press release.

kzhahou|10 years ago

Are you implying it's worth more or less?

Or not implying anything but hoping someone here has an answer or analysis?