I think the price point here is about a couple of things:
- Chef and Puppet are too expensive for most companies to acquire, and have too much operational cost for too little revenue
- Ansible got a strong following in the SMB space, Red Hat probably thinks they can move that upmarket some
- Ansible's agentless configuration management has potentially strong applicability in a container world (why do I need a chunky agent to configure resources on my Docker image? What if, for some reason, I need to effect change on running Docker containers? - I realize this is a bit of an anti-pattern for Docker, but it was something I heard a lot from big enterprises)
$100m still sounds very high, kudos to the ansible folks who have come a long way in the last few years.
EDIT: one more piece I didn't think of here - the openstack side of things is an area where Red Hat has made big long-term bets for the future of the company, and it probably helps to justify the price in terms of backstopping their openstack support.
I really don't understand why an organization would want to use Docker (besides buzzword compliance) if they were planning on mutating running containers. What's the advantage?
Red Hat and Mirantis are now direct competitors in the OpenStack world. Red Hat buying Ansible, among the great points you made, will further solidify their position in the OpenStack world against Mirantis going forward...
This clearly is much more about Tower, consultancy, etc., than their main product, but their YAML-encoded language is an abomination: masquerading as 'declarative' and easy to read, yet piling on loops, conditional statements, and an unintuitive inheritance tree of global and local variables.
You missed the quoting mess, adding their own compact list and map grammar, the convention of having a comment on every line, and there's more I can't think of right now.
I hate ansible, it's just better than any of the alternatives for different reasons. Luckily we're moving away from needing any of them. Scripting an image build is a lot easier than updating a machine using CI: you start from a blank slate every time, and an out of date script isn't the catastrophe it is with CI since you have the images saved.
I think when Ansible started it wasn't obvious that logic (loops, conditionals, etc.) would eventually be needed. By the time it became obvious it was going to be required, it was too late to change.
Using Jinja2 for markup compounded the issue in my opinion, as it has no loops and its logic is less than obvious (compared to Mako, for example).
Still, I find its agentless model, its idempotent model, being able to use it on machines where you don't have root access, etc., give it a place that nobody else had filled.
Syntax and semantics are separate, not having to learn a new syntax is handy.
Syntactically, the problem I run into is that it's got its own DSL in task definitions, so it can be hard to keep in mind what's YAML and what's the DSL.
Semantically, loops and conditionals are essential features, so I don't have a problem with that. The inheritance could use some clarification; I was hit last year by a regression that remains unresolved.
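For anyone who hasn't run into this, a small made-up task shows the YAML/DSL confusion being described: three different quoting and templating rules coexist in under a dozen lines (the variable and package names here are hypothetical).

```yaml
# Hypothetical task illustrating how YAML and the Ansible DSL interleave:
- name: install packages on Debian-family hosts
  apt:
    name: "{{ item }}"          # Jinja2 expression; must be quoted because the value starts with '{'
    state: present
  with_items: "{{ my_packages }}"       # 'with_items' is Ansible DSL, not plain YAML
  when: ansible_os_family == "Debian"   # raw Jinja2 here: no braces, unlike two lines up
```

The same Jinja2 expression is written with braces in one field and without them in `when`, which is exactly the kind of thing that is hard to keep in your head.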
I found this sentence funny "Representatives of Red Hat and Ansible did not immediately respond to requests for comment". I take it to mean: "we wanted to run the story as quickly as possible; still it would have been nice to get superquick comments by RH or Ansible; tough luck, though."
To me "did not immediately respond to requests for comment" smacks of neediness and self-importance on the reporter's part (answer me now, you fools, don't you know who I am and what power I hold?!), while the people who would respond are in the middle of dealing with something more important at the time (perhaps answering a queue of queries that came in first, or queries from people who matter more to their world view). If I were Red Hat or Ansible and read that sentence, the reporter and/or outlet would be added to a "never respond to these people for at least 24 hours" list...
Yes it is, but the future is not evenly distributed, to paraphrase William Gibson. For many enterprises, even Ansible's current model is already way out there in the distant future.
Also, I think Ansible's idempotent model actually works nicely with immutable infrastructure. Why? For development of your stack. While messing around with it, you probably don't want to rebuild the whole thing from scratch. Of course you can play funny games with caching of remote packages and so on, but that's getting into Ansible territory anyway.
So I think a good model for immutable infrastructure is to use a tool like Ansible to develop the stack, then in production you would use the same tool to spin up immutable instances.
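As a sketch of that model (module arguments and file names are hypothetical), the same idempotent tasks can be re-run while iterating on a live dev box and then replayed unchanged from an image-build pipeline:

```yaml
# Hypothetical role snippet: safe to re-run while developing, equally usable
# from an image build (e.g. a Packer ansible provisioner).
- name: install nginx
  apt:
    name: nginx
    state: present            # second and later runs are no-ops

- name: deploy site config
  template:
    src: site.conf.j2         # hypothetical template
    dest: /etc/nginx/conf.d/site.conf
  notify: reload nginx        # handler (not shown) fires only when the file actually changed
```

Because each task converges on a state rather than executing blindly, the dev loop and the production bake are the same playbook.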
I was using ansible with packer https://www.packer.io/ to build AMIs (Amazon Machine Images). I'm spending a lot more time with docker these days though.
I can see how that would work for stateless services. Just build a new image and discard the old one.
But what do you do when you want to change your MySQL config file? Create a new image and somehow transfer the data? Or are the datastores somehow externalized? Then how do you synchronize shutting down the old image and starting the new updated one, preventing them from accessing the store at the same time?
The linked article kind of waves these issues away ('externalize state in Cassandra or RDS'). Then am I supposed to use two mechanisms/tools to run my infrastructure? Docker for stateless servers and something like Ansible for stateful servers?
We're using it for immutable infrastructure where we build images with ansible and deploy those images. It's basically the same as a dockerfile and ultimately instead of a container you use a right sized machine. I don't really get the need to containerise everything unless you are buying big metal and deploying on top of that.
My experience with Ansible has not been so pleasant. Performance in particular is a showstopper. In my environment it takes 20 minutes to set up 12 servers with some Redis and Elasticsearch stuff. Quite a few become_user directives, granted, but 20 minutes for this kind of thing is just not acceptable. After all, application settings need to be tuned and iterated on, too.
My idea was to develop the infrastructure with Ansible, i.e. no ssh-ing in to change some httpd settings at all. Everything via Ansible. It worked very well as long as the playbooks and the number of servers were very small.
This has been my experience as well. Even using a small subset of a playbook via tags can take a long time, especially if you're doing a run in serial. One of our deployments that only affects six servers takes fifteen minutes.
This can be mitigated somewhat by putting Ansible on the target machine, downloading all the necessary files to that machine, and then running Ansible locally... but that seems awfully fragile to me.
I am much more interested in Salt's ZeroMQ path these days. It seems to scale better, at least on paper and in my few small tests.
If you're using Ansible for orchestration, you could try using your cloud's orchestration service instead, e.g. Rackspace Cloud Orchestration, AWS CloudFormation, etc. In this specific case, you can use the orchestration API to spin up and manage the servers, and use Ansible to manage the software (although there is a way to manage software as well [0]; I'm just not familiar enough with it to suggest it).
Disclaimer: I work in the Cloud Orchestration team at Rackspace.
We eventually settled on having Ansible build an AMI for us that can then be spun up as part of a CloudFormation template (also initiated by Ansible).
We've actually been moving further and further away from having Ansible handle the configuration management side of things, and deal with Orchestration primarily.
Interesting! Ansible is great technology. Not as mature as Puppet or Chef, but it's getting there. However, Red Hat is currently heavily pushing (what I understand to be) their own fork of Puppet inside Satellite 6. So quite a few RHEL customers in the process of rolling out the latest Satellite are probably going to want to hedge their investment in it. Perhaps there is some Red Hatter here who could comment?
It's not a fork of Puppet. Satellite ships with its own copy of Puppet (3.6, IIRC), which it integrates to provide the configuration management side of the product, but it's stock, unmodified Puppet.
In fact the Puppet side of Satellite is built around Foreman (http://theforeman.org/), which is an open source project that isn't Red Hat controlled, so even if Red Hat wanted to move 100% to Ansible it would be very hard work for little gain. It would also be a really bad commercial idea: Puppet is by far the market leader, and most of their customers buy Satellite precisely because it integrates with their existing Puppet manifests.
So I expect Puppet to stay as Red Hat's go-to configuration management tool, and Ansible to be used more for its ad-hoc remote execution capabilities, where Puppet is nowhere near as good. RH already uses Ansible in the installer for OpenShift, for example, because it can set up multiple boxes without needing an agent pre-installed.
Ansible is a fantastic tool. I put it up there with Rails, Backbone, and jQuery. The shadow of Puppet and Chef is large, but many are starting to see the light.
I hope that Red Hat will accelerate the growth of this very well engineered platform.
Supposedly a > $100mm deal. Both companies are already headquartered in N.C., and Ansible has a ton of momentum in the RHEL and OpenStack arenas, so it would make sense to pull the project into the fold.
One thing I wonder is how much the project's priorities would shift away from (if at all) anything non-RHEL-centric.
As a Red Hat customer I'll be interested to see how it affects the complete fucking shambles that has been the Satellite 6 rollout, which was supposed to be full Foreman/Puppet integration for provisioning and config management.
Apart from the fact that it's been a shambles, Red Hat have been solidly pushing customers down the puppet route. I expect there will be some grumpy meetings in the next few weeks.
I would imagine it is not so much about Ansible's general valuation in the industry but about its value for Red Hat (a.k.a -- Red Hat is not buying Ansible for its revenues but for its technology).
I thought so too for a long time. Until that time when I upgraded the RAID10 on our database servers from a 4-drive to an 8-drive configuration (which requires rebuilding the whole array if you want the performance benefits). Getting the intricate configuration of the two machines back (Postgres streaming replication works, but has a lot of moving parts to keep in mind) without having to remember any details was absolutely priceless.
Completely wiping and reinstalling the main database servers (one after another of course) during the day while the system was in active use and completing the process with zero user intervention, that felt amazing.
Since then, whenever I had to reinstall a machine for one reason or another, I always appreciated the immense speed-up I gained by not having to ever manually re-do the configuration.
Better yet: All the years of growing the configuration, all the small insights learned over time, all the small fixes to the configuration: All are preserved and readily available. Even better: By using git, I can even go back in time and learn why I did what and when.
"Why am I using TCP for NFS? Oh right - that was back in December of 2012 when we were using UDP and we ran into that kernel deadlock" - that's next to impossible to do when you're configuring servers manually.
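The "go back in time and learn why" workflow is worth spelling out: assuming the configuration lives in a git repository, `git log -S` (the "pickaxe") answers exactly this kind of question. A self-contained sketch with a throwaway repo and made-up file names and commit messages:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ops@example.com -c user.name=ops commit -q --allow-empty -m "initial"

# The change whose rationale we will later want to recover:
printf 'server:/data /mnt nfs proto=tcp 0 0\n' > fstab
git add fstab
git -c user.email=ops@example.com -c user.name=ops commit -q \
    -m "NFS: switch to TCP - UDP hit a kernel deadlock (Dec 2012)"

# 'git log -S' finds the commit that introduced a string, i.e. the when and the why:
git log -S "proto=tcp" --format=%s -- fstab
# → NFS: switch to TCP - UDP hit a kernel deadlock (Dec 2012)
```

`git blame` on the same file gives the equivalent answer line by line.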
Well, one of the main advantages of using a configuration management tool is that the configurations you're writing are actually repeatable, and these tools tend to provide a lot of modules that handle this for you. If you were to use pure shell, you'd have to take a lot of things into account just to cover this aspect alone. Also, these tools provide abstractions that make it easier to execute things as a unit (such as adding a user and a number of things having to do with it) without having to think about all the details. Often, they can be used on multiple platforms in the same way, too. So yes, I do think configuration management tools solve real problems.
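The "a user and a number of things having to do with it" case is a good illustration. A hedged sketch (user name, key file, and paths are hypothetical) of what that unit looks like in Ansible, where each task is individually idempotent:

```yaml
# Hypothetical "deploy user plus everything that goes with it" unit;
# re-running the play converges to the same state instead of failing.
- name: create deploy user
  user:
    name: deploy
    shell: /bin/bash
    groups: www-data
    append: yes

- name: install ssh key for deploy
  authorized_key:
    user: deploy
    key: "{{ lookup('file', 'files/deploy.pub') }}"

- name: give deploy its working directory
  file:
    path: /srv/app
    state: directory
    owner: deploy
    mode: "0755"
```

The equivalent shell would need existence checks around useradd, mkdir, chown, and chmod to be safely repeatable.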
Just being able to have your tool know the list of servers, and their roles makes it worth it.
I did a fair bit of work based on the OpenStack tripleO project, which suffered from the OpenStack NIH syndrome. They could not agree on a CM tool, and wrote it in bash. Never, ever, ever again. Trying to cluster RabbitMQ / Percona across 3 different machines, via bash is an abomination, whereas in Ansible / Salt etc. it is pretty easy :)
That's ok if you have a known good baseline configuration. In that case it's no different to say a Dockerfile.
However the config management stuff seems to come to light when you've got a mess on your hands and need to rationalise it and make it consistent.
I'm slightly leaning towards the "rebuild with known good baseline" approach these days, even as a long-time Ansible user. Rather than upgrade stuff, I build something new alongside and then do a switcheroo nearly every time.
One day, hopefully containers will allow us to have consistent state everywhere.
I think that the big problem with shell is that it doesn't really offer the right abstractions for a lot of this: one doesn't (normally) want to run:
if [ ! -d /opt/foothing ]
then rm -f /opt/foothing && mkdir /opt/foothing
fi
cd /opt/foothing
tar xf /tmp/instpkg.tar.gz
sed -e s/QQQbarvalQQQ/$BAR_SETTING/ -i /opt/foothing/config
…
Normally, one just wants to install & configure foothing. Abstracting that away in shell is possible but a pain: it doesn't really have a rich language for composing paths and other variable values; quoting is a right royal pain; by the time one's written a fully-working shell script (note that the snippet above has no error-handling, breaks if /opt doesn't exist, breaks if $BAR_SETTING contains whitespace and doesn't enable one to override the foothing installation location), it's nearly impossible to read & understand.
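To make that cost concrete, here is roughly what the snippet looks like once the listed problems are fixed. It's still only a sketch: the package is faked up in a temp directory so the example is self-contained, and the sed still assumes the value contains no '|'.

```shell
set -eu                                # stop on any error or unset variable

# Demo setup: fake the package so the example is self-contained.
work=$(mktemp -d)
mkdir "$work/pkg"
printf 'value=QQQbarvalQQQ\n' > "$work/pkg/config"
tar -czf "$work/instpkg.tar.gz" -C "$work/pkg" config

# The original snippet with the missing pieces added:
INSTALL_DIR="$work/foothing"           # overridable, not hard-coded to /opt
BAR_SETTING="hello world"              # whitespace no longer breaks the sed
mkdir -p -- "$(dirname -- "$INSTALL_DIR")"   # parent may not exist
rm -rf -- "$INSTALL_DIR"               # -r: it may be a left-over directory
mkdir -- "$INSTALL_DIR"
tar -xzf "$work/instpkg.tar.gz" -C "$INSTALL_DIR"   # no reliance on cd
sed -i -e "s|QQQbarvalQQQ|$BAR_SETTING|" -- "$INSTALL_DIR/config"
cat "$INSTALL_DIR/config"              # → value=hello world
```

And even this handles only the one-package, one-file case; a CM module gives the same guarantees plus idempotence and change reporting for free.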
The Right Answer would involve a language which enabled one to create one's own syntactic abstractions in order to satisfy the general and specific needs of software installation. As an example, it'd be nice to have a WITH-INSTALLATION-DIRECTORY construct, which ensures that a directory exists, ensures that it's owned by the appropriate user, ensures that no other package already claims it (except that a previous version of the currently-being-installed package is okay), registers the directory and everything created in it during WITH-INSTALLATION-DIRECTORY as belonging to the currently-being-installed package, handles errors in a well-defined and useful manner for calling code, and so on and on and on.
And of course even that isn't high-level enough: If I'm installing bazit, which depends on foothing and quuxstuff, then I'll want to call something which ensures they exist. Or maybe there's an optional 'dependency,' and I want to do certain things if they exist and certain if not.
And maybe it's not low-level enough either. What if I want to override one particular sort of installation behaviour, but not the rest? What if I want to install a package in my own account, as myself? Wouldn't it be cool if I could set a few variables and the package manager Just Worked™?
As another user indicated, what all these tools really need is to be Lisp: versionable data which is code. As Olin Shivers's work on scsh demonstrated, a Lisp-like language can be very pleasant to write POSIX applications in. Macros enable one to create useful syntactic constructs which make meaning, rather than details, clear. Dynamic variables (as in Common Lisp) easily enable customisation based on the call stack. CL's condition and restart systems are the gold standard for error signalling and recovery.
Red Hat has a history of buying closed source software and releasing it as open source (KVM, Gluster, CloudForms, etc.), so I would expect Tower to be open sourced. Assuming, of course, that Ansible has the rights to all the code and doesn't license it from someone else.
This does make me wonder how it'll impact their eventual move to Python 3. They've been hesitant to move due to a lot of their customer base being on RHEL 5/CentOS 5; I can't imagine that this move will help matters.
I always wonder why CFEngine is so unpopular on HN. It has some nice advantages, like no dependency on SSH or a scripting language. It is not as simple to get started with, though.
I'm working on a CFEngine Tutorial to help people get started. I was inspired by Michael Hartl's "Learn Enough Tutorial Writing To Be Dangerous" talk at LA Ruby Conf to finally turn my CFEngine course materials into a book. It'll be my first commercial product so I'm excited!
Hi all. I am a GM at Red Hat, and I have been deeply involved in the acquisition of Ansible. It's great to see so much interest and so many good questions. I hope that my blog post can help answer some of them:
http://www.redhat-cloudstrategy.com/why-did-red-hat-acquire-...
I think that this will fit nicely with the Cockpit project, which should "revolutionize" remote administration (it isn't bad). So now Red Hat wants to add something for full-scale orchestration, which was really needed in that space.
I like what I've seen of ansible but a lot of their modules are a complete mess. I've run into problems with both their AWS and Docker modules and ended up resorting to a series of tasks running shell commands because it was more reliable and didn't require me to install a specific version of some python library on every single machine.
Has Red Hat ever done this with anything? I think a lot of their products exist as open-source versions. Satellite -> Katello, OpenShift is open-source, CloudForms -> ManageIQ, Red Hat Identity Management -> FreeIPA, RHEL -> CentOS. I suspect the list goes on and I have a hunch they will open-source Tower in the near future.
If you're new to Ansible, I've created about two hours of free screencasts on it. It's a very simple configuration management tool to use and understand.
Probably depends on who needs it. With almost no revenue, monetization is only possible through enriching some platform. Maybe other major distro vendors will look at Chef, Puppet, and Salt now and find them more expensive.
Here is the financial disclosure from Red Hat. NOTICE THE FIRST SENTENCE.
The acquisition is expected to have no material impact to Red Hat's revenue for the third and fourth quarters of its fiscal year ending Feb. 29, 2016 (“fiscal 2016”). Management expects that non-GAAP operating expenses for fiscal 2016 will increase by approximately $2.0 million, or ($0.01) per share, in the third quarter and approximately $4.0 million, or ($0.02) per share, in the fourth quarter as a result of the transaction. Red Hat calculates non-GAAP operating expense by subtracting from GAAP operating expense the estimated impact of non-cash share-based compensation expense, which for fiscal 2016 is expected to increase by approximately $1 million for each of the third and fourth quarters, and amortization of intangible assets, which for fiscal 2016 is expected to increase by approximately $1 million for each of the third and fourth quarters, in addition to transaction costs related to business combinations, which are expected to increase by approximately $1 million in the third quarter. Management expects GAAP operating expense to increase for fiscal 2016 by approximately $5 million, or ($0.02) per share, in the third quarter and approximately $6 million, or ($0.02) per share, in the fourth quarter as a result of the transaction. Excluding the operating expense impact as noted above to GAAP and non-GAAP operating margin and GAAP and non-GAAP earnings per share, Red Hat is otherwise re-affirming its fiscal 2016 third quarter and full year guidance provided in its Sept. 21, 2015, earnings press release.
nailer|10 years ago
Not saying that one is better than the other, just that there's more Python out there in sysadmin-land.
mugsie|10 years ago
Ansible is growing its OpenStack support, and they might see an opportunity for the RDO product.
nzoschke|10 years ago
http://michaeldehaan.net/post/118717252307/immutable-infrast...
awjr|10 years ago
I see Ansible as primarily an orchestration tool.
pm90|10 years ago
[0]: https://github.com/openstack/heat-templates/tree/master/hot/...
crdoconnor|10 years ago
That's why it has tags. So you can run just the settings states rather than running the whole 20 minute thing over and over again.
thejerz|10 years ago
Congrats to the Ansible team!
creshal|10 years ago
Going by projects like NetworkManager it'll work okay, but you'll need to be a paying Red Hat customer to get any useful documentation.
devit|10 years ago
Basically you lose a lot of time searching the web for how to do things that you already know how to do in shell, but the benefits are not so clear.
crucialfelix|10 years ago
I've also often found much of my time wasted trying to get ansible to do something simple.
atsaloli|10 years ago
Edit: added link to mhartl's tutorial: http://www.learnenough.com/tutorial-writing
WestCoastJustin|10 years ago
https://sysadmincasts.com/episodes/43-19-minutes-with-ansibl...
https://sysadmincasts.com/episodes/45-learning-ansible-with-...
https://sysadmincasts.com/episodes/46-configuration-manageme...
https://sysadmincasts.com/episodes/47-zero-downtime-deployme...
kzhahou|10 years ago
Or not implying anything but hoping someone here has an answer or analysis?