"All problems in computer science can be solved by another level of indirection" - David Wheeler
That's what "containers" are, of course. There's so much state in OS file namespaces that running any complex program requires "installation" first. That's such a mess that virtual machines were created to allow a custom OS environment for a program. Then that turned into a mess, with, for example, a large number of canned AWS instances to choose from. So now we have another level of indirection, "containers".
Next I expect we'll have container logistics management startups. These will store your container in a cloud-based "warehouse", and will continuously take bids for container execution resources. Containers will be automatically moved around from Amazon to Google to Rackspace, etc. depending on who's offering the lowest bid right now.
It's more like ping-pong. Things start off simply, but over time, as the layers of abstraction pile up, they become brittle and unworkable.
I view containers as more of a reworking of a key computational abstraction (VMs) than an evolution of them. We finally have operating systems with enough inter-process isolation, sufficiently capable filesystems (layering), etc. that we can throw out 80% of the other unnecessary junk of VMs like second kernels, duplicate schedulers, endless duplication of standard system libraries, etc.
So it's more like we've hacked/refactored virtualization into a more usable state, and gotten rid of a lot of useless garbage that it turns out we didn't actually need. It's a lot like how a big software system evolves, now that I think about it.
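To make the "no second kernel" point concrete: on Linux, the isolation is just the kernel's namespace machinery, which any process can ask the one shared kernel for directly. A minimal sketch, not any particular container runtime; Linux only, needs root, and the CLONE_* constants come from <sched.h>:

    # Ask the single shared kernel for a private view of the hostname
    # and mount table -- no hypervisor, no second kernel, one syscall.
    import ctypes
    import os
    import socket

    CLONE_NEWUTS = 0x04000000  # private hostname/domainname namespace
    CLONE_NEWNS  = 0x00020000  # private mount namespace

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    pid = os.fork()
    if pid == 0:
        # Child: detach into fresh UTS + mount namespaces.
        if libc.unshare(CLONE_NEWUTS | CLONE_NEWNS) != 0:
            raise OSError(ctypes.get_errno(), "unshare failed (need root, Linux)")
        socket.sethostname("container-demo")
        print("child sees hostname:", socket.gethostname())
        os._exit(0)

    os.waitpid(pid, 0)
    # The parent, still in the original namespace, is untouched.
    print("parent still sees:", socket.gethostname())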
IMO, the problem is that your standard OS has way too much stuff running.
A SaaS app running in production should be about the size of your binary and the libraries it uses. Instead, we have X, SMTP, terminals, and a full filesystem running. Home directories and UIDs make no sense in an app that uses no Unix users except for the one you're forced to use.
I'd really like to see a much smaller, simpler, non-posix OS for running server apps.
In this case, the problem isn't being solved -- solving the problem would mean moving away from dependencies on the global OS namespace by relearning how to write self-contained applications (some people never forgot).
Containers are just a big wad of duct tape holding together the ball of mud that comprises most web applications' server-side components.
Add containers, and you haven't solved the problem, you've just made two problems.
It's not really adding another level of indirection; it's taking one away. The pain of change remains in that you have to internalize yet another new layer, but at least this way you get to leave VMs behind. It's trading one layer for another, slightly more granular one instead of piling another on top.
Docker in general is just another swing of the granularity pendulum. Since the rise of distributed environments in the late 1980s, the pendulum has swung back and forth between microservices (which become a version-control tangle as they move independently) and monolithic applications (which become a bloatware problem, with whole kitchen sinks to move around). The core problem is that software is complex, and at a certain level you can't take complexity away; you can only push it around. A large number of small pieces, or a small number of large pieces: which kneecap would you rather be shot in?
After a few years of trending toward monoliths via Chef/Puppet/Ansible DevOps automation, Docker is going in a different direction, toward fragmented SOA. It'll go that way for a while until it becomes too painful, and then new tech will come along to push us back to the monolithic approach, until that hurts too much...
The good thing is, these cycles come in response to improvements in technology and performance. Our tools get better all the time, and configuration management struggles to keep up. It's awesome! Docker will rule for a while and then be passed by in favor of something new, but it'll leave a permanent mark, just as Chef did, and Maven, and Subversion, and Ant, and Make, and CVS, and every other game-changer.
Security-wise, if I understand correctly, this is a very interesting offering.
1. The containers live on "your" VMs so you get the isolation of a virtual machine and do not worry about the other tenants' containers.
2. The VMs are part of a "private cloud", i.e., the internal network is not accessible by other tenants' VMs and containers.
#2 is what worried me the most in other container service offerings. It's easy to overlook protecting your internal IPs when you manage VMs, and it's even easier (and more expected) when you deploy containers.
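If you want to check #2 for your own deployment, the audit is mechanical: look for world-open ingress on the container hosts. A hedged boto3 sketch; the "role: container-host" tag is a made-up convention, so substitute however you actually label your hosts:

    # Flag container-host instances whose security groups allow ingress
    # from 0.0.0.0/0. Assumes boto3 with credentials already configured.
    import boto3

    ec2 = boto3.client("ec2")

    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:role", "Values": ["container-host"]}]
    )["Reservations"]

    for res in reservations:
        for inst in res["Instances"]:
            group_ids = [g["GroupId"] for g in inst["SecurityGroups"]]
            for sg in ec2.describe_security_groups(GroupIds=group_ids)["SecurityGroups"]:
                for perm in sg["IpPermissions"]:
                    if any(r.get("CidrIp") == "0.0.0.0/0"
                           for r in perm.get("IpRanges", [])):
                        print(inst["InstanceId"], sg["GroupId"],
                              "open to the world on port",
                              perm.get("FromPort", "all"))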
I'm here at AWS re:Invent and just saw the EC2 Container Service presentation. They specifically targeted security as part of their design.
Basically, you launch a cluster of EC2 instances that are "available" for containers to launch into. So these are your instances, running in your VPCs. It's really the same security profile as standard VPCs, plus whatever security issues your particular Docker containers expose.
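You can see that security profile directly in the API: a cluster is nothing but a list of your own instances. A hedged sketch against today's boto3 ECS client (the preview-era API may differ; the cluster name "web" is invented):

    # Resolve an ECS cluster to the plain EC2 instances (and VPC
    # subnets) that back it.
    import boto3

    ecs = boto3.client("ecs")
    ec2 = boto3.client("ec2")

    arns = ecs.list_container_instances(cluster="web")["containerInstanceArns"]
    hosts = ecs.describe_container_instances(
        cluster="web", containerInstances=arns
    )["containerInstances"]

    ids = [h["ec2InstanceId"] for h in hosts]
    for res in ec2.describe_instances(InstanceIds=ids)["Reservations"]:
        for inst in res["Instances"]:
            # Each container host is an ordinary instance in your VPC.
            print(inst["InstanceId"], inst["VpcId"], inst["SubnetId"],
                  inst.get("PrivateIpAddress"))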
I'm disappointed that this requires an invite, particularly so soon after Container Engine, which I was able to try out immediately while still watching Cloud Platform Live the other day.
Is this typical for new AWS offerings?
It makes me wonder if it's something that truly isn't ready for prime time but is being rushed out by the mounting Docker hype and the GKE announcement.
Considering they've been tweeting about it [1] since before their competitors announced things, I'd say it's unlikely to be a "response". It's far more likely that Docker has now been out long enough for the various providers to build services around it; AWS already had some Docker support built into Elastic Beanstalk back in April [2]. It's also pretty common to release services as previews. GCE lists theirs as an alpha-quality product.

[1] https://twitter.com/jeffbarr/status/529493907839533056

[2] http://blog.docker.com/2014/04/aws-elastic-beanstalk-launche...
According to one of the AWS devs, they plan to start honoring invite requests in about 2-4 weeks. It appears to be in preview right now mostly because the loose ends aren't tied up yet. For example, in their demo today, they launched EC2 instances in a cluster using an AMI that's specially enabled for the EC2 Container Service but which is not yet publicly available.

E.g. Amazon Kinesis:

Preview, 14 Nov 2013: http://aws.amazon.com/blogs/aws/amazon-kinesis-real-time-pro...

Got access less than a week after that.

GA, 16 Dec 2013: http://aws.amazon.com/about-aws/whats-new/2013/12/16/amazon-...
Anyone have any insight into whether this handles service discovery? It claims "cluster management", which usually implies discovery, but there is no mention of it. Maybe Amazon is expecting you to handle that?
I was wondering this as well. It seems that they will provide for constraints around co-located containers (similar to pods in Kubernetes), but I'm not sure how discovery for containers scheduled across hosts is meant to take place.
...including the Docker repository and image, memory and CPU requirements, and how the containers are linked to each other. You can launch as many tasks as you want from a single task definition file that you can register with the service.
Very few details, but it looks like it can move containers across hosts. If so, this is great news.
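Reading that quote against the API, a task definition covering those fields might look roughly like this boto3 sketch. The family, container names, images, and resource numbers are all invented; only the field names come from the announcement:

    # One task definition: images, CPU/memory limits, and links between
    # containers. Launch as many copies as you want from it.
    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="web-stack",
        containerDefinitions=[
            {
                "name": "redis",
                "image": "redis:2.8",     # Docker repository and image
                "cpu": 256,               # CPU units (1024 = one core)
                "memory": 128,            # hard memory limit, MiB
                "essential": True,
            },
            {
                "name": "web",
                "image": "example/web:latest",
                "cpu": 512,
                "memory": 256,
                "links": ["redis"],       # how the containers are linked
                "portMappings": [{"containerPort": 8000, "hostPort": 8000}],
                "essential": True,
            },
        ],
    )

    # Launch as many tasks as you want from the single definition.
    ecs.run_task(cluster="web", taskDefinition="web-stack", count=2)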
You install an ECS agent on each instance in the cluster; it runs alongside the Docker daemon and reports state back to the central ECS API. You can query that API for service discovery.
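If that's accurate, poor-man's discovery is just a walk over that API: task -> container instance -> EC2 instance -> (private IP, host port). A hedged boto3 sketch, with the cluster and task family names invented:

    # Resolve every running task in a family to (private IP, host port)
    # endpoints by chaining the ECS and EC2 describe calls.
    import boto3

    ecs = boto3.client("ecs")
    ec2 = boto3.client("ec2")

    def discover(cluster, family):
        arns = ecs.list_tasks(cluster=cluster, family=family)["taskArns"]
        if not arns:
            return []
        endpoints = []
        for task in ecs.describe_tasks(cluster=cluster, tasks=arns)["tasks"]:
            # Map the task back to the instance the agent placed it on.
            ci = ecs.describe_container_instances(
                cluster=cluster,
                containerInstances=[task["containerInstanceArn"]],
            )["containerInstances"][0]
            res = ec2.describe_instances(InstanceIds=[ci["ec2InstanceId"]])
            ip = res["Reservations"][0]["Instances"][0]["PrivateIpAddress"]
            for container in task["containers"]:
                for binding in container.get("networkBindings", []):
                    endpoints.append((ip, binding["hostPort"]))
        return endpoints

    print(discover("web", "web-stack"))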
No mention of Elastic Load Balancing integration or even EBS integration. Thus avoiding the two hardest problems in container management.
To make this not suck, you will still need a proxy layer that maps ELB listeners to your containers, and if you intend to run containers with persistent storage, you are going to be in for a fun ride.
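The crudest version of that glue, sketched with the classic ELB API (names invented): register every host in the cluster behind a load balancer whose listener forwards to one fixed host port. Note the baked-in assumption that every host publishes the same hostPort, which is exactly the constraint a real proxy layer would remove:

    # Hand-rolled ELB "integration": put all container hosts in a
    # cluster behind an existing classic load balancer.
    import boto3

    ecs = boto3.client("ecs")
    elb = boto3.client("elb")

    arns = ecs.list_container_instances(cluster="web")["containerInstanceArns"]
    hosts = ecs.describe_container_instances(
        cluster="web", containerInstances=arns
    )["containerInstances"]

    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-elb",  # invented name; must already exist
        Instances=[{"InstanceId": h["ec2InstanceId"]} for h in hosts],
    )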
It would probably be best to integrate storage-system support into Docker itself, perhaps as a script-hook interface similar to the way Xen works.
So Azure, GCE, and now EC2 all support Docker natively. Sorry, Canonical and LXD, but Docker has basically won. There simply isn't a good reason to "compete" when you can just add features to Docker at this point.