Just what we need: a Linux distro whose main goal is apparently to promote Intel products. The language used to describe it makes this quite clear: "The goal of Clear Linux OS, is to showcase the best of Intel Architecture technology...". This is a blatant attempt to exclude ARM, which is gaining Linux market share. Whatever innovation they might bring to the table, I will avoid it purely on the basis that its aim is to benefit Intel rather than the user. Dot org my ass.
I don't think they really expect you to want to use it directly. As it says, it's a showcase. But a lot of the technology might make it into other distros in more generic forms.
One area where "pure" containers have an advantage over VMs is I/O.
For network-intensive workloads, there is a choice between the efficiency of SR-IOV and the control & manageability of a virtual NIC like virtio-net. To get efficiency, you need to use SR-IOV, which (the last time I checked) still made lots of admins nervous when running untrusted guests. Sure, the guest could be isolated from internal resources via a VLAN, but it could still be launching malicious code onto the internet, and it may be difficult to track its traffic for billing purposes, especially if you want to differentiate between external & internal traffic. SR-IOV NICs also have a limited number of queues and VFs, so it is hard to over-commit servers. So in order to maintain control of guests, you end up doubling the kernel overhead by using a virtual NIC (e.g., virtio-net) in the VM and a physical NIC in the hypervisor. Now you have twice the overhead, twice the packet pushing, more memory copies, VM exits, etc.
The nice thing about containers is that there is no need to choose. You get the efficiency of running just a single kernel, along with all the accounting and firewalling rules to maintain control & be able to bill the guest.
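A rough sketch of the kind of per-guest accounting the single-kernel model allows, using stock iptables counters on the host. This is an illustration, not anything from Clear Linux: the chain name, container address, and "internal" prefix are all made up, and the commands need root.

```shell
# Hypothetical setup: container "web0" at 172.17.0.2, internal nets in 10.0.0.0/8.
# Create a per-container accounting chain and send its traffic through it.
iptables -N ACCT-web0
iptables -A FORWARD -s 172.17.0.2 -j ACCT-web0

# Split internal vs. external traffic so they can be billed differently.
iptables -A ACCT-web0 -d 10.0.0.0/8 -j RETURN   # internal: counted by this rule
iptables -A ACCT-web0 -j RETURN                 # external: everything else

# Read the per-rule byte/packet counters for billing.
iptables -L ACCT-web0 -v -x -n
```

The same host kernel can also apply firewalling rules to the container's traffic here, which is the "no need to choose" point above.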
Container networking has overheads too: the virtual network (veth) pairs and the NATting are not costless at all, and most people with network-intensive applications are allocating physical interfaces to containers anyway.
How they purport to do packaging is interesting, but I'm not sure it will work well in the end. Having "bundles" that contain immutable sets of packages sounds good from a stability point of view, but unless they are entirely self-contained, you'll undoubtedly run into a library that you need to update for one bundle that then forces you to update another entire bundle. If each bundle is entirely self-contained (allowing it to have its own set of libraries), you're essentially recreating a static binary through package semantics, with the usual downsides of static binaries.
I'm interested in seeing it tried though. The learning is in the doing.
Self-contained packages are not a new idea. For example, PC-BSD has been doing this for years via their PBI package format. See the description of PBI here: http://www.pcbsd.org/en/package-management
I think PBI does de-duplication at the package manager level by manipulating hard-links to common files, rather than installing multiple copies.
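The hard-link approach described above can be sketched in a few lines. This is a toy illustration of the general technique, not PBI's actual implementation: if two package trees ship byte-identical files, keep one copy on disk and hard-link the other name to it.

```python
# Toy sketch of hard-link de-duplication across package trees.
import hashlib
import os
import tempfile

def dedup_tree(root):
    """Replace byte-identical regular files under `root` with hard links."""
    seen = {}  # content digest -> canonical path
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)              # drop the duplicate copy...
                os.link(seen[digest], path)  # ...and hard-link its name to the original
            else:
                seen[digest] = path

# Demo: two "packages" shipping the same library payload.
root = tempfile.mkdtemp()
for pkg in ("pkgA", "pkgB"):
    os.makedirs(os.path.join(root, pkg))
    with open(os.path.join(root, pkg, "libfoo.so"), "wb") as f:
        f.write(b"identical library bytes")

dedup_tree(root)
a = os.stat(os.path.join(root, "pkgA", "libfoo.so"))
b = os.stat(os.path.join(root, "pkgB", "libfoo.so"))
print(a.st_ino == b.st_ino, a.st_nlink)  # True 2: both names share one inode
```

The upside over multiple copies is that the disk cost of N identical files collapses to one; the trade-off is that hard links only work within a filesystem and the package manager must break the link before modifying either copy.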
If they are micro-VMs, container-style, I don't think they will have much need to share any libraries (in theory, at least).
I mean, it is possible to completely isolate all of them.
It may end up very heavy, though. But, and I can be wrong on this, with the constant growth of storage capacity, network bandwidth, and RAM, and the progress made in lightening "containers", I don't think this "heavy" downside I see in immutable infrastructures will be a real issue in the future.
quote: "With kvmtool, we no longer need a BIOS or UEFI; instead we can jump directly into the Linux kernel. Kvmtool is not cost-free, of course; starting kvmtool and creating the CPU contexts takes approximately 30 milliseconds."
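Launching a kernel this way looks roughly like the following sketch, assuming kvmtool is installed as `lkvm` and you have `/dev/kvm`. The kernel and rootfs paths are illustrative, not from the article.

```shell
# Boot straight into a kernel with kvmtool: no BIOS or UEFI stage.
# Paths and sizes are made-up examples.
lkvm run \
    --kernel ./bzImage \
    --disk ./clear-rootfs.img \
    --mem 512 \
    --cpus 1 \
    --params "console=ttyS0 root=/dev/vda"
```

Skipping firmware emulation is where most of the startup-time win comes from; the remaining ~30 ms quoted above is kvmtool itself setting up the VM and CPU contexts.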
I don't quite understand what this is. Is it a Linux distribution that can have a graphical interface like GNOME 3? My question is essentially: is it more like Ubuntu or more like Docker?
Would very much like to see a comparison of Clear Containers and LXD. Would also like to know why Intel decided to do their own thing and not just help with the LXD project.
This does use containers and in fact they have some interesting modifications that they have made to the rkt container runtime to use KVM isolation instead of just namespaces and cgroups. See a link in 4ad's comment for an LWN article.
Those modifications are exciting for me as one of the developers of rkt. We built rkt with this concept of "stages"[1] where the rkt stage1 here is being swapped out from the default which uses "Linux containers" and instead executing lkvm. In this case the Clear Containers team was able to swap out the stage1 with some fairly minimal code changes to rkt which are going upstream. Cool stuff!
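Swapping the stage1 is exposed on the rkt command line, roughly like the sketch below. Caveat: the flag spelling changed across rkt releases (`--stage1-image` early on, later split into `--stage1-path`/`--stage1-name`), and the image path and app name here are illustrative.

```shell
# Run an app under rkt, but with a KVM-based stage1 instead of the
# default namespaces/cgroups stage1. Paths and names are examples only.
rkt run --stage1-image=./stage1-kvm.aci example.com/myapp:1.0
```

Because isolation lives entirely in stage1, the image being run doesn't change at all; only the sandboxing mechanism underneath it does.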
digi_owl | 11 years ago:
Moblin was started because MS balked at making Windows for an Intel chip that didn't offer PCI enumeration.
justincormack | 11 years ago:
There are higher-performance virtual network setups; e.g., see http://www.virtualopensystems.com/en/solutions/guides/snabbs...
zobzu | 11 years ago:
It's a VM really, but packaged like a container. On my laptop, it starts about as fast as a Docker container, i.e. in less than a second.
This is quite impressive.
dbbolton | 11 years ago:
* what tangible benefits would I get from using Clear Linux over my own heavily customized/hand-rolled Linux server?
* how does the update system handle breakage/conflicts?
* are any of Intel's changes likely to make it into other existing distros or kernels?
ramidarigaz | 11 years ago:
http://lwn.net/SubscriberLink/644675/5be656c24083e53b/
Thaxll | 11 years ago:
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] e820: BIOS-provided physical RAM map:
...
[ 1.245851] calling fuse_init+0x0/0x1b6 [fuse] @ 1
[ 1.245853] fuse init (API version 7.23)
[ 1.246299] initcall fuse_init+0x0/0x1b6 [fuse] returned 0 after 431 usecs
lqdc13 | 11 years ago:
And then recompile again whenever a bundle gets updated?
zxcvcxz | 11 years ago:
https://download.clearlinux.org/
[1] https://github.com/coreos/rkt/blob/master/Documentation/deve...