top | item 9572478

Clear Linux Project

164 points | Merkur | 11 years ago | clearlinux.org | reply

56 comments

[+] jcoffland|11 years ago|reply
Just what we need: a Linux distro whose main goal is apparently to promote Intel products. The language used to describe it makes this quite clear: "The goal of Clear Linux OS, is to showcase the best of Intel Architecture technology...". This is a blatant attempt to exclude ARM, which is gaining Linux market share. Whatever innovation they might bring to the table, I will avoid it purely on the basis that its aim is to benefit Intel rather than the user. Dot org my ass.
[+] kasabali|11 years ago|reply
The site is weak but you should check the LWN link given in this thread. They have done some cool stuff actually.
[+] vidarh|11 years ago|reply
I don't think they really expect you to want to use it directly. As it says, it's a showcase. But a lot of the technology might make it into other distros in more generic forms.
[+] digi_owl|11 years ago|reply
Not the first time Intel has done this.

Moblin was started because MS balked at making Windows for an Intel chip that didn't offer PCI enumeration.

[+] drewg123|11 years ago|reply
One issue where "pure" containers have an advantage over VMs is IO.

For network-intensive workloads, there is a choice between the efficiency of SR-IOV and the control and manageability of a virtual NIC like virtio-net. To get efficiency, you need to use SR-IOV, which (the last time I checked) still made lots of admins nervous when running untrusted guests. Sure, the guest could be isolated from internal resources via a VLAN, but it could still be launching malicious code onto the internet, and it may be difficult to track its traffic for billing purposes, especially if you want to differentiate between external and internal traffic. SR-IOV NICs also have a limited number of queues and VFs, so it is hard to over-commit servers. So in order to maintain control of guests, you end up doubling the kernel overhead by using a virtual NIC (e.g., virtio-net) in the VM and a physical NIC in the hypervisor. Now you have twice the overhead, twice the packet pushing, more memory copies, VM exits, etc.

The nice thing about containers is that there is no need to choose. You get the efficiency of running just a single kernel, along with all the accounting and firewalling rules to maintain control & be able to bill the guest.
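A rough sketch of the SR-IOV side of that trade-off; the interface name is a placeholder, and the sysfs knob depends on driver support:

```shell
# Carve 4 virtual functions (VFs) out of a physical SR-IOV NIC
# (interface name is hypothetical; requires an SR-IOV-capable driver):
echo 4 > /sys/class/net/enp3s0/device/sriov_numvfs

# Isolate VF 0 on a VLAN, as described above -- the guest still
# reaches the internet, which is exactly the control problem:
ip link set enp3s0 vf 0 vlan 100

# The VF is then handed to the guest via PCI passthrough, so its
# packets bypass the hypervisor's network stack entirely.
```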

[+] justincormack|11 years ago|reply
SR-IOV should not really make you nervous; it uses the IOMMU. Billing might have some issues, I guess.

There are higher-performance virtual network setups, e.g. see http://www.virtualopensystems.com/en/solutions/guides/snabbs...

Container networking has overheads: the virtual network pairs and the NATing are not costless at all, and most people with network-intensive applications are allocating physical interfaces to containers anyway.
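For reference, a minimal sketch of the veth-pair-plus-NAT path being described (requires root; names and addresses are made up):

```shell
# Create a network namespace standing in for the container:
ip netns add guest

# A veth pair: every packet crosses both ends of this virtual cable.
ip link add veth-host type veth peer name veth-guest
ip link set veth-guest netns guest

# Address both ends and bring them up:
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec guest ip addr add 10.0.0.2/24 dev veth-guest
ip netns exec guest ip link set veth-guest up

# The NAT step whose cost the parent mentions:
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE
```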

[+] kbenson|11 years ago|reply
How they purport to do packaging is interesting, but I'm not sure it will work well in the end. Having "bundles" that contain immutable sets of packages sounds good from a stability point of view, but unless they are entirely self-contained, you'll undoubtedly run into a library that you need to update for one bundle, which then forces you to update another entire bundle. If each bundle is entirely self-contained (allowing it to have its own set of libraries), you're essentially recreating a static binary through package semantics. This comes with the usual downsides of static binaries.

I'm interested in seeing it tried though. The learning is in the doing.
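For what it's worth, Clear Linux's updater tool is called swupd; a hedged sketch of the bundle workflow (subcommand names from their materials, exact flags may differ, and the bundle name is hypothetical):

```shell
# Bundles are added and removed whole, not as individual packages:
swupd bundle-add editors

# Updates move the entire OS between versioned releases, which is
# how the "immutable sets of packages" property is maintained:
swupd update
```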

[+] drewg123|11 years ago|reply
Self contained packages are not a new idea. For example, PC-BSD has been doing this for years, via their PBI package format. See the description of PBI here: http://www.pcbsd.org/en/package-management

I think PBI does de-duplication at the package manager level by manipulating hard-links to common files, rather than installing multiple copies.
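The hard-link trick is easy to demonstrate with plain coreutils; two package directories share one copy of a file on disk:

```shell
# Two "packages" that want the same library file:
mkdir -p pkg-a pkg-b
echo "shared library bits" > pkg-a/libfoo.so

# Hard link instead of copy -- same inode, one set of blocks on disk:
ln pkg-a/libfoo.so pkg-b/libfoo.so

# Link count is now 2, and the contents are identical by definition:
stat -c '%h' pkg-a/libfoo.so          # prints 2
cmp pkg-a/libfoo.so pkg-b/libfoo.so
```

Deleting pkg-a's copy only drops the link count; pkg-b keeps working, which is what makes this safe for per-package removal.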

[+] tbronchain|11 years ago|reply
If they are micro-VMs, container-style, I don't think they will have much need to share any library (in theory, at least).

I mean, it is possible to completely isolate them all.

It may end up very heavy, though. But, and I can be wrong on this, with the constant growth of storage capacity, network bandwidth, and RAM, and the progress made in lightening "containers", I don't think this "heavy" downside I see in immutable infrastructures will be a real issue in the future.

[+] zobzu|11 years ago|reply
I just tried it. It is fast.

It's a VM really, but packaged like a container. On my laptop, it starts about as fast as a Docker container, i.e. in less than a second.

This is quite impressive.

[+] zymhan|11 years ago|reply
I'm not so sure that running a container is directly analogous to running it in a VM.
[+] dbbolton|11 years ago|reply
After reading the overview and features, I'm left wondering:

* what tangible benefits would I get from using Clear Linux over my own heavily customized/hand-rolled Linux server?

* how does the update system handle breakage/conflicts?

* are any of Intel's changes likely to make it into other existing distros or kernels?

[+] Thaxll|11 years ago|reply
I just tried it on my desktop; woot, it's super fast!

  [ 0.000000] KERNEL supported cpus:
  [ 0.000000]   Intel GenuineIntel
  [ 0.000000] e820: BIOS-provided physical RAM map:
  ...
  [ 1.245851] calling fuse_init+0x0/0x1b6 [fuse] @ 1
  [ 1.245853] fuse init (API version 7.23)
  [ 1.246299] initcall fuse_init+0x0/0x1b6 [fuse] returned 0 after 431 usecs

[+] n3mes1s|11 years ago|reply
read the hypervisor part of the lwn article: https://lwn.net/SubscriberLink/644675/5be656c24083e53b/

quote: "With kvmtool, we no longer need a BIOS or UEFI; instead we can jump directly into the Linux kernel. Kvmtool is not cost-free, of course; starting kvmtool and creating the CPU contexts takes approximately 30 milliseconds."
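For context, booting a guest with kvmtool looks roughly like this (the binary is `lkvm`; the kernel and disk image paths are placeholders):

```shell
# Jump straight into a kernel image -- no BIOS/UEFI stage:
lkvm run -k ./bzImage -d ./clear-disk.img -m 512 -c 1
```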

[+] voltagex_|11 years ago|reply
Anyone know what they might be doing for the speed increase?
[+] mbrzusto|11 years ago|reply
I wonder if it builds with icc? It seems like a matter of pride that they should get that working.
[+] pyvpx|11 years ago|reply
That was my first guess at "how'd they make it faster?" icc is sometimes a shockingly better compiler (read: it produces faster compiled code).
[+] Meai|11 years ago|reply
I don't quite understand what this is: is it a Linux distribution that can have a graphical interface like GNOME 3? My question is essentially: is it more like Ubuntu or more like Docker?
[+] oldsj|11 years ago|reply
More like CoreOS
[+] rgborn|11 years ago|reply
Would very much like to see a comparison of Clear Containers and LXD. Would also like to know why Intel decided to do their own thing and not just help with the LXD project.
[+] lqdc13|11 years ago|reply
Unless I'm missing something, are developers expected to manually compile everything that isn't in a bundle?

And then recompile again whenever a bundle gets updated?

[+] mrmondo|11 years ago|reply
Correct me if I'm wrong but shouldn't 'Cloud' have a lower case C if it's not a product?
[+] Merkur|11 years ago|reply
I didn't find very much information about it... yet. :( Anyone played with it?
[+] smegel|11 years ago|reply
I am surprised they didn't go down the container route for OS updates like CoreOS. I think I like that approach.
[+] philips|11 years ago|reply
This does use containers, and in fact they have made some interesting modifications to the rkt container runtime to use KVM isolation instead of just namespaces and cgroups. See the LWN article linked in 4ad's comment.

Those modifications are exciting for me as one of the developers of rkt. We built rkt with this concept of "stages"[1]; here the default stage1, which uses "Linux containers", is being swapped out to execute lkvm instead. In this case the Clear Containers team was able to swap out the stage1 with some fairly minimal code changes to rkt, which are going upstream. Cool stuff!

[1] https://github.com/coreos/rkt/blob/master/Documentation/deve...
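The stage1 swap described above looked roughly like this on the rkt command line (the flag name is from rkt's docs of that era; the stage1 image path and app name are hypothetical):

```shell
# Run an app with a KVM-based stage1 instead of the default
# namespaces/cgroups stage1 (paths are made up):
rkt run --stage1-image=/usr/share/rkt/stage1-lkvm.aci example.com/myapp
```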

[+] frozenport|11 years ago|reply
Would be cool if it built with ICC, like the old Linux DNA project.