top | item 37464622

pluijzer | 2 years ago

In a way it became the complete opposite of how it started. At first, one OS for many users, each with many processes. Now, with containers, microservices etc., we have an OS per service/process. Still, the original abstractions work surprisingly well, though it makes me wonder what a complete redesign aimed at modern usage would look like.

tannhaeuser|2 years ago

But the question is why did we arrive at containers and one OS per "microservice"? Has memory-to-IO bandwidth, scalability requirements, or whatever really changed (like in orders of magnitude) to warrant always-async programming models, even though these measurably destroy process isolation and worsen developer efficiency? After almost 50 years of progress? Or is it the case that containers are more convenient for cloud providers, selling more containers is more profitable, inventing new async server runtimes is more fun and/or the regular Linux userspace (shared lib loading) is royally foobar'd, or at least cloud providers tell us it is?

bluetomcat|2 years ago

The traditional Unix IO model broke with the Berkeley sockets API introduced in 1982. The obvious "Unix" way to handle a TCP connection concurrently was to fork() a separate process servicing the connection synchronously, but that doesn't scale well with many connections. Then they introduced non-blocking sockets and select(), then poll(), and now Linux has its own epoll. All these "async programming models" are ultimately based on that system API.

nijave|2 years ago

>But the question is why did we arrive at containers and one OS per "microservice"?

I think it makes more sense if you consider the interim transition through other isolation mechanisms: commodity servers instead of mainframes, then VMs, then containers, as a way to get more isolation/security than the traditional multi-user model with less overhead than an entire machine.

Obviously cloud providers want to push for solutions that offer higher densities, but those same cost/efficiency incentives exist outside cloud providers.

I'd say we've more accurately been trying to reinvent proprietary mainframes on commodity hardware.

nijave|2 years ago

Calling an independent set of libraries in an isolated space an entire OS is a bit of a stretch. Containers generally don't contain an init system and a bunch of services (sure, they technically can, and some do), but there's generally much less running than in an entire OS.

lmm|2 years ago

Most of the time the OS is just overhead now. Look at unikernels for one possible future.

cmiller1|2 years ago

I'm not sure the exokernel/unikernel approach by itself is the path forward. While the library operating system approach makes a lot of sense for applications where raw throughput/performance is crucial, it doesn't offer as much in the way of development luxuries, stability, or security as most modern operating systems. Furthermore, outside of very specific applications, the bare-metal kind of performance that the exokernel approach promises isn't really that useful. That said, I suspect a hybrid approach may be viable where two extremes are offered: an extremely well isolated and secure microkernel which offers all of the luxuries of a modern operating system built on top of an exokernel, which can also be accessed directly through the library operating system approach for specific performance-critical applications (say, network and disk operations for a server).

magicalhippo|2 years ago

> Most of the time the OS is just overhead now.

And they became so good at it that we added more OSes on top, with our VMs and OS-like web browsers...