eyberg|2 months ago
Unfortunately containers have always had an absolutely horrendous security story and they degrade performance by quite a lot.
The hypervisor is not going away anytime soon - it is what the entire public cloud is built on.
While you are correct that containers do add more layers - unikernels go the opposite direction and actively remove those layers. Also, imo the "attack surface" is by far the smallest security benefit - other architectural properties, such as the complete lack of an interactive userland, are far more beneficial when you consider what an attacker actually wants to do after landing on your box (e.g. run their software).
When you deploy to AWS you have two layers of linux - one that AWS runs and one that you run - but you don't really need that second layer and you can have much faster/safer software without it.
m132|2 months ago
Suppose you control the entire stack though, from the bare metal up. (Correct me if I'm wrong, but) Toro doesn't seem to run on real hardware, you have to run it atop QEMU or Firecracker. In that case, what difference does it make if your application makes I/O requests through paravirtualized interfaces of the hypervisor or talks directly to the host via system calls? Both ultimately lead to the host OS servicing the request. There isn't any notable difference between the kernel/hypervisor and the user/kernel boundary in modern processors either; most of the time, privilege escalations come from errors in the software running in the privileged modes of the processor.
Technically, in the former case, besides exploiting the application, a hypothetical attacker will also have to exploit a flaw in QEMU to start processes or gain further privileges on the host, but that's just due to a layer of indirection. You can accomplish this without resorting to hardware virtualization. Once in QEMU, the entire assortment of your host's system calls and services is exposed, just as if you ran your code as a regular user space process.
This is the level at which you want to block exec() and other functionality your application doesn't need, so that neither QEMU nor your code run directly can do anything out of scope. Adding a layer of indirection while still leaving the user/kernel or unikernel/hypervisor junction points unsupervised will only stop unmotivated attackers looking for low-hanging fruit.
toast0|2 months ago
Some unikernels are intended to run under a hypervisor or on bare metal. Bare metal means you need some drivers, but if you have a use case for a unikernel on bare metal, you probably don't need to support the vast universe of devices, maybe only a few instances of a couple types of things.
I've got a not-at-all-production-ready hobby OS that's adjacent to a unikernel; it runs in virtio hypervisors and on bare metal, with support for one NIC. In its intended hypothetical use, it would boot from PXE, with storage on nodes running a traditional OS, so supporting a handful of NICs would probably be sufficient. Modern NICs tend to be fairly similar in interface, so if the manufacturer provides documentation, it shouldn't take too long to add support, at least once you've got one driver doing multiple tx/rx queues and all that jazz... plus or minus optimization.
For storage, you can probably get by with two drivers, one for sata/ahci and one for nvme. And likely reuse an existing filesystem.
eyberg|2 months ago
One of the things that might not be so apparent is that when you deploy these to something like AWS, all the users/process mgmt/etc. gets shifted up and out of the instance you control and put into the cloud layer - I feel that would be hard to do with physical boxen because it becomes a slippery slope of certain operations (such as updates) needing auth, for instance.
mvaralar|2 months ago
Toro can run on bare metal, although I stopped supporting that a few years ago. I tagged in master the commit where this happened. Also, I removed the TCP/IP stack in favor of VSOCK. Those changes, though, could be reversed in case there is interest in those features.
laurencerowe|2 months ago
Hypervisors expose a much smaller API surface area to their tenants than an operating system does to its processes which makes them much easier to secure.
j-krieger|2 months ago
This is demonstrably untrue.
ahepp|2 months ago
What does that have to do with unikernel vs more traditional VMs? You can build a rootfs that doesn't have any interactive userland. Lots of container images do that already.
I am not a security researcher, but I wouldn't think it would be too hard to load your own shell into memory once you get access to it. At least, compared to pulling off an exploit in the first place.
I would think that merging kernel and user address spaces in a unikernel would, if anything, make it more vulnerable than a design using similar kernel options that did not attempt to merge everything into the kernel. Since now every application exploit is a kernel exploit.
eyberg|2 months ago
Also, merging the address space is not a necessity. In fact, 64-bit mode (which essentially all modern cloud software runs in) mandates virtual memory to begin with, and many unikernel projects support ELF loading.
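To make the ELF-loading point concrete, here's a minimal sketch (assuming Linux and a 64-bit little-endian ELF) that parses the program headers of the running interpreter and prints the permission bits each loadable segment carries - the same per-segment r/w/x split a unikernel's ELF loader can honor instead of mapping everything into one flat space:

```python
import struct, sys

PT_LOAD = 1
segments = []  # (program header index, "rwx"-style permission string)

# sys.executable is an ELF binary on Linux; read its headers directly.
with open(sys.executable, "rb") as f:
    ehdr = f.read(64)
    assert ehdr[:4] == b"\x7fELF" and ehdr[4] == 2, "expects a 64-bit ELF"
    e_phoff, = struct.unpack_from("<Q", ehdr, 0x20)          # program header offset
    e_phentsize, e_phnum = struct.unpack_from("<HH", ehdr, 0x36)
    f.seek(e_phoff)
    for i in range(e_phnum):
        ph = f.read(e_phentsize)
        p_type, p_flags = struct.unpack_from("<II", ph, 0)
        if p_type == PT_LOAD:  # a segment the loader actually maps
            perms = "".join(c if p_flags & bit else "-"
                            for c, bit in (("r", 4), ("w", 2), ("x", 1)))
            segments.append((i, perms))
            print(f"LOAD segment {i}: {perms}")
```

On a typical distro build you'll see read-only, read+execute, and read+write segments, but never write+execute - exactly the separation the thread is arguing a unikernel can keep even with a single address space.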
pjmlp|2 months ago
The story is quite different in HP-UX, Aix, Solaris, BSD, Windows, IBM i, z/OS,...
pixl97|2 months ago
Aren't there ways of overwriting or extending the existing kernel memory to contain a new application if an attacker is able to attack the running unikernel?
What protections are provided by the unikernel to prevent this?
eyberg|2 months ago
What becomes harder is if you have a binary that forces you to rewrite the program in memory, as you suggest. That's where classic page protections come into play, such as not exec'ing rodata, not writing to text, not exec'ing heap/stack, etc. Just to note that not all unikernel projects have this, and even if they do it might be trivial to turn them off. The kernel I'm involved with (Nanos) has other features such as 'exec protection', which prevents the app from exec-mapping anything not already explicitly mapped exec.
Running arbitrary programs, which is what a lot of exploit payloads try to achieve, is pretty different from having to stuff whatever they want to run inside the payload itself. For example, if you look at most malware it's not just one program that gets run - it's like 30. Droppers exist solely to load third-party programs on compromised systems.
dheera|2 months ago
Python package management is a disaster. There should be ways of having multiple versions of a package coexist in /usr/lib/python, nicely organized by package name and version number, and import the exact version your script wants, without containerizing everything.
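What's described here can at least be sketched with nothing but sys.path manipulation. Below, a throwaway temp directory with a `mylib/<version>/mylib/` layout stands in for the hypothetical versioned `/usr/lib/python` tree - the package name, versions, and `import_versioned` helper are all made up for illustration:

```python
import importlib, os, sys, tempfile

# Hypothetical layout: <root>/<pkg>/<version>/<pkg>/__init__.py
root = tempfile.mkdtemp()
for ver in ("1.0.0", "2.0.0"):
    pkg_dir = os.path.join(root, "mylib", ver, "mylib")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write(f"__version__ = {ver!r}\n")

def import_versioned(name, version):
    """Import `name` from the versioned tree, pinning an exact version."""
    sys.path.insert(0, os.path.join(root, name, version))
    try:
        sys.modules.pop(name, None)       # drop any cached copy
        importlib.invalidate_caches()     # dirs were created at runtime
        return importlib.import_module(name)
    finally:
        sys.path.pop(0)

m = import_versioned("mylib", "2.0.0")
print(m.__version__)
```

This is only half the battle, of course - the hard part real tools solve is dependency resolution when two libraries in one process each pin a different version of the same package.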
Electron applications are the other type of "fuck it" solution. There should be ways of writing good-looking native apps in JavaScript without actually embedding a full browser. JavaScript is actually a nice language to write front-ends in.
catlifeonmars|2 months ago
Have you tried uv?
soulofmischief|2 months ago
Sometimes, the reduction of development friction is the only reason a product ends up in your hands.
I say this as someone whose professional toolkit includes Docker, Python, and Electron - not necessarily tools of choice, but I'm one guy trying to build a lot of things and life is short. This is not a free lunch, and the optimizer within me screams out whenever performance is left on the table, but everything is a tradeoff. I'm always looking for better tools and keep my eye on projects such as Tauri.
ahepp|2 months ago
They can just say "here's the source code, here's a container where it works, the rest is the OS maintainer's job, and if Debian users running 10 year old software bug me I'm just gonna tell them to use the container"
fragmede|2 months ago
I've written my fair share of GUIs, and React (and thus Javascript) is great compared to, I don't know, PHP, but CSS is the absolute devil.