acje|10 months ago
I find it inspiring that we have gotten to the point of models that classify vulnerabilities at a systems level. However, I also think we are barking up the wrong tree. There is IMHO something wrong with the current strategy of scaling up the von Neumann architecture. It leads to fragile software partitioning, noisy neighbors, and communication through shared memory that is both slow and sometimes unintended. I've tried to lay this out in detail here https://lnkd.in/dRNSYPWC
transpute|10 months ago
https://www.microsoft.com/en-us/research/blog/barrelfish-exp...
> “In the next five to 10 years,” Barham predicts, “there are going to be many varieties of multicore machines. There are going to be a small number of each type of machine, and you won’t be able to afford to spend two years rewriting an operating system to work on each new machine that comes out. Trying to write the OS so it can be installed on a completely new computer it’s never seen before, measure things, and think about the best way to optimize itself on this computer—that’s quite a different approach to making an operating system for a single, specific multiprocessor.” The problem, the researchers say, stems from the use of a shared-memory kernel with data structures protected by locks. The Barrelfish project opts instead for a distributed system in which each unit communicates explicitly.
Public development stopped in March 2020, https://github.com/BarrelfishOS/barrelfish & https://barrelfish.org
_huayra_|10 months ago
Concretely, there are a lot of things that are getting more "NOC-y" (network-on-chip). I'm not an OS expert, but I deal with a lot of forthcoming features from hardware vendors in my current role. Most are abstracted as some sort of PCI device that does a little "mailbox protocol" to get some values (perhaps directly, perhaps read out of memory upon success). Examples are HSMP from AMD and OOBMSM from Intel. In both, the OS doesn't directly configure a setting; it asks some other chunk of code (provided by the CPU vendor) to configure the setting. Mothy's argument is that this is an architectural failure, and we should create OSes that can deal with this NOC-y heterogeneous architecture.
Even if one disagrees with Mothy's premise, this is a banger of a talk, well worth watching and easy to understand.
[0] https://www.usenix.org/conference/atc21/presentation/fri-key...
egberts1|10 months ago
Anyone remember the debate between microkernel vs monolithic kernel?
https://en.m.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_d...
nand_gate|10 months ago
As for barrel CPUs to replace SMT... crickets.