acje|10 months ago

I find it inspiring that we have reached the point of models that classify vulnerabilities at a systems level. However, I also think we are barking up the wrong tree. There is, IMHO, something wrong with the current strategy of scaling up the von Neumann architecture: it leads to fragile software partitioning, noisy neighbors, and communication through shared memory that is both slow and sometimes unintended. I've tried to lay this out in detail here: https://lnkd.in/dRNSYPWC

transpute|10 months ago

Have you looked at Barrelfish (2011) from Microsoft Research and ETH Zurich?

https://www.microsoft.com/en-us/research/blog/barrelfish-exp...

> “In the next five to 10 years,” Barham predicts, “there are going to be many varieties of multicore machines. There are going to be a small number of each type of machine, and you won’t be able to afford to spend two years rewriting an operating system to work on each new machine that comes out. Trying to write the OS so it can be installed on a completely new computer it’s never seen before, measure things, and think about the best way to optimize itself on this computer—that’s quite a different approach to making an operating system for a single, specific multiprocessor.” The problem, the researchers say, stems from the use of a shared-memory kernel with data structures protected by locks. The Barrelfish project opts instead for a distributed system in which each unit communicates explicitly.

Public development stopped in March 2020, https://github.com/BarrelfishOS/barrelfish & https://barrelfish.org

_huayra_|10 months ago

Mothy Roscoe, the Barrelfish PI, gave a really great talk at ATC 2021 [0]. A lot of OS research is basically "here's a clever way we bypassed Linux to touch hardware directly", but his argument is that the "VAX model" of hardware that Linux still uses has ossified, and CPU manufacturers have to build complexity to support that.

Concretely, a lot of things are getting more "NOC-y" (network-on-chip). I'm not an OS expert, but I deal with a lot of forthcoming features from hardware vendors in my current role. Most are abstracted as some sort of PCI device that does a little "mailbox protocol" to get some values (perhaps directly, perhaps read out of memory upon success). Examples are HSMP from AMD and OOBMSM from Intel. In both, the OS doesn't directly configure a setting, but asks some other chunk of code (provided by the CPU vendor) to configure it. Mothy's argument is that that is an architectural failure, and we should create OSes that can deal with this NOC-y heterogeneous architecture.

Even if one disagrees with Mothy's premise, this is a banger of a talk, well worth watching and easy to understand.

[0] https://www.usenix.org/conference/atc21/presentation/fri-key...

nand_gate|10 months ago

Vapourware; what they published (microkernels) is nothing new.

As for barrel CPUs to replace SMT... crickets.

simonask|10 months ago

I think your take is interesting, but your article does not go into detail about how to address these problems at the architectural level. Would you like to elaborate?

acje|10 months ago

There is some elaboration in part four of the series. A fifth part, on the actor model, gaps, and surfaces, is in the works. Part four: https://lnkd.in/dEVabpkN