> CHERI changes this. Every load or store instruction and every instruction fetch must be authorized by an architectural capability. Load and store instructions take a capability as the base instead of an integer address. The capability both identifies an address and grants access to a range of memory (including permissions, such as read, write, or execute).
Wow.
> When malloc or operator new returns a new object, it will subdivide one of these capabilities and provide a pointer that gives access to precisely one object. No mechanism in the system can turn that into a pointer to another object.
Impressive. This is a game-changer; it will eliminate a whole class of side-channel attacks.
It will also effectively destroy performance for general-purpose computing. Aside from secure enclaves and certain specific computing usages, it doesn't seem to offer much for day-to-day security. However, I do think it can be useful for some kernel/systemland operations if the additional circuitry does not have too much overhead.
"In the case of CHERI, this was to change the user-visible abstract machine exposed by hardware in a way that hasn’t been done for mainstream hardware since the PDP-11." - the authors should take the time to familiarise themselves with the Burroughs B6700 (designed in the 1960s), which provided tagged memory and, through its descriptor mechanism, something similar to CHERI's architectural capabilities.
The PDP-11 minicomputer was low-cost and undoubtedly successful for its time, but it is a low bar for architectural sophistication, hardly an exemplar of the state of the art in computer architecture, particularly in terms of memory design.
CHERI is a welcome development in producing safer systems, but it is packaging ideas that have been around for 50+ years. Long overdue, of course, but hardware costs have finally commoditised to the point where these ideas can be baked into mass-produced hardware.
It's too bad that 286-style 16-bit protected mode (and the iAPX 432 before it) was so shit on at the time. We had the potential to make these concepts ubiquitous, but collectively dismissed the protection mechanisms as intrinsically too slow.
It feels from the outside like there was a decent-sized faction at Intel in the '80s pushing to give us a hardware object-capability system, but they lost the political battle, and by the time AMD designed long mode, the last remnants were swept away.
I wish the 386's 32-bit protected mode had been structured so that GDT entries had a bit to optionally point at their own page tables, rather than just holding base addresses into a single global page table. It would have encouraged these techniques in commodity systems 30 years ago.
Hell, we might not even have had Spectre, and we would at least have had better tools to address it, if we had that plus rings 1 and 2 still usable. User and kernel space would have had the ability to describe untrusted data to the MMU. It feels like we're just pretending that NetSpectre isn't a thing and that Spectre is somehow only an issue with untrusted code.
I'm hugely optimistic about CHERI, or similar approaches if this fails to get traction.
We get the biggest security benefits when we rule out entire classes of vulnerability and take away the need for developers to care about them, e.g. memory-safe languages. It's particularly good that CHERI works with C and C++ code at the source level.
No offense to software developers, but the incentive to push code out as quickly as possible simply doesn't align with the need for security, even if the developer understands all the potential issues.
This might be hyperbolic, but I worry that this will further cement the stranglehold that Big Tech has on us, broadly speaking, as hackers. Remote attestation, hardware-backed DRM, TPMs and other Microsoft-favoured technologies exist to wrest control of a computer away from the user with physical access and hand it to "the rights holder" or "the IT department".
Say what you want about memory vulnerabilities (they're a bad thing!), I secretly like the fact that everything has an implementation bug for a determined enough attacker. It means that if you own a device and have unrestricted physical access to it -- something that only really happens if you own a device -- then you're likely able to bend its software to your will. That's what hacking is about, after all!
I worry that this architecture will end up in HDCPv3 chips, or as an embedded SoC (or chiplet) in the next x86 processors, constantly checking that everything is okay. People say it's paranoid to think this leads to everything being gated behind trusted computing, but as every rooted Android user knows, it's not.
I wonder if this mechanism could also be used for bounds checking when accessing dynamically sized collections. That would be a boon for C/C++ as well as safe languages.
But I guess it cannot, because while the underlying allocation may have space for 100 objects, the high-level container might only have initialized the first 10, for example.
dang | 4 years ago:
Arm releases experimental CHERI-enabled Morello board - https://news.ycombinator.com/item?id=30007474 - Jan 2022 (81 comments)
pjmlp | 4 years ago:
As for CHERI, I have been following the research from afar and it is great to see how far they have managed to come.
mzs | 4 years ago:
https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri...