It's good to see classic ISAs moving away from memory-protection 'rings' towards arbitrary 'zones', even if retrofitting it, e.g. with SMEP/SMAP, gives horrendous APIs and a nightmare to keep checked and balanced! ;)
The Mill comes at this from the other direction, starting with 'zones' (termed "turfs" in Mill jargon) and emulating 'rings' (if your kernel wants that) with overlapping access rights between turfs.
On the Mill you can have lots of turfs that may or may not have disjoint memory access, and you move between turfs synchronously with a special kind of indirect function call termed a "portal". There are provisions for passing specific transient rights to memory across these calls, so you can pass a pointer to a buffer, among other mechanisms that facilitate the usercopy() mentioned in the article, but with full hardware rather than software protection.
We have tightened the portal/turf concept extensively since the Security talk http://millcomputing.com/docs/#security but it does give a gentle high-level intro to turfs and portals.
These days we have facilities for passing buffers without exposing memory pointers, and other niceties that make it easy to write correct yet efficient code. It could all be made public now, but there's oh so little time; I'm hoping to get a white paper out about it by the end of this month. Watch this space ;)
Happy to elaborate if anyone has Mill or general questions :)
PS: an example of 'zoning' is http://elfbac.org/ , which is not getting enough attention. It's another way to facilitate memory separation, albeit by abusing the classic MMU and with inherent runtime cost. Elfbac is userspace, but the hardware could be abused to protect kernels on classic CPUs too. Well worth reading :)
Can you give us any hint of how long it will be until we can buy a Mill to play with?
From that security presentation, I left with the idea that you wouldn't want a Linux kernel on the Mill. You'd more likely want something smaller, with a Linux virtualization layer for device drivers. That's because the security layer is extremely flexible, making it possible to push a lot of kernel-space code into some less powerful context while keeping performance the same.
So are you working on a Linux port for it? (Maybe breaking it into pieces in the process?) Or do you intend to start with something else? (Maybe building up from a microkernel?)
I was excited about the Mill when I first read about it 5 years ago. But at this point, is there any reason to believe I'll ever get to write code for one? (Even via an emulator or something like that.)
(I can't/won't watch videos, so will have missed anything that was only in videos)
Looks pretty cool. Out of interest, have you considered adding support for something like the enclaves provided by Intel SGX, e.g. for cloud computing use cases?
The reality is that the majority of systems out there use distribution vendor supplied kernels. If you are in this camp, note that one of the best things you can do for kernel security in production is run a custom kernel with all of the features you don't need removed.
If you go this route, definitely consider grsec as well.
Reasonably tuning your kernel can also offer speed (e.g. via more specific CPU targeting), size, and - critically for embedded environments - startup time improvements.
> critically for embedded environments - startup time improvements.
This is so true. Back in the day when I was involved in embedded Linux development, the quickest boot-up time was about 40 seconds. That was booting a v2.6 kernel in a minimal configuration on an ARM7 system from SPI NAND flash. Hopefully boot-up times are sub-second by now. Are we getting to those speeds yet?
Since you mentioned majorities: how many Linux kernel systems out there belong to parties that can afford to maintain a custom-built kernel with things like grsec integrated?
Oh, wouldn't things be better if we had all that candy upstream?
One thing I routinely do is compile a monolithic kernel without loadable-module support, with exactly the drivers the hardware needs built in. That way, injecting a module into the running kernel should be a little harder.
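For illustration, the corresponding kernel config looks roughly like this (CONFIG_MODULES is the real mainline option; the driver options below are just examples - pick the ones your hardware actually needs):

```
# Disable loadable module support entirely: nothing can be insmod'ed
# into the running kernel.
# CONFIG_MODULES is not set

# Build the drivers the machine actually needs into the image
# (=y rather than =m), for example:
CONFIG_EXT4_FS=y
CONFIG_E1000E=y
```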
From the article's section on grsecurity: Linus added that this kind of problem is exactly why he has never seriously considered merging the grsecurity patch set; it's full of "this kind of craziness."
How do you reconcile that with suggesting people run this patch? If it were good, Linus would merge it. For me, the fact that it has existed for 10 years and _not_ been merged does not speak highly of its quality.
I feel that any non-kernel dev applying a patch to their kernel is the opposite of a good security recommendation. I'm not nearly as qualified to judge the tradeoffs between performance, security, and code quality as Linus and the kernel team. That's why I delegate the decision about what code goes in my kernel to them.
They mention the downside of address space randomization - it kills bug repeatability. The effort to reproduce a crash is much higher. The result is bugs closed with "cannot reproduce" comments from developers.
At least they're trying to reduce the attack surface. But the kernel is just too big.
Couldn't they do something that's "repeatably random"? So that in case of a bug, you can extract some information from your kernel about its current randomisation, and then another kernel can use this information to repeat your random layout.
E.g. use pseudorandom numbers, store the seed somewhere. In case of a bug, extract that seed, pass it on to the dev, and he'll run his kernel with that seed to reproduce.
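A toy sketch of the idea in Python (this is not how KASLR actually derives its offsets; `layout_offsets` and its constants are made up for illustration):

```python
import random

def layout_offsets(seed, regions=4, max_slide_pages=256):
    """Derive page-aligned 'random' slides deterministically from a seed."""
    rng = random.Random(seed)
    # One page-aligned slide per randomized region.
    return [rng.randrange(max_slide_pages) * 4096 for _ in range(regions)]

# Record the seed at boot; if it can be extracted from a crash dump,
# a developer can boot with the same seed and replay the exact layout.
assert layout_offsets(42) == layout_offsets(42)
```

The security caveat is that the stored seed is as sensitive as the layout itself, so it would have to be readable only from a crash dump, not from the running system.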
I know of a few "one time in a billion" bugs that became "one time in a hundred" bugs after randomization. Then they were fixed, instead of continuing to linger.
I didn't really get that argument. Specifically: do we know the number of bugs that are harder to debug because they now disappear, versus the number of bugs that were detected because KASLR breaks invalid assumptions? Maybe KASLR just exposed some ticking bombs in the code.
I've been experimenting with Tomoyo Linux lately. To me, it's the simplest LSM to reason about (although I have misconfigured it before). In the spirit of Russell Coker's SELinux play machine, I have an initial Tomoyo test machine that users may experiment with as root (uid 0). Feel free to ssh in and try it out. If you find an issue, or bypass Tomoyo somehow, please don't damage anything and let me know. Also, please no fork bombs. You don't need root to do that:
Aye, and I wouldn't be surprised if the authors of this stuff feel similarly listening to you talk to somebody about whatever your area of expertise is. Experts use acronyms amongst themselves because it's more efficient for communication. If you're interested in it, google that shit, read, and learn :)
Most of the stuff mentioned (except ro/no-exec) are poor band-aid, after-the-fact, just-in-case, security-by-obscurity solutions :( What's worse, some of them add a baggage of kludges and cost extra processing, all in the name of slowing down the attacker (not stopping, just slowing down). This is why Linus was mostly skeptical of, if not opposed to, this (and grsec).
This would be great, but unless you've got a budget for support, you probably shouldn't do it. When they say the free version is for testing, they mean it. It's probably less likely on servers, but on desktop it has a tendency to crash Xorg every few versions.
Prepare for the lovely blog post from the grsecurity author who is going to proclaim how he is so much smarter than everyone else, and that upstream Linux doesn't know what they're doing.
What a non-constructive, cynical, depressing response. Grsecurity is a good product, components of it have been added to the kernel without proper attribution, and although Brad can admittedly add drama, how is this different from the drama your post adds? Pot, meet kettle.
willvarfar | 9 years ago
marcosdumay | 9 years ago
lmm | 9 years ago
EdSharkey | 9 years ago
anonymousDan | 9 years ago
contingencies | 9 years ago
yitchelle | 9 years ago
mfukar | 9 years ago
cleeus | 9 years ago
antocv | 9 years ago
This stuff has been there in grsecurity patchsets for more than 10 (ten) years already.
cakeface | 9 years ago
omginternets | 9 years ago
Animats | 9 years ago
frederikvs | 9 years ago
tedunangst | 9 years ago
viraptor | 9 years ago
mfukar | 9 years ago
Here, have 3: https://twitter.com/R00tkitSMM/status/796617449823236096
(no, I didn't get confused, KASLR is not helping regardless of the OS)
w8rbt | 9 years ago
totalZero | 9 years ago
micaksica | 9 years ago
SMAP/SMEP: Intel/x86-specific security features. See http://j00ru.vexillium.org/?p=783 for an early (2011!) take on SMEP. (j00ru is great reading, in any case.) See https://lwn.net/Articles/517475/ for SMAP from LWN.
PAN: Privileged Access Never, basically ARM's equivalent of SMAP: https://community.arm.com/groups/processors/blog/2014/12/02/...
peller | 9 years ago
EDIT: See also: https://en.wikipedia.org/wiki/Curse_of_knowledge
rasz_pl | 9 years ago
NetOpWibby | 9 years ago
eugeneionesco | 9 years ago
viraptor | 9 years ago
acobster | 9 years ago
What is the argument here? Is there something about this randomization that distinguishes it from classic security through obscurity?
willvarfar | 9 years ago
Here are some excellent slides on exploit mitigation in general: https://events.yandex.com/events/ruBSD/2013/talks/103/
Off the top of my head, there are four approaches to stopping memory vulnerabilities:
1) have no bugs, e.g. formal verification etc
2) use a memory-safe language
3) accept that there can be vulnerabilities, and use exploit mitigation to harden it
4) capability-based addressing as a mitigation (it doesn't solve use-after-free, for example; it relies on software to do that etc)
Of these, (3) is the one you can retrofit to existing C/C++ codebases... a route you are usually forced to travel.
(There may still be other kinds of bugs, e.g. the obvious sql injections etc; I am talking above about memory bugs specifically)
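To make approach (3) concrete, here is a toy user-space analogue in Python (with made-up names) of one classic mitigation, the stack canary: the overflow bug still exists, but the corruption is detected before it can do harm:

```python
import secrets

CANARY = secrets.token_bytes(8)  # random per-run guard value

def guarded_write(buf_len: int, data: bytes) -> bytes:
    """Copy data into a buf_len-byte buffer guarded by a trailing canary."""
    guarded = bytearray(buf_len) + bytearray(CANARY)
    guarded[:len(data)] = data  # an overflow spills into the canary bytes
    if bytes(guarded[buf_len:buf_len + 8]) != CANARY:
        raise RuntimeError("canary smashed: buffer overflow detected")
    return bytes(guarded[:buf_len])
```

A real stack canary lives in the function's stack frame and is checked in the epilogue by compiler-generated code; this sketch only illustrates the "accept that the bug exists, catch the exploit" mindset of approach (3).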
commenting | 9 years ago
tonyplee | 9 years ago
(Good old ZoneAlarm type GUI / workflow would be very nice.)
* Report and block if someone is able to run any kind of privilege-escalation exploit.
* Report and block if any non-whitelisted app attempts to make a network connection to the internet or any external IP.
* Report and block if any non-whitelisted app or script tries to run and execute any program.
SELinux seems to claim it can do most of these, but the barrier to entry for setting it up and using it effectively is high (at least for a noob like me).
mixedCase | 9 years ago
llihs0 | 9 years ago
[deleted]
sctb | 9 years ago
bronson | 9 years ago
cyphar | 9 years ago
/sigh
voidz | 9 years ago
nwmcsween | 9 years ago