item 34532373

Fun with Gentoo: Why don't we just shuffle those ROP gadgets away?

130 points | crtxcr | 3 years ago | quitesimple.org

80 comments

[+] atlgator|3 years ago|reply
I remember my Gentoo days freshman year in college. I spent more time compiling updates than actually using the computer.
[+] gerdesj|3 years ago|reply
I used to keep using the boxes whilst steam billowed out the sides until things started crashing.

I recall gcc3 -> 4. The prevailing "wisdom" was emerge --deep (etc) world ... twice! My laptop was left for around a week trundling through 1500 odd packages. I think I did system first, twice too. I left it running on a glass table in an unheated study, propped up to allow some better airflow.

One of the great things about Gentoo is that a completely fragged system doesn't faze you anymore. Screwed glibc? Never mind. Broken python? lol! Scrambled portage? Hold my beer.

I have a VM running in the attic that got a bit behind. OK it was around eight? years out of date. I ended up putting in a new portage tree under git and reverting it into the past and then winding it forwards after getting the thing up to date at that point in time. It took quite a while. I could have started again but it was fun to do as an exercise.

[+] batman-farts|3 years ago|reply
These days my 5950X can get through some of the big scary packages quite rapidly. Firefox is done in about 8 minutes, a new point release of Rust seems to take about 15.

I still haven’t decided whether or not I should be embarrassed that I mainly bought a 16-core CPU to run Gentoo.

[+] TylerE|3 years ago|reply
I remember installing from stage1 on a 1 GHz-ish single core. Just something like KDE 2 would take hours, and that's not even counting the dependencies. Anything bigger than a command-line tool was something you'd kick off before going to bed and pray it didn't error. (Spoiler: it almost always did)
[+] dzmien|3 years ago|reply
I do all world updates overnight for this very reason. But on my R5 3600, the longest emerge is, by far, qtwebengine, which takes just under 1.5 hours. Plus, Gentoo provides -bin versions of many packages notorious for protracted build times, such as Rust, Chromium, Firefox, etc...
[+] eMPee584|3 years ago|reply
Same thing for me. It was 2003, and Gentoo was a really good entry vehicle into Linux.
[+] aquafox|3 years ago|reply
I remember praying before every 'emerge -uDav world' that I wouldn't have to spend the next 2 hours fixing my system.
[+] flatiron|3 years ago|reply
College was some good distcc days though. Everyone in my off-campus house ran Linux, and they were dumb enough to distcc me. Debian, Red Hat 9 (non-RHEL), and Slackware were the other popular distros at the time. My school ran on Solaris.
[+] kakwa_|3 years ago|reply
As a student, I actually put an overheating PowerBook G4 in a fridge just to finish an install.
[+] account42|3 years ago|reply
How? Were you watching the compile output? Because you don't need to spend much time when your computer is doing all the work.
[+] jchw|3 years ago|reply
I like this idea. I have an idea for something that would be cool, if impractical: Imagine a GCC wrapper that doesn't actually link, but produces a bundle that performs the linking in randomized order in realtime and then runs.

I think that you could do this quite well on NixOS, and I'm now intrigued to try to rig up a proof-of-concept when I can find the time.

Side-effect: Does not work for libraries without a significantly more complex wrapper that certainly could not work for all libraries. Though, you could re-order the objects within a static library fairly easily.
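A minimal Python sketch of the shuffling half of such a wrapper. The file names and the `cc` driver are placeholders, and the self-relinking "bundle" machinery is left out entirely; the point is just that the gadget layout follows the object order:

```python
import random
import shlex

def build_link_command(objects, output, linker="cc"):
    """Shuffle the object-file order just before invoking the linker.

    In the wrapper described above this step would run at program
    start-up (the bundle re-links itself), but the core trick is the
    same either way: a fresh permutation yields a fresh gadget layout.
    """
    shuffled = random.sample(objects, k=len(objects))  # new order each run
    return [linker, "-o", output, *shuffled]

objs = ["main.o", "parser.o", "net.o", "crypto.o"]
print(shlex.join(build_link_command(objs, "app")))
```

The same objects are always passed, only their order changes, so the resulting binary is functionally identical (modulo the C++ initialization-order caveat raised further down the thread).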

[+] xxpor|3 years ago|reply
That'd make process startup EXTREMELY slow
[+] vlovich123|3 years ago|reply
I wonder if just shuffling it on every release (even minor) isn't sufficient (and actually even publishing that order). That doesn't have the full security benefit (attackers have a finite set of options), but it keeps reproducible builds and the ability to distribute pre-linked binaries while raising the attack complexity significantly, since no two machines are likely running the exact same version. That means an exploit has to try several different versions. Taking this a step further: create N randomly sorted link-order copies per version and randomly distribute those. Now the space to search through is large, and the probability of picking the correct gadget variant goes down with 1/(M·N), where M is the number of releases being attacked and N the number of variants per release that might be installed (a targeted attack on a specific version only gets 1/N). Additionally, deterministic builds maintain your ability to audit binaries and their provenance fairly easily (the work only grows linearly), while the chance of noticing the attempt without a successful exploit is (N-1)/N.

I’m not saying it’s perfect but it seems like a reasonable defense for binary distribution. As someone who used to run Gentoo, I’d say most people are in favor of the faster times to install a new package.

EDIT: extending this idea further, I wonder if compilers couldn't offer a random seed option that causes a random layout of the sections within the built executable, so that even statically linked binaries benefit from this.
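The probabilities in the comment above can be spelled out with illustrative numbers (the values of M and N here are assumptions, not anything from the thread):

```python
M, N = 5, 8  # hypothetical: 5 releases in circulation, 8 link-order variants each

# Untargeted attack: the attacker must guess both the release and the variant.
p_untargeted = 1 / (M * N)
# Targeted attack on a known release: only the variant is unknown.
p_targeted = 1 / N
# Chance a single failed attempt is observable (wrong variant crashes first).
p_noticed = (N - 1) / N

print(f"untargeted: {p_untargeted}, targeted: {p_targeted}, noticed: {p_noticed}")
```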

[+] notpushkin|3 years ago|reply
For binary distributions, how about shipping object files and linking them on install with mold? This should be faster than compiling from source, just marginally slower than installing pre-linked binaries, and each build will be as unique as it gets.
[+] saagarjha|3 years ago|reply
This is generally less useful with automatic updates for security patches because then you do want everyone to be running the same, latest, version.
[+] somat|3 years ago|reply
Openbsd also puts a fair amount of work into removing ROP gadgets.

For example.

https://marc.info/?l=openbsd-cvs&m=152824407931917

[+] rtev|3 years ago|reply
Very cool, thank you for sharing! Not only does ROP facilitate traditional binary exploitation, but it’s also used in cutting-edge evasive techniques. By abusing ROP instead of direct calls, red teamers are able to heavily obfuscate activities from endpoint detection and response.
[+] rtepopbe|3 years ago|reply
Uh, yeah... The post opens with a mention of being inspired by OpenBSD and goes into some detail on differences between their approach and OpenBSD's throughout.
[+] saagarjha|3 years ago|reply
Though, much less effective than reordering gadgets.
[+] ShredKazoo|3 years ago|reply
Lack of reproducible builds seems like a big cost here.

I wonder if there's a way to do just-in-time random relinking such that the performance cost is low, but the security benefit is still strong.

Just-in-time gets you reproducible builds, and also addresses the "local attackers who can read the binary or library" problem.

There would be a performance cost in terms of startup time, but since the number of possible permutations is a factorial function of the number of possible linking orders, it seems like even a very coarse-grained random relinking can go a long way.

You could accomplish this by doing static analysis of a binary to generate a file full of hints for ways to rewrite the binary such that its behavior is provably equivalent to the original. Then there could be a wrapper (perhaps at the shell or OS level) which uses the hints to randomly relink on the fly just prior to execution.

Another advantage is that this approach should be feasible on an OS like Ubuntu where everything is precompiled.

However the static analysis part could be a little tricky? I'm not familiar with the state of the art in static analysis of compiled binaries.

Performance-sensitive users could be given a way to turn the feature off, in cases where fast startup time was more important than security.
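A quick illustration of the factorial growth mentioned above: even a very coarse-grained relink, which only permutes k independently movable groups of objects rather than every object, already gives k! distinct layouts (the group counts here are arbitrary):

```python
from math import factorial

# k movable groups -> k! possible link orders
for k in (5, 10, 20):
    print(f"{k} groups -> {factorial(k)} possible layouts")
```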

[+] lxgr|3 years ago|reply
Do reproducible builds even matter if you're building/linking and executing a binary on the same system?

The biggest benefit seems to be in making it infeasible/dangerous for a malicious actor to distribute binary versions containing different behavior from the published source.

On a local machine, when and with what would you compare your binaries?

[+] phkahler|3 years ago|reply
>> As a side-effect, reproducible builds, which this technique breaks, are less of a concern anyway (because you've compiled your system from source).

Reproducible builds verify the source code and build process (including options) were the same. Not sure how important each aspect is.

Also, if for some reason you rebuild a dependency, you'll need to relink everything that depends on that. This could get messy, but it's still interesting.

[+] withinboredom|3 years ago|reply
Isn’t it impossible to have truly from-scratch reproducible builds? IIRC, you have to trust the compiler which can’t be built from scratch.
[+] Hydraulix989|3 years ago|reply
Why? If the dependencies are dynamically loaded libraries it shouldn't matter?
[+] cbrozefsky|3 years ago|reply
Control over the RNG seed, and tracking that seed as an 'input', would be a way to get reproducible builds while still having randomization.
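A minimal sketch of that idea: treat the seed as a recorded build input, and the shuffle becomes reproducible.

```python
import random

def shuffle_objects(objects, seed):
    """Deterministically shuffle the link order. If the seed is recorded
    alongside the other build inputs, anyone can reproduce the exact
    same permutation, and hence bit-identical output."""
    rng = random.Random(seed)
    order = list(objects)
    rng.shuffle(order)
    return order

objs = ["a.o", "b.o", "c.o", "d.o"]
print(shuffle_objects(objs, seed=42))
print(shuffle_objects(objs, seed=42))  # identical: same seed, same order
```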
[+] frankjr|3 years ago|reply
I'm guessing "dev-libs/openssl shuffleld" should go into "/etc/portage/package.env" instead (in the appendix).
[+] lucideer|3 years ago|reply
> The potential issue comes from the assumption that all .o files will be given continuously in the command line. The assumption appear to hold, but could blow up down the road. But well, it's hack.

Other than this issue (which may well be a large / unsolvable one), I wonder what other disadvantages to this approach there might be. Does this hack have any potential for a Gentoo profile or mainlining?

[+] matzf|3 years ago|reply
Don't try this with C++, unless you're certain that there are no interdependencies or side-effects in global variable initialisation. The link order (usually) affects the order in which initialisers are executed.
[+] Asooka|3 years ago|reply
On the contrary: do do this and if you observe your program crashing due to linking order, fix the damn bug.
[+] londons_explore|3 years ago|reply
Does the C++ spec guarantee initialization order? Or is any application that depends on it relying on undefined behaviour?
[+] gigel82|3 years ago|reply
How does this work with dynamic libraries (shared objects)? In Windows land, you get a .lib with a .dll, and afaik that has hardcoded function addresses. You statically link the "import library" .lib with your exe, so if you randomize the function addresses and rebuild just the .dll later, it blows up (you'd need to rebuild all the exes as well).

Is dynamic linking in the Unix world truly runtime-only (a la "LoadLibrary" / "GetProcAddress")?

[+] account42|3 years ago|reply
Unix/ELF doesn't have separate .lib and .dll files; you link directly against the .so (or a linker script, but those are typically only used for special system libraries). The main thing this does is record the name from the DT_SONAME field of the .so as a required dependency in your binary.

But I also don't think that this would be a problem on Windows. After all, you can generally replace DLLs with entirely different versions and you'll be fine as long as all the required symbols are present and ABI-compatible.

The main difference between ELF and PE dynamic linking is that with PE you have a list of required symbols along with the libraries to load those symbols from, while with ELF you have a list of required libraries and a list of required symbols, but no information is recorded about which symbols should come from which libraries.
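To the question above: ELF supports both load-time linking (ld.so resolves the DT_NEEDED list before main runs) and explicit runtime lookup via dlopen/dlsym, the direct analogue of LoadLibrary/GetProcAddress. The runtime path can be demonstrated from Python's ctypes; the "libm.so.6" fallback name is a Linux assumption:

```python
import ctypes
import ctypes.util

# dlopen: locate and load the math library at runtime.
name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(name)

sqrt = libm.sqrt                  # dlsym: resolve the symbol by name
sqrt.restype = ctypes.c_double    # declare the C signature for ctypes
sqrt.argtypes = [ctypes.c_double]
print(sqrt(2.0))
```

Nothing about sqrt's address is baked into the caller here, which is why reshuffling the library between runs is harmless for this lookup style.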

[+] hermitdev|3 years ago|reply
One gap in this approach: gcc can use argument files (you pass a file that contains the actual arguments). I've only really seen this with build systems that expect to handle large numbers of arguments that won't fit on the command line. Still, something to be aware of.
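For reference, an argument of the form `@file` is replaced by the whitespace-separated arguments read from that file, so a shuffling wrapper would have to expand these before it could reorder the .o files. A simplified sketch (ignoring quoting and nested @files; the `objs.rsp` name and in-memory "filesystem" are illustrative):

```python
def expand_argfiles(argv, read_file):
    """Expand gcc-style @file response-file arguments in argv.
    read_file maps a file name to its contents."""
    out = []
    for arg in argv:
        if arg.startswith("@"):
            out.extend(read_file(arg[1:]).split())
        else:
            out.append(arg)
    return out

fake_fs = {"objs.rsp": "main.o util.o net.o"}
args = expand_argfiles(["-o", "app", "@objs.rsp"], fake_fs.__getitem__)
print(args)  # ['-o', 'app', 'main.o', 'util.o', 'net.o']
```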
[+] crtxcr|3 years ago|reply
I'll keep an eye on that, thx!
[+] yazzku|3 years ago|reply
Deep feels from that web design. Simple, aesthetic, functional.
[+] ngneer|3 years ago|reply
Why not prevent control transfer to the ROP gadget?
[+] PeterisP|3 years ago|reply
Because we are unable to do that, and we've tried for decades.

There are all kinds of things we're doing (e.g. rewriting things in memory-safe languages) to make it less likely that an attacker gains control of a jump somewhere, but we don't expect to fully succeed any time soon, and this is defense in depth for the cases where attackers once again find a way to transfer control to some arbitrary gadget.

[+] kwhitefoot|3 years ago|reply
ROP gadgets?
[+] Karellen|3 years ago|reply
https://en.wikipedia.org/wiki/Return-oriented_programming

> Return-oriented programming (ROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security defenses[1][2] such as executable space protection and code signing.[3]

> In this technique, an attacker gains control of the call stack to hijack program control flow and then executes carefully chosen machine instruction sequences that are already present in the machine's memory, called "gadgets".[4][nb 1] Each gadget typically ends in a return instruction and is located in a subroutine within the existing program and/or shared library code.[nb 1] Chained together, these gadgets allow an attacker to perform arbitrary operations on a machine employing defenses that thwart simpler attacks.