top | item 31176138

SELinux is unmanageable; just turn it off if it gets in your way

491 points | HyphenSam | 3 years ago | ctrl.blog

444 comments

[+] fefe23|3 years ago|reply
The problem is not so much that selinux is too complicated (it is as complicated as it needs to be), but that we all run software we don't understand.

The whole IT ecosystem has become a hail mary. Even admins usually have no idea what a certain program actually wants to do. If the admin knows how to install the app so that it actually runs, you call them a good admin.

From a security point of view, an application is like a nuclear power plant. It's good if it works as planned, but if something blows up it endangers your whole enterprise.

The whole container movement can be seen as putting the apps in a sarcophagus like Chernobyl. That way the radiation hopefully stays in, but history has shown that it really doesn't. Also, the wheel of history has just turned one more iteration and now admins just view the sarcophagus as something you deploy as you previously deployed the app. Who is responsible that it is air tight? Well, uh, nobody, really.

You can't even blame the applications for that. Let's say you want to build a good, secure application. How do you know what files your application will try to open? What syscalls it wants to call? Library and framework functions tend to not document that properly.

Obscure files like /etc/localtime, /etc/resolv.conf, /etc/ld.so.conf, /dev/zero ... how can you expect devs to build well documented and well sandboxable applications if they don't know which files their library functions will open?

You may have heard of /etc/resolv.conf ... but have you heard of /etc/gai.conf? /etc/nsswitch.conf? /etc/host.conf? Wouldn't it be great if the man page of getaddrinfo mentioned those (mine only mentions gai.conf)
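
Lacking docs, you can at least discover the set empirically. A sketch, assuming strace and getent are installed (the exact file list varies by libc and configuration):

```shell
# Empirically list which /etc files a name lookup actually opens.
strace -f -e trace=openat getent ahosts example.com 2>&1 \
  | grep -o '"/etc/[^"]*"' \
  | sort -u
```

On a glibc system this will typically surface nsswitch.conf, resolv.conf, host.conf, and gai.conf, none of which the getaddrinfo man page fully enumerates.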

[+] lucideer|3 years ago|reply
> The problem is not so much that selinux is too complicated (it is as complicated as it needs to be)

I disagree.

> but that we all run software we don't understand.

I fully agree.

My disagreement lies in the fact that you've described the problem, but are proposing that some software (SELinux) that fails to solve the problem is somehow good.

SELinux might be a perfect tool in an ideal utopia where everyone understands all the software they run, but that isn't the real world, and having a tool that works in a theoretical world isn't particularly useful.

Either SELinux is applicable to the real world, or it's not useful and not really fit for purpose.

This isn't a "blame game" - it's not about figuring out why things are bad (no-one understands their software components), nor blaming SELinux (it hasn't caused these problems, it's just failing to mitigate them). It's about figuring out how to improve that situation. Does SELinux do that effectively?

[+] tristor|3 years ago|reply
Honestly, the bigger issue is that most SWEs just aren’t very good. It’s extremely telling that when you spend time in tech forums most people dread system design questions as the harder side of interviewing for senior level SWE roles…

System design, though, is the actual point of SW ENGINEERING. That’s the part that is responsible for creating a foundation of quality to build on.

The other side is that sysadmins have largely become DevOps or SRE roles, which means most folks left as sysadmins are those who couldn't hack it in the other roles. In the end, as we grow the number of people in tech, the number of software systems, and the complexity, we experience a regression to the mean across the board.

It's always been a pet peeve of mine that I can count on one hand the number of SWEs in my ~20yr career who had a similar understanding of the systems they worked on as their SRE counterparts… but this should be table stakes.

[+] lazyier|3 years ago|reply
> The problem is not so much that selinux is too complicated (it is as complicated as it needs to be), but that we all run software we don't understand.

There are many many problems.

One of the biggest problems with SELinux is that it is trying to graft Mandatory Access Controls onto a userland that is not designed for it.

Unix, frankly, is not designed for security. It is designed to get work done by writing a bunch of little buggy C programs that you string together in novel ways.

Security is something that was grafted on it. And it shows.

How many decades of security vulnerabilities have occurred because of a shared /tmp space? 40 years?

How do you graft access controls on a system designed with no access controls and potentially billions of combinations of programs, paths, and various other resources without breaking anything?

The answer is: You don't. You can't.

Whereas when you have a system designed for security, like Android, literally hundreds of millions of fully SELinux-enabled, fully locked-down, user-facing Linux devices are out there being used by people who haven't the faintest clue what "audit2allow" is and wouldn't understand it if you tried to explain it to them.

So it's less of an issue of "we all use software that we don't understand". SELinux is complicated enough that you can devote your life to trying to understand it and still fail to craft good rules for other people.

It's more of an issue of "Linux userland follows the basic Unix design from the 1970s which is kinda shit if your goal is security".

It is just bad design. Pure and simple. That is all there really is.

However there is a way out.

The way out is to give each process and each user their own little itty bitty special Unix environment where they can do whatever they want. And then you use strong divisions to keep them mostly unaware of each other. Use a default deny policy and only poke holes in it when required.
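
On a systemd distro you can sketch that approach today with per-unit sandboxing directives. This is a hand-wavy illustration of default deny, not a complete policy, and /usr/bin/mytool is a made-up workload:

```shell
# Run a command in its own minimal, default-deny environment (root required).
systemd-run --wait \
  -p PrivateTmp=yes \
  -p ProtectSystem=strict \
  -p ProtectHome=yes \
  -p PrivateDevices=yes \
  -p NoNewPrivileges=yes \
  -p IPAddressDeny=any \
  /usr/bin/mytool
# PrivateTmp= alone addresses the decades-old shared-/tmp class of bugs;
# IPAddressDeny=any is the default deny, punch holes with IPAddressAllow=.
```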

[+] Damogran6|3 years ago|reply
A single page of instructions from Google Apps has me download a tarball, add a hello-world command, and Presto! I have a webapp on my computer! Then change this one line and PRESTO! My webapp is in the cloud!!

No telling how, or with what components, or what the dependencies or security implications are, or how to see the logging for the app. Just a single paragraph of instructions and you can be the next internet startup IPO!

'Hail Mary' is the best way to describe the situation I've yet heard.

[+] icedchai|3 years ago|reply
Containers are more like a trash bag. Nobody expects it to be air tight, just good enough to make it to the dumpster.

I always felt containers were about packaging and deployment, not security. Any "security" was a byproduct of isolation, not an end goal.

[+] beefield|3 years ago|reply
I would argue that the fundamental problem is that the companies selling software "engineering" products do not actually take any responsibility that the product they have engineered works as intended. (see: https://www.snopes.com/fact-check/car-balk/ )

And of course, the main reason they do not take the responsibility is that the customers won't pay for it.

It is kind of interesting. We are very good at making bridges that do not fail unexpectedly, even though there is an unlimited number of unknown failure modes when working with physical materials. On the software side, well, managing to come up with a fizzbuzz without failure modes is used to screen people in interviews. What would be the equivalent for bridge engineers? Here is a hammer, a nail, and two pieces of wood. Can you make the two pieces of wood stick together?

[+] legulere|3 years ago|reply
The problem is that we have built all our software on Unix. The Unix security model is based on an attack model where users have to be protected from each other on a terminal server. Code is implicitly trusted and exploits were an unknown unknown.

That security model is almost completely useless now. Terminal servers are an extreme edge case. Services implement their own security model between users. Special “Users” for services is just a hack that tries to use an inadequate system as much as possible.

The Unix security model is deeply baked in everywhere, and it's nearly impossible to tack on a security model that fits today's requirements after the fact.

[+] dale_glass|3 years ago|reply
Some of the problem is that historically we've built systems badly engineered for security.

Take for instance something like xscreensaver. Something in there needs to be setuid so that it can verify your password to unlock the screen. That something is fortunately a dedicated binary, and not every single screensaver, but still, that's bad. Writing that executable is a delicate thing. Get one of them wrong, and it's a glaring hole.

And /usr/libexec/xscreensaver/xscreensaver-auth is quite the thing. It links to a whole bunch of graphical libraries and talks to X11 to read your password, so that thing has a huge potential attack surface. Far more than seems comfortable.

What we should have instead is some sort of authentication service. A program confined by SELinux to only interact with a socket and the password database, speaking a simple but well designed protocol.

With that in place we'd have much simpler security. Policies get simpler because random stuff doesn't link to PAM anymore, and so doesn't end up touching a whole bunch of critical security related files. There's one thing on the system that deals with that, and it ends up with a small, understandable policy.

And the policy for the users is now expressed in terms of "Do we want this program to be able to validate user passwords?", which is also far simpler.

With that kind of redesign over time we could have much smaller and more understandable policies, and a more secure system as a result.

[+] throw0101a|3 years ago|reply
> The whole container movement can be seen as putting the apps in a sarcophagus like Chernobyl. […] Who is responsible that it is air tight? Well, uh, nobody, really.

The people who designed the container/sarcophagus system. If it's not secure don't sell it as secure: see the difference between Linux containers and FreeBSD jails or Solaris zones.

> You can't even blame the applications for that. Let's say you want to build a good, secure application. How do you know what files your application will try to open? What syscalls it wants to call? Library and framework functions tend to not document that properly.

As a developer / software "engineer" it is your responsibility to know how the components you choose work and how much you can rely on them. When a structural engineer selects certain types of steel or concrete to be used in the construction of a bridge, it is their responsibility to know the characteristics of the raw materials.

When you ship a software product, enable SELinux or AppArmor and audit what it does; then ship SELinux/AppArmor profiles.

See also OpenBSD's pledge(2) framework where the developer embeds system calls in their (C) code to promise to only access certain OS features:

* https://awesomekling.github.io/pledge-and-unveil-in-Serenity...

* https://man.openbsd.org/pledge.2

[+] kllrnohj|3 years ago|reply
Ideally all apps & libraries would ship their own selinux policies with a common framework for combining them & base layer for privilege sets (eg, "let me open any file the current user can open" or whatever). If that was the case then your concern wouldn't be an issue. You'd just say "I use libc sockets" and you'd inherit whatever file path permissions are necessary for that to work, as defined & exported by the libc in question.

But that's not a thing. So distros are attempting to add it later themselves, which is a disaster.

[+] nisa|3 years ago|reply
> Wouldn't it be great if the man page of getaddrinfo mentioned those (mine only mentions gai.conf)

That's a huge part of the problem. I was looking for a monograph to pass on to my successor, who wasn't so familiar with Linux. Modern Linux (as in systemd, docker, nsswitch, pam_homed, network-manager) is penetrable, but more often than not I'm just looking at the C source on GitHub. I am in my 30s, have been running Linux nonstop since my teens, and I'm still throwing up my hands every other day...

Add time constraints and pressure to deliver ("don't waste your time understanding this, just do xyz") and here we are. On the other hand, it's not okay that you have to devote years of trial and error to get a comprehensive understanding of the system.

I guess a nuclear power plant at least has a training plan and a complete reference book. That still needs to be read and grokked and trained upon, but modern Linux is often only understandable by reading the source when you hit a problem, and for some projects, like things in the freedesktop ecosystem and partly systemd, even that is kind of difficult. For example, this bug https://github.com/systemd/systemd/issues/19118 illustrates the problem: there is no overview documentation, no clear way to look up what's happening, not even a good way to introspect, and it's several components interacting with each other that fail subtly. I've chosen this one because I've also hit it, not because I want to blame systemd, which is not so bad (there are much worse things out there), but it's part of a trend of introducing complexity, and I'm not sure what of it is necessary and what is unnecessary. I went through uni in compsci without anyone ever slapping "Out of the Tar Pit" (http://curtclifton.net/papers/MoseleyMarks06a.pdf) in my face, and maybe that's part of the problem.

[+] ptidhomme|3 years ago|reply
> Let's say you want to build a good, secure application. How do you know what files your application will try to open? What syscalls it wants to call?

This almost looks like a hint to OpenBSD's pledge and unveil system calls.

I'm just a hobbyist, but regarding

> we all run software we don't understand

what I like (again) in OpenBSD is that I feel I can largely understand/control what's happening in my system, it quite fits in my head.

Just my 2 cents.

[+] throwaway894345|3 years ago|reply
> The problem is not so much that selinux is too complicated (it is as complicated as it needs to be), but that we all run software we don't understand.

I don't think these statements are meaningfully different. "too complicated" implies "...for humans to manage". Maybe that's sort of your point?

> You can't even blame the applications for that. Let's say you want to build a good, secure application. How do you know what files your application will try to open? What syscalls it wants to call? Library and framework functions tend to not document that properly.

Agreed. It's too burdensome for software developers to understand exactly what syscalls their program needs to make and the security implications of permitting those syscalls. It also doesn't help that Linux naming conventions and concepts are very counterintuitive (yes, dear Kernel Hacker, I'm sure they're very intuitive to you, but we lowly mortals struggle).

And unfortunately the SELinux policies are tightly coupled to the application such that you can't make SELinux policies the purview of a dedicated sysadmin expert and leave the appdev to the development teams. They have to collaborate which is swimming against the current of Conway's Law or else you make SELinux policies the responsibility of appdev and suffer their lack of expertise.

We had similar problems with operations in general, but containers largely solved this problem by allowing sysadmins to focus on a platform while developers focus on developing and operating the application. We need something similar for security. This is probably a rephrasing of your "sarcophagus" point?

[+] marcosdumay|3 years ago|reply
> Obscure files like /etc/localtime, /etc/resolv.conf, /etc/ld.so.conf, /dev/zero ... how can you expect devs to build well documented and well sandboxable applications if they don't know which files their library functions will open?

Who the fuck invented that convention that fine-grained permissions must be file-based? It's insane. No, no developer will anticipate that he needs to read /etc/nsswitch.conf. No developer should. A developer should anticipate that the software needs permission to connect on network hosts.

But while file-based permissions are too granular in that sense, they also aren't fine-grained enough. Asking to connect to random hosts is absurdly wide; most programs only need to connect to a few of them, or to a user-supplied host (which can be a permission by itself).

Anyway, yes, the man pages should interlink better. Which is a different issue.

[+] goodpoint|3 years ago|reply
> The whole container movement can be seen as putting the apps in a sarcophagus like Chernobyl

Reminder: containers are not meant to be security tools.

Fine-grained sandboxing (e.g. seccomp) is. And it can be layered upon OS-level VMs.

Additionally, bundling tons of stuff together with an application (as both docker and flatpak do) is not good for security. Same for static linking. They all increase the workload of updating vulnerable dependencies.
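
As one concrete sketch of that layering (assuming a systemd distro; /usr/bin/mytool is hypothetical), you can impose a seccomp allowlist on a single unit without the app's cooperation:

```shell
# SystemCallFilter= installs a seccomp filter for the unit (root required);
# @system-service is systemd's predefined allowlist for typical daemons.
systemd-run --wait -p SystemCallFilter=@system-service /usr/bin/mytool
```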

[+] resonious|3 years ago|reply
I've been recently thinking along similar lines about libraries. In web dev, it's common practice when you have a problem to first look for a library that solves it for you.

I get the "don't reinvent the wheel" sentiment, but I think we take it too far sometimes. I've been looking at the source code of some dependencies at work lately, and many of them don't actually hold up to our own code quality standards. That's subjective, yes, but many of our dependencies would probably fail code review if we actually reviewed them.

Then when there is a bug in a dependency, nobody actually understands how it works and the code is often tucked away and not easily changed.

[+] MarkusWandel|3 years ago|reply
/etc/resolv.conf is a funny one.

Used to be, you could update it manually. Then NetworkManager (or whatever its successor is) came along and would change it back on you. Oh well, you could still edit it for a quick test.

Then I set up a Wireguard tunnel. And it changed /etc/resolv.conf. Oh, but an entry is missing, I'll just add that and test. Nope, read-only. But I'm root! What gives?

Turns out the wg-quick script mounts a file on top of /etc/resolv.conf! I didn't even know that was possible until I saw it. Nobody messes with wg-quick's resolv.conf, and that's final! Until some other tool ups the ante and gets code to undo that.
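
A quick way to see who currently "owns" /etc/resolv.conf (findmnt ships with util-linux):

```shell
ls -l /etc/resolv.conf     # a symlink points at systemd-resolved/NetworkManager
findmnt /etc/resolv.conf   # any output means something (e.g. wg-quick) has
                           # mounted a file over it, which is why it's read-only
```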

[+] TheCondor|3 years ago|reply
>>The problem is not so much that selinux is too complicated (it is as complicated as it needs to be), but that we all run software we don't understand

There is truth to this, and it nails the fundamental asymmetry of the bad guys vs. good guys in the security war. To program SELinux you need to understand the software you run at the syscall level, and potentially have a deep understanding of its file usage, particularly if it's doing IPC.

In general, I think that is a good goal. More understanding is more understanding, and that is good. In practice? I equate it to the problem of writing secure and robust code in C: I don't know how good you have to be to do it, and I basically assume that anyone who says they do is full of shit. I have contributed to the Linux kernel, I have decades of UNIX and specifically Linux experience as a software engineer, and I am still surprised when I fire up strace from time to time. Look at something like the recent Dirty Pipe bug: I have a difficult time accepting that many people can fully grasp it all. The cost of a fairly simple system interface is all the subtlety and edge cases.

[+] beached_whale|3 years ago|reply
I view the container movement as a way of managing interactions. It's really hard to manage dependencies in such heterogeneous systems. Containers simplify it all, at the cost of owning one's full dep tree. But with a good release process, one can keep on top of that and put less effort into the combinatorial growth when interacting with all the other dependencies other programs require.

[+] StillBored|3 years ago|reply
Containerization is less about security these days, and is being used to work around the broken package/dependency management situation on Linux, which is caused by the wild west of libraries that make little effort to assure any kind of forward/backward compatibility. Given that distro package maintainers can't maintain 30 different versions of any given package to satisfy all the applications stuck on a particular library version, larger applications that can't constantly churn their code bases to replace foo(x,y) with foo_bar(y,x) are stuck shipping their own pinned version of a library.

So, I don't think anyone seriously thought that containers provided significantly more security than simply running an application and trusting the kernel syscall and filesystem permissions were bulletproof. More software layers are unlikely to improve security.

[+] 0xbadcafebee|3 years ago|reply
It's the forward march of progress. As things get more complex they also get more fragile and inscrutable, and harder to solve problems for.

Containers are a reaction to the flaws in the system. Fixing the root cause of those flaws is hard, because they're systemic. We would need to re-design everything to fix those flaws. Containers are a stop-gap measure that makes things easier to deal with.

The thing that I've learned about systems is they are all crap, but humans constantly munge them to keep them working. We don't end up with the best systems, we end up with systems easiest to munge. Every once in a while, a spasm of collective organization redesigns the system, and then we keep going with that one. It's evolution. So however crap our current system is, eventually it will change, probably for the better. It just takes a really long time.

[+] cnity|3 years ago|reply
This seems like something that WebAssembly (a misnomer) is set to resolve, to some degree. Sandboxing is easy because all of a wasm module's imports must be provided by the process that initializes the module, so you can view the imports as a kind of list of "requests for capabilities".

[+] gitgud|3 years ago|reply
> "The problem is not so much that selinux is too complicated (it is as complicated as it needs to be)..."

Completely disagree. If the target users are advising each other to disable it... then the tool is definitely more complicated than it needs to be.

Tools that don't hide any complexity are very painful to use. It feels like the creator doesn't care about the user and put no thought into the display of information or the workflows of the tool.

Solving a complex problem with a perceived complex tool is relatively easy... don't hide any complexity.

Solving a complex problem with a perceived simple tool is difficult... hiding complexity at the right time and revealing functionality based on the user's intentions and experience is not easy, but greatly appreciated.

[+] kd913|3 years ago|reply
Uhh isn't this precisely what the whole snap/flatpak movement is trying to fix?

I can talk about snaps: at least they offer a specific permission model that users can actually understand. I.e., via the interfaces, I can know that software x accesses my password manager, home directory, camera, etc. I can disconnect a given permission and have that enforced via the kernel and AppArmor.

The applications themselves bundle only libraries that themselves are sandboxed/snapped.
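
A sketch of what that looks like in practice (assumes snapd; "firefox" is used purely as an example snap name):

```shell
snap connections firefox        # list its interfaces: camera, home, audio...
snap disconnect firefox:camera  # revoke; enforced by the kernel via AppArmor
snap connect firefox:camera     # grant it back
```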

[+] phkahler|3 years ago|reply
Yeah, I came to comment on this:

>> It doesn’t help that no one wants to be told: “stop what you’re doing, spend a week to learn this other thing, make a tiny change, and then return to your real job”. They’re looking for solutions — preferably quick ones — and not a week’s worth of study.

If you're a system administrator running SELinux, why don't you already know SELinux? It's not like some obscure command; it's a big part of the infrastructure you're running.

[+] conjectures|3 years ago|reply
Pour one out for the Machine God.

[+] Edynamic77|3 years ago|reply
SELinux is so ridiculous.

If you want real security on Linux, where root is not God, go to this site:

https://www.rsbac.org/

It's a little difficult to implement (it requires kernel customization), but there is a learning mode to secure the whole Linux structure.

After implementation, being "root" no longer means being "God": you have to ask the Security Officer (SecOff).

You can talk about "Evaluation Assurance Level" with this security solution and push SELinux into the trash.

[+] mindwok|3 years ago|reply
As an experienced RHEL admin, a few years ago I probably would have said this is very bad advice in any professional context, and you should spend the time to learn it because it will save you one day.

Now, I think my advice would be: Put everything in a container, and learn how to run Docker or Podman (or k8s) in a secure way (ie no root containers, be very careful with volume mounts, etc). Yes, they aren’t as mature as SELinux, but containers aim to provide many of the same benefits that SELinux does (and even more) except in a way that’s much easier to manage. Even better is that these container runtimes often come with SELinux and AppArmor policies out of the box on good Linux distros.

[+] staticassertion|3 years ago|reply
I've always preferred apparmor. SELinux has always seemed radically more complex for very little benefit, unless you have a tightly constrained OS (like Android, where the VM does most of the work and every app has the same sort of security policy) or a team of admins working full time to maintain it (again, like Android).

Apparmor is weak in all the same ways SELinux is weak, at least in terms of the ways that actually matter - that is to say, a kernel exploit is the simplest way out of either. But anyone can write an apparmor profile for any program in an hour, and if you actually know wtf you're doing you can build very strong profiles and trivially test and maintain them.
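
For illustration, a minimal hand-written profile for a hypothetical /usr/local/bin/mytool. The rule syntax is standard AppArmor, but treat the specific paths and rules as a sketch, not a vetted policy:

```shell
# Write, load, and check a profile (run as root; assumes AppArmor tooling).
cat > /etc/apparmor.d/usr.local.bin.mytool <<'EOF'
#include <tunables/global>
/usr/local/bin/mytool {
  #include <abstractions/base>
  network inet stream,
  /etc/mytool.conf r,
  /var/lib/mytool/** rw,
}
EOF
apparmor_parser -r /etc/apparmor.d/usr.local.bin.mytool   # (re)load it
aa-status | grep mytool   # confirm the profile is loaded in enforce mode
```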

SELinux is "good" in that if you are building a system like Android, great, consumers get very tight policies for free, and SELinux is ultimately more expressive. But I think 99% of companies looking at an LSM MAC should probably roll apparmor.

[+] jandrese|3 years ago|reply
> There’s nowhere on the system where you can view the policies and look up why something might or might not work.

I always thought I had to be missing something with SELinux because this is what it seemed like to me and that can't be right. My impression is that the documentation for SELinux is extensive in all of the areas that aren't affecting you but it's really hard to nail down exactly what the policies are, what labels are available, which labels you should be using, and generally how everything interacts. Is there a SELinux Wiki somewhere that I've missed that has a simple breakdown of each and every possible label with interactions? Some tool I can use to generate said list?
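
The closest things I know of are the setools/policycoreutils CLIs, which each answer a piece of this, though there is nothing like a single wiki of labels. A sketch, assuming those packages are installed on an SELinux system:

```shell
seinfo -t | less                        # every type label in the loaded policy
sesearch --allow -s httpd_t | less      # everything httpd_t may do
semanage fcontext -l | grep /var/www    # which labels apply to which paths
matchpathcon /var/www/html/index.html   # the label a given file *should* have
```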

[+] kbenson|3 years ago|reply
I have to say, for sysadmin work I find this view sorta overblown.

My experience is with RHEL and derivatives, not Fedora, and for the most part those are very straightforward; while SELinux does rear its head every once in a while, it's usually not that problematic to work around.

When you have something failing that you need to fix and isn't an upstream problem (i.e. your own app or some third-party vendor), you set the system to permissive mode and let it run with the app long enough that you think you've hit all the use cases, and then you use ausearch and audit2allow to generate a module for it. And while the SELinux audit log entries (which ausearch shows) aren't super clear, you can learn to interpret the gist of them, and piping them to audit2why and audit2allow gives more context.

The real "trick" in the above is to make sure you run the system in permissive mode for a while. Otherwise you'll usually only see the first deny in a code path, which the app will then fail on, and you won't see subsequent ones. Permissive mode means nothing is actually denied, but everything that would be a denial is logged as such.

For things that ship with the OS repos for RHEL and derivatives, my experience is that very rarely are there problems or things that are not obvious to fix (running errors through audit2why will tell you when there's a boolean to toggle to allow what you want, such as outbound tcp connections from the http context that is causing some PHP app to malfunction).
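
Roughly, the workflow described above looks like this (a sketch; run as root, and "myapp_local" is just a module name I made up):

```shell
setenforce 0                               # permissive: log denials, block nothing
# ...exercise the app through all its code paths...
ausearch -m AVC -ts recent | audit2why     # explain each would-be denial
ausearch -m AVC -ts recent | audit2allow -M myapp_local   # write myapp_local.pp
semodule -i myapp_local.pp                 # install the generated module
setenforce 1                               # back to enforcing
```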

[+] glowingly|3 years ago|reply
Like others here, I had a similar experience. I set up a simple Minecraft server on an SELinux-secured OS. So far, so good. Then I wanted to set up a systemd service to start up and shut down the Minecraft server.

After ~1 hour of work, I came to the conclusion that I was going to disable SELinux. Another hour later, I had disabled SELinux.

Much as the article mentions, there didn't seem to be much good help, especially w.r.t. learning what the incantations meant and how to use them properly outside of a narrow path. Similarly, I did not have any decent way of introspecting what was going on in there. The error messages were of the "you must google this to even have a remote chance of figuring out what it means" variety.

I understand SELinux is probably designed for enterprise or organizational specialists and not for normies to touch. It just seemed a bit too extreme towards that end.

[+] ReganLaitila|3 years ago|reply
The Linux fiefdoms have a serious UX problem, SELinux being a prime example. As the article articulates, it's no wonder people just turn it off. If your subsystems are not consistent, discoverable, approachable, and, most importantly, logical, you're setting yourself up for lousy adoption. And just "reading the docs" does not solve this problem. Your subsystem does not get to consume my professional time slice.

The reason docker became the de facto entry point into containerization in yesteryear is that if you were dealing with 'containers' you were dealing with the 'docker' CLI entry point. Everything you did with Linux containers (in the mainstream) came through 'docker', and you could '--help' to your heart's content, or google as much as you required alongside others who had the same shared experience with 'docker'. We've moved on in recent years, but it's important to remember the power of a well-described, if imperfect, interface.

SELinux has none of this mindshare. What is my canonical entry point to SELinux on any particular distro? There is none. I have to specifically know to install support packages for 'audit2allow' or 'audit2why' to do any reasonable troubleshooting of why a process won't start. Why? Because the raw logs are so choked with implementation details that as an administrator I cannot make a real-world decision about what is broken on the system. Sysadmins do not start every day thinking about SELinux and memorizing its maze of tools and procedures. Something is starting to smell here...

For SELinux I need to know about, and sometimes explicitly install, half a dozen CLI tools to administer it, most of which don't follow any particular naming convention or entry point. I then need to learn a completely new markup for policies AND compile them AND install them using other esoteric tools. I need to explicitly refresh system state after making any changes, and return to my blunt 'audit2why' what-is-this tool to figure out if I did anything right.

The principles of SELinux are fine. The UX of SELinux in terms of getting shit done day to day is not.

[+] egberts1|3 years ago|reply
Right, right, and right.

Even after mastering all the fundamentals of SELinux, six months later a different audit-related problem surfaces and “what was that command again?”

This link often saves me:

https://access.redhat.com/documentation/en-us/red_hat_enterp...

In fact, I condensed it to the following steps (outlined elsewhere in this OP by patrck, new HN user):

couple sysadm red flags:

1) The article author is Testing in PROD

2) selinux debugging relies on auditd, so sanity checks required.

  df -P /var/log/audit # has space?
  tail -1 /var/log/audit/audit.log # is recent?
  semodule -DB  # disable dontaudit
  setenforce 0
  # run the failing test
  audit2allow -l
After which the selinux debugging experience boils down to:

    mk_semod() {
        # Build and install a local policy module from recent denials
        # (audit2allow -l = only events since the last policy reload).
        module_name=$1; shift
        audit2allow -l -m "${module_name}" -a > "${module_name}.te"
        $EDITOR "${module_name}.te" || return
        checkmodule -M -m -o "${module_name}.mod" "${module_name}.te"
        semodule_package -o "${module_name}.pp" -m "${module_name}.mod"
        semodule -i "${module_name}.pp"
    }
[+] josephcsible|3 years ago|reply
SELinux has a horrible misfeature called dontaudit, that lets policies using it deny actions without any evidence being logged anywhere. Because of the existence of this, the only reliable way to know if a problem is being caused by SELinux is to temporarily disable it and see if the problem goes away.
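There is a partial counter, for what it's worth: rebuild the policy with dontaudit rules stripped rather than disabling enforcement entirely. A hedged sketch — the helper name is invented; semodule -DB/-B and ausearch are the standard tools, and root on an SELinux host is assumed:

```shell
# Temporarily disable dontaudit suppression, reproduce the failure,
# show recent AVC denials (including formerly hidden ones), then
# restore the normal, quieter policy.
reveal_dontaudit() {
    semodule -DB || return      # rebuild policy with dontaudit rules disabled
    "$@"                        # run the failing command
    ausearch -m AVC -ts recent  # inspect the denials it produced
    semodule -B                 # re-enable dontaudit suppression
}
```

Still, needing to know that -DB exists at all rather proves the discoverability point.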
[+] AceJohnny2|3 years ago|reply
I see a few obvious areas of improvement, all of which are "implementation details" and don't undermine the core principles of SELinux.

> The SELinux denial audit log messages are too vague. You’re told that a label was denied reading from another label. Okay, what do those labels mean? Which programs? Which files, sockets, or whatever?

Improve logging in the SELinux system. Clearly it was able to map the programs/files to labels, so why aren't those programs/files provided in the error message?

> There’s nowhere on the system where you can view the policies and look up why something might or might not work.

Improve SELinux's policy analysis tools. As it is, it appears `seinfo` and `sesearch` provide too much detail and are impossible for non-experts to use. Are there alternatives that any sysadmin can use?
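sesearch can at least be narrowed if you already know the type you care about; a hedged sketch (httpd_t is a stock example type, and the guard keeps it harmless where setools isn't installed):

```shell
# Query the allow rules for one source type instead of dumping the
# whole policy. Guarded so the sketch is safe where setools is absent.
query_allows() {
    if command -v sesearch >/dev/null 2>&1; then
        sesearch --allow -s "$1"    # e.g. query_allows httpd_t
    else
        echo "sesearch not installed (package: setools-console)" >&2
        return 1
    fi
}
```

But that still presumes you know the label in the first place, which is exactly the non-expert's problem.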

> A package’s SELinux policy is — most often — not part of the package but is installed as part of a larger meta package containing tens of thousands of policies (e.g., the Fedora Linux project’s selinux-policy mega-package).

Policy should be included with the software package (or have a clear, computer-resolvable link to the package the policy applies to).

Seems to me that SELinux is an afterthought in most distributions, which I can understand in volunteer/community-driven distributions, less in Enterprise ones. How does RHEL do it?

--

However, the fact that these problems remain decades after the introduction of SELinux (2000!) indicates a lack of will on the part of the Enterprise Linux/security industry. Considering its origins at the NSA, which I'm sure is willing to train its sysadmins and isn't bothered by its obtuseness, I'm not surprised.

[+] egberts1|3 years ago|reply
I blame Red Hat squarely for not mandating a documentation file detailing these systems, these contexts, these users.

Red Hat has co-opted SELinux and become an IBM-style consulting firm: obscure error-code notation becomes a corporate revenue stream.

This is not a knock against SELinux nor against the inventor (NSA) thereof.

SELinux is a great privilege-boundary setter at an extremely granular level. It is superior to many other system audit approaches in every area but one: ease of use.

Another fault is that buy-in for SELinux was placed upon distro/package maintainers instead of upon developers.

Another aspect of ease of use: there is no auto-generation of an SELinux policy from a given executable. No tool has adequately attempted to scan code or object files to derive the needed policy. This mapping of code to resources is an often-skimped step of software development.

Unlike development for a GUI/window-manager environment, SELinux is NOT mandated by its own library to say “thou shalt use all of me” or you don't get to go to heaven (or be seen on the computer).

[+] ulzeraj|3 years ago|reply
> Red Hat Enterprise Linux (RHEL) has some of the most accessible documentation on creating custom policies.

Red Hat is the Pinterest of Google searches for Linux problems. You'll often find a page where someone is describing your problem, but the resolution is locked behind a subscription login.

[+] bravetraveler|3 years ago|reply
Eh, I generally disagree - but not completely.

Set SELinux to permissive for your personal systems at your own relative risk.

Never disable it outright: you'll end up with nothing labeled and no relevant policy adjustments in place, making re-enabling it later a nightmare.

I'd argue it's particularly advantageous for desktops where untrusted software is a more common occurrence... and other controls (eg: network isolation) aren't as robust.

For business use, it's probably worth some investment. Any serious compliance program will probably want an explanation for why you aren't using it/equivalent.

It's not some amorphous thing; it can be learned and handled. I used to routinely disable it, but I haven't in years.

There are clearly defined objects and areas of responsibility. It's just buried under piles of documentation.

semanage and all of the other tools are indeed cryptic, but you can generally get by with two steps:

- Identifying triggered hits

- restore contexts / update the policy as appropriate for this use case
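Those two steps, as a hedged sketch — the path and type arguments are illustrative, and ausearch, semanage, and restorecon are the stock RHEL-family tools:

```shell
# Step 1: identify triggered hits; step 2: restore or update contexts.
# This only defines a helper -- nothing runs until you call it as root
# on an SELinux host, e.g.: selinux_fixup /srv/www httpd_sys_content_t
selinux_fixup() {
    path=$1 type=$2
    ausearch -m AVC -ts recent                   # step 1: recent denials
    if [ -n "$type" ]; then
        # step 2 (policy update): record the desired label for this path
        semanage fcontext -a -t "$type" "${path}(/.*)?"
    fi
    restorecon -Rv "$path"                       # step 2 (relabel on disk)
}
```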

I won't say it's easy, an employer of mine paid for two weeks of training that focused a lot specifically on this.

A lot of it I can't even articulate that well, I've just developed a set of patterns

Edit; obligatory mention: https://stopdisablingselinux.com/

[+] dylan604|3 years ago|reply
After using CentOS for years, I have come to establish a debugging rule: if things just don't make sense, check whether you're fighting SELinux policy. It's one of those things that, after you've beaten your head against the desk and crashed into multiple brick walls, eventually works its way into your debugging process.

I guess it just shows how effective SELinux is if it is preventing the admin from doing something. /s

[+] bayindirh|3 years ago|reply
This is bad advice. SELinux might be hard, but it's neither unmanageable nor does it get in your way when configured correctly.

I've deployed both AppArmor and SELinux in high security environments, and they're nice layers of security to have.

Yes, it has a learning curve, but it's worth it.

[+] INTPenis|3 years ago|reply
Let me just assure everyone that it's not broken by design. Many of us use SELinux to harden systems and services on a regular basis.

But that said, it's not user friendly either. It's definitely a skill I list on my CV. Either learn it and feel better about it, or set it to permissive mode and live with that.

I agree with mindwok, SElinux defaults from distro + containers work just fine. Rarely any policy issues, and you get the isolation of containers on top of that. So I wouldn't set SElinux into permissive mode just to use containers, I'd use both. Which I currently am doing on my container host at home for example.

Edit: Today's press[1] has an excellent example of a vulnerability that should be stopped by SELinux default policies. Look at NetworkManager/dispatcher.d as an example: run semanage fcontext -l | grep 'NetworkManager/dispatcher' and you'll see that only that one directory carries the exec context.

So even if you could exploit a directory traversal and place malicious scripts in other places, it would not be allowed to execute them.

1. https://arstechnica.com/information-technology/2022/04/micro...
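To reproduce that check (hedged: semanage ships in policycoreutils-python-utils on Fedora/RHEL, and the exact type names vary by policy release):

```shell
# List file-context rules mentioning the dispatcher directory; on a
# stock policy only dispatcher.d entries should carry an exec-typed
# label. Guarded so the sketch is safe where semanage is absent.
dispatcher_contexts() {
    if command -v semanage >/dev/null 2>&1; then
        semanage fcontext -l | grep 'NetworkManager/dispatcher'
    else
        echo "semanage not installed (package: policycoreutils-python-utils)" >&2
        return 1
    fi
}
```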

[+] ohuf|3 years ago|reply
The question is: what good is a security system that tells me to "just allow whatever the problem is" once a problem comes up? Without being able to see the specifics of a request, blindly following the whitelisting could end up granting some hacked module access to critical files without my knowing, opening the system to exactly the dangers I'm seeking to stop.
[+] throwaway787544|3 years ago|reply
Operating system security was important back before VMs became cheap. At this point you are really wasting time if you invest significant effort in host security. It's the new network security.

Separate concerns into VMs and network connections separated by strong authentication, authorization, and encryption, and practice least privilege. It doesn't matter if your OS or containers are made of security Swiss cheese as long as compromising one thing doesn't lead to compromising another. For containers you need Firecracker VMs; there is no other strong isolation for Linux containers, and everything else is pretty easy to pop.

[+] nix23|3 years ago|reply
>It doesn't matter if your OS or containers are made of security swiss cheese as long as compromising one thing doesn't lead to compromising another.

I hope you never work with customer related data.

[+] fsociety|3 years ago|reply
I’ve heard several high-level security engineers take this stance in organizations with stringent security requirements. There may be some truth here when prioritizing work, but I have yet to see strong authn/authz prevent red teams from wreaking havoc.

It’s too complex. We like to think of systems as being fairly constrained, but in practice I am not convinced that is true. Once you start adding other business functions to your environment, like observability, you can introduce non-obvious gaps in your policies.

Would having OS security completely prevent an attacker from wreaking havoc? Probably not. But it would make it much more difficult. It can also protect you from container/VM escapes.

Are SELinux policies complex? Yes, but they are constrained to a host’s system and are yet another layer of defense that can be used to secure software.

I’ve seen again and again how an SELinux policy can prevent an exploit from causing harm. It can also be an effective and quick way to mitigate risks from a new 0-day being posted while software teams work on patching.

I am fairly convinced, or bullish, that making SELinux easier to configure is a solvable problem. The biggest hurdle is this puzzling movement within security to avoid OS-level controls and instead focus on network-level controls. It puzzles me why a well-resourced org thinks it can only invest in one or the other.

[+] jdhendrickson|3 years ago|reply
This article is a prime example of why one should think about the content of a page before accepting its premise.

I would argue that no solution in and of itself is complete.

I hope someday we can get away from the idea that containerization is the complete solution.

I have been a system admin for over 20 years, and it seems we still have not been able to get the concepts of defense in depth, and specifically many layers of defense on each level to soak in.

I see multiple people espousing the belief (in the comments on this article at least) that containerization has solved this issue and SELinux is no longer needed.

I can think of a certain chaos goose (Ian Coldwater) who has for years been providing plenty of reasons why it's bad to trust containers alone.

In my personal experience SELinux has stopped attacks in their tracks when a 0 day hit, until a patch from upstream became available and is an invaluable tool for blue team in general.

It is not the most intuitive of tools, but most truly powerful tools are not intuitive, that seems to be a common trade off.

[+] fargle|3 years ago|reply
How can I trust some opaque policy I cannot see? Not just trust that it is well-intentioned and not evil (Oracle Linux?), but that it does what it says, does what I need, and that its opaque developers have correctly envisioned what I need to do.

Security by obscurity is worse than none at all. Security by buying an "Enterprise" product is worse than that.

What SELinux says is "real MAC security is super complicated, but don't worry we've taken care of all that for you, but you can't understand it or change anything". What I say is "easy peasy; if you access the machine you're root. Make sure that either it doesn't happen or it doesn't matter".