
Unix file access rwx permissions are simple but tricky

67 points | hyzyla | 1 year ago | igoro.pro

74 comments


teo_zero|1 year ago

It's counterintuitive that the owner can have less rights than the others. Honestly, I've never seen it put in practice in any real-world file system.

Incidentally, this is also not very efficient: UNIX permissions as they are today require 9 bits, namely rwx for owner, rwx for group, and rwx for others. But in an alternative universe where owner's rights win over group's rights which win over others' rights, permissions could be coded in just 6 bits: 2 to express who can read, 2 for who can write, and 2 for who can execute. Each set of 2 bits would be interpreted this way: 00=nobody, 01=only owner, 10=group or owner, 11=everybody.

inetknght|1 year ago

Alas, you're missing some bits. The sticky bit in particular can be associated with each of those too. There are probably others that I don't remember off the top of my head.

> It's counterintuitive that the owner can have less rights than the others

I completely concur. I've also never seen it used in the wild, but I know about it because I stumbled upon it more than once building scripts and not being careful about what flags are set.

emmelaich|1 year ago

> owner can have less rights than the others.

Indeed, it's funny that you can `sudo chown root regularfile` and you'll then be able to read it, since the group permissions now apply rather than the user's.

emmelaich|1 year ago

I'd argue you could drop x as well. It's really an attribute, not a permission, since you can copy a file without x and then chmod +x it.

That fails for setuid files but that's a setuid thing not an x thing. It also fails I guess for executables that check argv[0] but probably not important.

nneonneo|1 year ago

This would also have the advantage of being cheaper to list out; instead of rw-r----- (0640), it would just be “gu-” (-, u, g, a for the four levels).

meonkeys|1 year ago

I attended a Tanenbaum lecture once where he talked about how silly it is that nothing happens if permissions are reduced on a file while some other user/process has an open handle to it, and that this is something Linux doesn't care to handle but MINIX does (or perhaps just that a kernel/filesystem should handle it, and few do -- I don't recall exactly). Surely an edge case (logging? what else? I never keep files open for too long), but I thought it was an interesting one.

You can test this in Bash: userA runs `cat > /tmp/newfile` (assuming a chmod or relaxed umask so /tmp/newfile is created with permissions 0664) and types in lines of text every few seconds; userB runs `tail -f /tmp/newfile` and watches the lines appear. Then userA runs `chmod 600 /tmp/newfile`, but userB can continue to `tail -f /tmp/newfile` and watch new lines appear.

warkdarrior|1 year ago

Yes, it's the equivalent of "perimeter security" in networking. Once you are inside accessing a resource (connected to a network node or reading from a file descriptor in the kernel), you don't lose that access.

dwattttt|1 year ago

The existing pattern leads to very useful usecases though: there are resources a server only needs to open once (e.g. during startup), and being able to then remove access while holding onto the one handle you're going to use is a security win.

dataflow|1 year ago

How would you want memory-mapped files to work, if permission changes affected open files?

1oooqooq|1 year ago

The nicer things are suid and sgid.

suid is to run things as another user without passwords. Mostly used for root access today and ignored for anything else. I personally think that's a missed opportunity when they added the unshare/namespace/capdrop stuff... would have been so nice if the interface to containers was a freaking simple 'suid as this lowly user' for a userland api. anyway.

and sgid ON DIRECTORIES, so that users can save files in a group that others can then also update. So you can have `/srv/http/htdocs userA webmasters drwxrws---`

then there's umask which may help or get in the way. and getfacl et al.

overall it's a mess that covers many usecases after you've been initiated.

remram|1 year ago

Interesting, I was just diving into the permission system today. I was wondering if it was possible to delegate administration of a directory, e.g. give permission to some non-root user to delete files created by others in that directory.

Turns out it doesn't seem possible. Even if you use ACLs, whatever default ACL you set can just be removed from sub-directories by their respective owners. This seems like a big blind spot, unless I just missed something; all those groups, access lists, bits, and I can't even do that?

emmelaich|1 year ago

Yeah AFAIK you'd have to make a frontend to `rm` and execute with sudo. I've done this a few times.

Relatedly (and possibly helpful for implementing half of your scheme): you can make a dropbox[0]-style directory by removing the read (r) permission while keeping write and search, and having some program continuously scan and rename dropped files to some random string.

[0] dropbox in the traditional meaning of course, not the cloud storage

jcovik|1 year ago

It actually never occurred to me. It's truly unintuitive.

saulpw|1 year ago

I've been wondering about this for awhile. Do we really need multiple users for desktop unix? I get that you want some division between system and user, to protect the user against themselves. And read-only files are similarly useful, if only because some devices are read-only. But do we really need user/group/other permissions for desktop unix? and all the complexity of groups, and euid, etc.

Edit: not sure why I'm getting downvoted. Is it that offensive to question orthodoxy?

ratorx|1 year ago

User is useful for isolation, not just between system and user, but also between different bits of the system. This is more useful on a server running multiple different services, but desktop software often has multiple services as well (although I can’t think of an example right now).

Groups are a bit more niche IMO, but without groups there's no real way to express the constraint that thing X uses files A and B while thing Y uses files B and C: how can they share B without making it globally accessible or duplicating it? That's probably a less frequent occurrence, but it does come up (again, more on servers than desktops).

eadmund|1 year ago

Those multiple users could be used to implement sandboxing.

And of course if one has a family then one might want accounts for Mom, Dad, Alice and Bob.

pdonis|1 year ago

> I get that you want some division between system and user

Which, as others have pointed out, means various system services running as other users (since you don't want them running as your user, and you also don't want them running as root). On most desktop unix machines that only one person uses, that's the main use case for multiple users (and for multiple groups since groups are used to manage access to various functions like printing, usb sticks, cd-roms, etc.).

DSMan195276|1 year ago

Users are still useful for isolation, many daemons on your system are likely running with different UIDs (or could be configured to do so) to increase isolation between them and the rest of the system.

Groups are a bit less useful (IMO), but still good for handing out access to things like device files. If a daemon should have permissions to XYZ /dev file then you add them to the group associated with it.

quotemstr|1 year ago

You want every app of yours to be a different "user" so they can't access each other's data without arbitration. The term "user" is an unfortunate Unix inheritance. There's no reason that a single human shouldn't have a hundred Unix "user" IDs at his disposal, one for each app, as he might on Unixes like Android or iOS.

jacobsenscott|1 year ago

It's a 70's permission system designed for 70's style computer usage - ie one computer shared by many people, with a relatively high level of trust among all the users.

arp242|1 year ago

It seems to me that stuff like iwd, ntpd, udevd, bluetoothd, dhcpcd, etc. etc. each running as a different user is pretty desirable. Every system works like this, including Windows.

The most obvious reason for this is so that a security problem in one of these daemons won't be able to read your Firefox cookies, install a rootkit, and stuff like that.

0cf8612b2e1e|1 year ago

It took me a shockingly long amount of time before I realized it was silly to have a username on my machines. I am the only person using this, why am I typing unnecessary cruft? Username switched to “a”, which ends up saving space in my home path and terminals.

pjmlp|1 year ago

Of course, nowadays even more so, unless users want to expose $HOME to the world.

hamandcheese|1 year ago

I feel like macOS had the right idea for desktop security, with a per-binary permissions model when it comes to accessing sensitive areas in $HOME.

I know this can be done on Linux using Flatpaks, snaps, and the like, but I would really appreciate it if sandboxing could be done at a more fine-grained level, without coupling sandboxing and distribution.

foresto|1 year ago

> Do we really need multiple users for desktop unix?

I do, and not just for system services as mentioned by others.

I have separate user accounts for general desktop use, gaming, software builds, software testing, and a variety of containers.

Isolation is useful.

wruza|1 year ago

Yes, we really need users for desktops on all operating systems. Fundamentally limiting a computer to a single user is immeasurably idiotic and I wonder how one comes to that question even.

Somehow it slipped in for phones and that’s a big part of why they suck. E.g. you can’t have work, life, private/second life and tmp/trash accounts on your phone and have to either carry multiple devices or mix lives together.

Chris_Newton|1 year ago

> Do we really need multiple users for desktop unix?

I find them valuable. For example, I have a workstation that is used for different projects with different clients, as well as administrative work for my own business. I want 100% separation between assets related to those different contexts.

It’s bad enough that we have package managers allowing package installation scripts to run arbitrary code, or software wanting you to install via:

    curl https://example.com/imnotmalwareipromise.sh | sh
I’ve seen people seriously make the argument that if your entire system gets nuked by malware through these installation methods then this is entirely your fault. That’s obviously an absurd victim-blaming stance, but the fact is that the risk still exists with modern software development systems.

At least if I have separate users for each client or each major project then the worst that is going to be compromised by a vulnerability introduced during the work for that client or project is that same work.

It’s not just about security though. It’s also about convenience and manageability. Those different clients and projects frequently require the use of specific security credentials and configurations, often for remote services that other clients/projects also use. In a perfect world, I’d like all of the software I use to be XDG-friendly, and I’d like each client/project to have its own home directory with its own independent XDG-style directories underneath, so each user has the configurations and credentials required for its own work and has no knowledge of or access to those of any other user. Finished a project? Archive/nuke that entire user and home directory as appropriate, and nothing is left lying around to break anything or leak anywhere later.

I’m currently playing with NixOS, which means I can also have a limited set of system-wide software installed and have specific additional packages installed per-user or even activated on demand when I change into a specific directory. Again, this means my system has only the software I actually need available at any given time, at the exact version I need for that specific work, and if something is no longer needed by anything I’m doing then it will automatically get cleaned up next time I do an update/rebuild.

None of this really works without the concept of separate users running different software in their own isolated little worlds, possibly concurrently on the same workstation and even sharing the same input/output devices (in a safe way where again they can’t unreasonably interfere with each other – something else that is not 100% there yet, but certainly a lot better than on de facto single-human-user operating systems). The only real alternative is to spin up something like a different virtual machine for each client/project where everything from the OS down is isolated, but I don’t really gain anything by doing that and it’s potentially more work to set up and more difficult to share input/output devices.

stefan_|1 year ago

No, a group called "wheel", "dialout" and users "irc games uucp list gnats mail news" are essential to the Linux desktop. The only cruft facing the Linux desktop today is the unification of bin and sbin.