First of all, the whole point of the "-f" option is to disable confirmation, which really means "I know exactly what I'm doing". The easiest fix is to stop using that option all the damned time.
When you have strong permissions (e.g. running as "root"), you should never use patterns in destructive commands, period.
At best, you should perform a nondestructive pattern command such as a "find" and generate a precise list of target files that can be audited. For example, here is one way to produce a script of commands that deletes an exact list of matching files:
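A sketch of that approach (hypothetical pattern and filenames, not the original poster's script): generate a reviewable script with one explicit `rm --` line per match, audit it, then run it. This simple version assumes filenames without newlines or quotes.

```shell
#!/bin/sh
cd "$(mktemp -d)"                  # scratch dir for the demo
touch a.tmp b.tmp keep.txt
# Generate a reviewable script: one explicit "rm --" line per match.
find . -name '*.tmp' | sed 's|^|rm -- |' > delete.sh
cat delete.sh                      # audit the exact list before running it
sh delete.sh
```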
An amusing and helpful trick I learned is to keep a file named "-i" in the directories you want to protect. Glob-style pattern matching picks it up, and rm interprets it as the "-i" flag. It is of course not foolproof, since it can be subverted, but it has saved the day on occasion. I had a friend who, for totally incomprehensible reasons, would name his files with "*"s and "."s only and then try to delete one of them, with predictable and undesired results.
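A quick demonstration of the trick in a scratch directory (hypothetical filenames):

```shell
#!/bin/sh
cd "$(mktemp -d)"
touch -- -i precious.txt   # "--" stops touch itself from parsing -i
# The glob sorts "-i" ahead of ordinary names, so an unquoted *
# hands rm the -i flag first and rm switches to interactive mode.
printf '%s\n' *            # "-i" comes out before "precious.txt"
```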
Well, I have done stupid things myself, for example typing "rm -rf / some_dir" instead of "rm -rf /some_dir". I noticed because it was taking a wee bit too long. It is always good to do an ls with the intended pattern first, to check which files and directories are matched, before invoking rm with the same pattern.
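The ls-first habit, sketched with hypothetical paths:

```shell
#!/bin/sh
cd "$(mktemp -d)"                 # scratch dir for the demo
mkdir build
touch build/a.log build/b.log build/keep.txt
ls -d ./build/*.log               # preview exactly what the pattern matches...
rm -rf ./build/*.log              # ...then reuse the identical pattern
```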
In the other thread someone mentions using a file named "-i". A better approach is to use a file named "-X", which is an invalid flag for virtually every file-oriented command: they'll bail out, complaining that an invalid option has been supplied.
One company I contracted for had something clever going on. Not only did they litter -X files everywhere, attempting to remove one (rm -- -X) would result in an access violation of some kind, and your session would be killed as a result, preventing a recursive rm from continuing.
People alias rm to rm -i and then forever supply -f. That doesn't do any good at all. The real answer is to be more careful. It eventually becomes habitual. In about 15 years I have lost data to rm twice: once when I mistakenly removed the wrong folder, and once when I thought I had a copy of the data.
Because of the inherent dangers in -f, I rarely use it...with one major exception. Whenever I am trying to delete a directory with a git repository in it, the fact that a lot of the things in the .git directory are write-protected means that I have to either punch Y for what is likely dozens of files, or use -f (or some other incredibly ridiculous and equally dangerous thing like "yes | rm").
One: don't do anything as root. Root is the system's account, not your user account. If you need to run a service or application, make a new user for that! I've never needed root for anything other than system administration tasks, like apt-get or adding a user. Also, don't run multiple applications as the same user. If you have a web server, a blog, and a forum, you need three users. The web servers can talk to the backend servers via UNIX sockets or TCP.
Two: don't pass -f. Do you even know what -f does, or are you just cargo-culting it? If you need -f, rm will tell you. Don't use it until then.
Technically this doesn't prevent rm -rf /* itself, but it still goes a long way toward preventing a disaster: use a snapshotting filesystem, like NILFS2 http://en.wikipedia.org/wiki/NILFS
Some solutions here center on avoiding issuing rm -rf /* interactively... that's not enough! A broken script or unexpected variable expansion can wreak just as much havoc.
For example, rm -rf $SOMEDIR/* :
- if $SOMEDIR is empty, the command expands to rm -rf /*, or
- (if you suffer from bash) if $SOMEDIR contains a trailing space, it will be split into separate words: SOMEDIR='foo '; rm -rf $SOMEDIR/* => rm -rf foo /* (i.e., "remove ./foo and remove /*")
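A defensive spelling of the same command, as a sketch with hypothetical paths: quoting the expansion prevents the word-split, and the POSIX ${VAR:?} form makes the shell abort before rm runs at all when the variable is empty or unset.

```shell
#!/bin/sh
cd "$(mktemp -d)"
mkdir target
touch target/junk
SOMEDIR='target'
# Quotes keep the value one word (no stray /*), and :? aborts the
# whole command with an error if SOMEDIR is empty or unset.
rm -rf "${SOMEDIR:?SOMEDIR is empty}"/*
```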
An alias won't help if the full path to the command is specified; that is quite common in start-up scripts.
I have experienced consequences of rm -rf /* once or twice. Now I pause for a moment every time I am about to remove something and double-check the command. Sometimes even prepend `echo' for a dry run ;-)
Edit:
another nasty case of unintended deletion I had was due to a dumb Makefile rule:
$(CC) -o $(OUTFILE) $(INFILE)
for some reason $(OUTFILE) ended up empty, so output went to $(INFILE) -- a C source file -- effectively destroying its contents. How would I guard against that kind of data loss? A snapshotting filesystem...
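A cheap guard for that class of bug (a sketch, not the original Makefile): make recipes run through the shell, so the same POSIX ${VAR:?} expansion can refuse an empty output name before the compiler ever sees it. The `cc` invocation here is only echoed, never run.

```shell
#!/bin/sh
OUTFILE=""
INFILE="main.c"
# ${OUTFILE:?...} fails when OUTFILE is unset or empty, so the
# compile line can never fall through and clobber $INFILE.
if ( : "${OUTFILE:?OUTFILE is empty}" ) 2>/dev/null; then
    echo "would run: cc -o $OUTFILE $INFILE"
else
    echo "refusing to link: OUTFILE is empty"
fi
```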
I'm sure many of us know the feeling of dread that creeps over you when you suddenly realize an rm command you've dispatched is taking longer to complete than one would expect based on the contents of the directory you think you're deleting...
How about replacing rm with something like https://github.com/andreafrancia/trash-cli ? If you only purge the trash when necessary, and not automatically after every rm, you'd give yourself a way to recover.
Type rm -rf /* in your terminal emulator, place your finger over the Enter key and feel the temptation:
"We stand upon the brink of a precipice. We peer into the abyss—we grow sick and dizzy. Our first impulse is to shrink away from the danger. Unaccountably we remain... it is but a thought, although a fearful one, and one which chills the very marrow of our bones with the fierceness of the delight of its horror. It is merely the idea of what would be our sensations during the sweeping precipitancy of a fall from such a height... for this very cause do we now the most vividly desire it."
Edgar Allan Poe - The Imp of the Perverse
If I'm going to be doing something major to a lot of files, I often write a script that outputs the commands to execute, so I can verify what's going to be done. Then I re-execute it and pipe the output to bash.
It's not quite applicable to something used as off-handedly as rm, though it could be done. Something like:
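The original example isn't preserved in this thread; a minimal sketch of the idea, with a hypothetical pattern:

```shell
#!/bin/sh
cd "$(mktemp -d)"                    # scratch dir for the demo
touch old.bak older.bak keep.txt
# Print the commands instead of running them:
find . -name '*.bak' -exec printf 'rm -- %s\n' {} \;
# After reviewing that list, re-run it piped into a shell:
find . -name '*.bak' -exec printf 'rm -- %s\n' {} \; | sh
```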
With a plain *, any file whose name begins with '-' will expand to '-filename', which rm will try to process as an option (possibly failing or producing undesired results). Using
./*
expands to
./-filename
which won't be picked up by option processing. Note: You can also do
rm -rf -- *
to prevent option processing after the '--'.
edit: added a bunch of breaks to prevent the * from being converted to italics.
The former will gleefully delete all files/directories, even if there exists a directory entry named "-i", without asking.
The difference is in glob expansion: ./* keeps the prefix on every expanded item. As mentioned above, using any sort of path (relative or absolute) prefix when globbing will circumvent all the careful "-i" wards a superstitious sysadmin may have put in place.
Create a version of rm that detects when you try to delete the root filesystem, denies it, and makes you pass a --really-delete-filesystem-root flag to do so.
It does seem like this would be appropriate. All *nix systems share the desire to not accidentally rm -rf /, and it should be easy to check for inside rm.
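A sketch of such a guard as a shell wrapper (the flag name above is this thread's invention, and the wrapper here simply refuses outright). For what it's worth, GNU rm did grow essentially this behavior: it refuses rm -rf / unless given --no-preserve-root.

```shell
#!/bin/sh
# Hypothetical wrapper: refuse "/" as an operand and delegate
# everything else to the real rm.
rm() {
    for arg in "$@"; do
        if [ "$arg" = "/" ]; then
            echo "rm: refusing to operate on '/'" >&2
            echo "rm: call /bin/rm directly if you really mean it" >&2
            return 1
        fi
    done
    command rm "$@"
}
```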
Needing the effects of "rm -r /<something>/* " is rare. Just cd first.
I think I rarely use rm -r with an absolute path. And tab completion does something similar (list your targets) if you don't jump the gun with <enter>.
PS I'm entirely comfortable with my Alt-B as "rxvt -e 'sudo zsh'".
DanielRibeiro | 14 years ago
GUIs know this, but somehow this piece of UX is forgotten in command-line tools.
weaksauce | 14 years ago
You can even do "ls -rf ./somedirectory" and then just "^ls^rm" (at least in bash on OS X): the history quick substitution re-runs the previous command with ls replaced by rm, giving you "rm -rf ./somedirectory" against exactly what you just listed.
georgemcbay | 14 years ago
There should be a name for that.
wgx | 14 years ago
http://www.unwords.com/unword/onosecond.html
GoodIntentions | 14 years ago
Appropriate countermeasures, imho:
1. Run as an account with the right amount of access.
2. Don't use sudo. (su + password makes you think a bit more.)
3. This sounds dickish, but I mean it constructively: pay attention. The -f flag means something...
4. When all else fails, rsync'd folders are a beautiful thing :)
jtchang | 14 years ago
1. Backups 2. Sudo
I rarely do anything as root. If I am switching to root I already know what command I want to run. Backups are for when I do something stupid.
dools | 14 years ago
find . -name "whatevs"
Hit enter, verify that what's listed is what you expect to delete, then hit the up arrow and append:
| xargs rm -rf
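One caveat with that pipeline: plain xargs splits on whitespace, so filenames containing spaces or quotes get mangled. A safer sketch using the NUL-delimited variant (a GNU/BSD extension, with hypothetical filenames):

```shell
#!/bin/sh
cd "$(mktemp -d)"                     # scratch dir for the demo
touch "a b.tmp" keep.txt
# -print0/-0 delimit names with NUL bytes, so spaces, quotes, and
# newlines in filenames can't split or mangle the argument list.
find . -name '*.tmp' -print0 | xargs -0 rm -rf
```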