
My 10 UNIX Command Line Mistakes

103 points | gnosis | 15 years ago | cyberciti.biz | reply

64 comments

[+] bradleyland|15 years ago|reply
One of my worst was `source ~/.bash_history` via accidental tab completion. I was expecting .bash_profile. The nasty part was, I couldn't kill it because of all the mess (system instability) it was creating. A couple of 'cd ..' calls and a 'rm -rf *' ended up nuking some root directories.

I ended up restoring from backup.

[+] kragen|15 years ago|reply
You have just added a new reason to my list of reasons never to put `rm -rf *` into my command-line history again.
[+] bho|15 years ago|reply
Could you add 'rm -rf' to histignore? I remember it works with patterns but can't remember if you can use it for specific commands. I'm not on my linux box now so I can't test it.
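It does work for specific commands as well as patterns. A minimal sketch, assuming bash (the patterns here are just examples for a ~/.bashrc):

```shell
# bash: HISTIGNORE is a colon-separated list of glob patterns; any
# command line matching one of them is not saved to history.
export HISTIGNORE='rm -rf *:halt:shutdown *'
```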
[+] sipefree|15 years ago|reply
With so many SSH accounts all over the place, I've started using color-coded prompts for all my connections:

http://i.imgur.com/KKQHj.png

This way I can be sure I'm typing `halt` into the right box, because it's easier to spot the wrong color than the wrong hostname in 10pt text.
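A rough sketch of the same idea for bash users (goes in ~/.bashrc; the "prod*" hostname pattern is purely illustrative):

```shell
# Color the prompt by host class so a production shell is unmistakable.
case "$(hostname)" in
  prod*) host_color='\[\e[41;97m\]' ;;  # white on red for production
  *)     host_color='\[\e[42;30m\]' ;;  # black on green elsewhere
esac
PS1="${host_color}\u@\h\[\e[0m\]:\w\$ "
```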

[+] RK|15 years ago|reply
My lazy way to do something similar with Gnome Terminal is to make an "SSH profile" with a different background color. This helps reduce my mistakes.

Usually I'll just do things like df on the wrong machine and wonder what's going on with my disk space, but I've made a few less happy errors in the past.

[+] ohkine|15 years ago|reply
Unrelated to anything but: Your terminal colours seem a bit brighter than the standard ones in Snow Leopard. Are Terminal's colour capabilities improved in Lion, or are you using that SIMBL plug-in, or...?
[+] wglb|15 years ago|reply
Have you considered using 'screen' instead?
[+] chalst|15 years ago|reply
> I wanted to shut down VPN interface eth0, but ended up shutting down eth1 while I was logged in via SSH

Not done that exact one, but I've added firewall rules that cut me out in exactly the same way.

Another favourite is typing "shutdown -h now" into the wrong terminal: I know a few people who have admitted to doing that.

[+] kree10|15 years ago|reply
I always hated remote firewall changes. I usually had two terminals open. In one, I'd run my "cya" script that slept for a minute, restored the old firewall rules, slept another minute, then did a reboot. In the other terminal, I'd run the new firewall rules. If I didn't screw up, I'd go back to the other terminal to stop the "cya" script. If I did screw up, the worst case was a few minutes of downtime that I'd hope nobody would notice.
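A rough sketch of such a "cya" script (the saved-rules path and restore command are assumptions; it must already be running before you apply the new rules, and you kill it once you confirm you still have access):

```shell
#!/bin/sh
# cya.sh: rollback safety net for remote firewall edits.
# Start this FIRST, apply the new rules in another terminal,
# then kill this script if you can still log in.
sleep 60                                      # grace period
iptables-restore < /root/iptables.rules.old   # assumed saved-rules file
sleep 60
reboot                                        # last resort
```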
[+] xiongchiamiov|15 years ago|reply
> Another favourite is typing "shutdown -h now" into the wrong terminal: I know a few people who have admitted to doing that.

I do that less now that the machine's hostname is prominently displayed in my prompt.

[+] code_duck|15 years ago|reply
Guilty of "Typing UNIX Commands on Wrong Box" here.

I thought I was logged into my home server. Turns out I shut down the main web server. Oops!

[+] there|15 years ago|reply
> I wanted to append a new zone to the /var/named/chroot/etc/named.conf file, but ended up running:

./mkzone example.com > /var/named/chroot/etc/named.conf

In at least tcsh, "set noclobber" will help with this. When you try to overwrite a file that exists (usually from doing > instead of >>), you will get an error that the file exists instead. If you really want to overwrite the file, you have to use ">!".
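For comparison, bash (and ksh) have the same guard; a minimal sketch:

```shell
set -o noclobber          # bash/ksh equivalent (POSIX sh: set -C)
echo hi > file.txt        # creating a new file still works
echo hi > file.txt        # now fails: "cannot overwrite existing file"
echo hi >> file.txt       # appending is still allowed
echo hi >| file.txt       # bash's escape hatch (tcsh uses >!)
```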

[+] gnosis|15 years ago|reply
It's similar in zsh. Unless CLOBBER is set, you need to use ">!" or ">|" to overwrite a file when redirecting output.

A nice safety feature that's already saved my bacon.

[+] Dobbs|15 years ago|reply
My worst:

`cp -r folder backup`. Turns out folder was a symlink. Then I messed up my script and deleted all of the contents. The backup was destroyed along with the original, since I had copied the symlink instead of the directory. Luckily I had just set up a slave server and was able to copy 95% of the files from there.

Recently I did a `rm -rf /directory/` instead of a `rm -rf /directory/directory2`. Once again luckily I had real backups.

Every time I screw up, or a system has problems (stupid hard drives), the belief that backups are the most important part of a system is reinforced. It basically doesn't matter what you do: if you have proper backups, you can recover.

The catch there is that no backup is truly a backup until it is tested.

[+] gnosis|15 years ago|reply
"no backup is truly a backup until it is tested"

Unfortunately, even that is not enough, in the long run.

You have to periodically retest your backups, and transfer them to new media as they age.

It's also a good idea to store backups off-site (preferably in multiple geographically-dispersed locations).

And, it almost goes without saying that the more frequently you do backups, the less data you'll lose when you actually have to restore from them.

Before long, it's a full time job just to keep the backup system humming along smoothly, testing and retesting backups, and transferring them from old media to new.

Of course, this problem gets a lot harder and more time consuming as the quantity of data you need to backup/restore grows.

I keep reading about the crazy amounts of data generated by projects like the LHC, and my mind boggles at what the challenges in doing backups of that amount of data must be like.

[+] kragen|15 years ago|reply
Here's a compendium of tips for avoiding mistakes like these, largely culled from other comments here, but some from the article.

1. Never `rm *`, especially `rm -rf *`. If you must, `rm -rf ../tmp/` or similar. You do not want this command to be in your history. Especially in the form `rm -rf * & cd; trn`.

2. Set up backups early on. Use `git` or similar for as much as possible.

3. `noclobber` (or not setting CLOBBER in zsh) may help. I found `noclobber` irritating because it also refused to append to nonexistent files.

4. Take three deep breaths after typing and before executing a `shutdown` or `rm -r` command, or typing a password at a password prompt. (Did you just give your intranet server password to whoever has backdoored your weekend-project VPS?)

5. Keep backups in a format that you can readily verify is usable. `rsync` with `--link-dest` is handy for this, as is `git`.

6. When referring to a directory that you believe exists, don't say `foo/bar/baz`. Say `foo/bar/baz/.` so that whatever command doesn't try to create the directory `baz` if it doesn't exist yet.

7. Color-code prompts or terminal windows by server.

8. Tab-complete to ensure that things exist.

9. Use `sudo` instead of root shells.

10. Keep your config files in source control. RCS is okay, but Git is better.
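Tip 6 in practice, as a small sketch (paths are illustrative):

```shell
# A trailing /. only resolves if the directory really exists, so a
# typo'd destination fails loudly instead of silently renaming.
cd "$(mktemp -d)"
touch file.txt
mv file.txt backusp/.    # typo: mv errors out, file.txt survives
mkdir backups
mv file.txt backups/.    # correct: moves the file into the directory
```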

[+] adulau|15 years ago|reply
Nice idea. Usually you see blog posts with the latest success or the code you are proud of, but showing mistakes and errors is not that common. I'm dreaming of academic papers where, instead of all the wonderful results, you'll see the path of errors taken to reach the final result.
[+] vog|15 years ago|reply
> Usually you see blog posts with the latest success or the code you are proud of

I'm observing the exact opposite. While the official project sites show indeed only success stories, I'm observing a trend that those sites increasingly add a "blog" section where the developers talk freely about their experiences and mistakes.

> I'm dreaming of academic papers where, instead of all the wonderful results, you'll see the path of errors taken to reach the final result.

This would not only be great, but should be mandatory (maybe by law) in order to get rid of the publication bias. (http://en.wikipedia.org/wiki/Publication_bias)

[+] SoftwareMaven|15 years ago|reply
The worst I've done is unplug the internet connection of an ISP, who happened to be a competitor of the company I was working at. Not a good day at all...
[+] Graham24|15 years ago|reply
First day at college the lecturer says "As soon as you've finished editing your programmes, type 'rmcobol < a.txt > a.out'".

So, having finished first, I type "rm cobol < a.txt > a.out", and I spend a while wondering what "cobol not found" means before I realise what I've done.

[unix syntax from memory, may be wrong]

[+] mrpollo|15 years ago|reply
I once did `sudo mv . /var/www/` while in the root directory... I had been copying files to my webserver. Before I knew it my connection had closed and I couldn't ping the server. After running to the colo I found I had no backups: my rsync had been failing for the last couple of days and I had failed to check the logs. After pulling an old copy of the site from what I think was one of the developers' laptops, I was able to get the site running, old and without the latest db. After a while I mounted the drive and to my surprise everything was still there. Lesson learned: always check your current path... I always find myself just typing as fast as I can, and sometimes while switching between ttys I lose track of where I am...
[+] unknown|15 years ago|reply

[deleted]

[+] ominous_prime|15 years ago|reply
Many systems, like Solaris and Ubuntu, don't have a firewall enabled by default.
[+] InclinedPlane|15 years ago|reply
My worst was uninstalling libc on a linux machine. I had been going round and round trying to get the right packages and versions on the machine and ended up getting it in a bad state and for some reason got it into my head that it'd be a good idea to uninstall libc and install a newer version. Note that when you do so (at least on debian) you will not just be given a y/n prompt, you have to type in some sentence such as "Yes, I know this is a very bad idea." at the prompt before proceeding. When I'd reached that point I figured why the hell not, it'll be curious to see what the result is. The machine became pretty much unusable after that and I ended up just reinstalling from scratch, IIRC.
[+] yesbabyyes|15 years ago|reply
> On Linux, the killall command kills processes by name (killall httpd). On Solaris, it kills all active processes.

As a young programmer on my first real programming job, at an online stock broker in 1999, I did this. I had been running Linux since '95 and was familiar with Solaris from college. I had no idea about this, though. I was so ashamed, but I didn't face any dire consequences.

I will never forget my lesson (then again, I will probably not be managing Solaris anymore).

I have also done variations on "ifconfig eth1 down" (or messing around with iptables) on a remote computer.
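A habit that sidesteps the trap: pkill/pgrep match processes by name on both Linux and Solaris (where they originated), so the command means the same thing on either system. A sketch:

```shell
# pkill behaves like Linux "killall name" on Solaris too, avoiding
# Solaris killall's kill-everything surprise.
pgrep -l httpd     # preview which processes would be signalled
pkill httpd        # send SIGTERM to just those
```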

[+] dedward|15 years ago|reply
I'll cop to the same - though thankfully it was my workstation and not a production server (and more than a decade ago).

I would imagine just about everyone from the same era who was newly exposed to administering both SunOS/Solaris and Linux probably did the same thing. (The longbeards who already knew SunOS/Solaris would already know better.)

But seriously... what a stupid command. What was the practical value of "killall" on Solaris? Seriously?

[+] lazyant|15 years ago|reply
I got burnt the first time I used 'rsync'. It seems natural that when you are on a computer and you do "something another_computer", you are the 'client' and 'another_computer' is the 'server', like when you check out code locally from an svn server or whatever. rsync has this backwards from that convention. I added the --delete option to 'merge' files (so it didn't copy the ones I already had locally), and I ended up deleting files on the 'server'.
[+] sjs|15 years ago|reply
rsync follows the convention set by mv, cp, ln, etc. Source then destination.

Even git clone follows this convention.

[+] gnosis|15 years ago|reply
When doing stuff like this for the first time, I like to try it out on some test files before going for the real thing.
[+] samuel1604|15 years ago|reply
Quoting:

> 4. Use CVS to store configuration files.

Using CVS? Even if it was written in 2009, that's still pretty backward...

[+] kgo|15 years ago|reply
Heck, I've used rcs in the past five years because the server didn't even have cvs installed. It's not exactly like you need advanced branching/merging, atomic commits, and a distributed system for a one line change to /etc/hosts.
[+] geirr|15 years ago|reply
One of my worst errors was trying to find a function definition while I was coding on a C++ project. I managed to execute 'grep operator> *'. First I found it strange that the grep didn't return anything. Then I started opening my files, and they were suddenly all empty. It took a few minutes before I realized exactly what I had done... (Of course, I hadn't committed anything in the previous 5 hours.)
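The underlying trap is the unquoted `>`: the shell parses it as an output redirection rather than part of the pattern (and zsh's multios will happily truncate every file the glob matches). Quoting the pattern keeps the `>` as an argument to grep:

```shell
# Safe: the > stays inside the search pattern instead of becoming
# a redirection that truncates files.
grep 'operator>' *.cc
```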
[+] div|15 years ago|reply
I pulled the classic more than once:

mv stuffwithoutbackup tosomeotherplacewithatypointhepath

curse at typo

rm -rf tosomeotherplacewithatypointhepath

curse more vigorously

[+] billswift|15 years ago|reply
The best way I have found to reduce typos in a pathname is to use tab completion, even when just typing the path would be faster.
[+] kgo|15 years ago|reply
# on windows box

scp file linuxbox:~

# on linux box

# I now have a file called /home/me/~

rm ~

# some stupid error...

rm -fr ~

# that's taking a while...

NOOOOOOO!

[+] megamark16|15 years ago|reply
I did almost the same thing, except someone created a symlink named "*" (just an asterisk). I "ls -l"'d the current directory, saw the weird broken symlink, and without thinking did "rm *". The slow motion double take immediately after hitting enter was like a shot from a movie. "NOOOOOO!".
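For files with hazardous names like "*" or "~", quoting (or anchoring with ./) keeps the shell from ever expanding them. A sketch using the names from these two stories:

```shell
# Delete only the file literally named "*" or "~", with no glob
# or tilde expansion.
rm './*'      # quoted path: matches nothing but the file named *
rm -- '~'     # quoted, so no expansion to $HOME; -- guards odd names
```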
[+] Sniffnoy|15 years ago|reply
...OK, I have to ask. Why would you name a file "~" in the first place?
[+] wnoise|15 years ago|reply
On the systems I used to maintain, every single configuration file was generated by a script. If you make a mistake, you just re-edit and re-run the script.