How do I prevent accidental rm -rf /*?


Solution 1

One of the tricks I follow is to put # at the beginning of the line when using the rm command.

root@localhost:~# #rm -rf /

This prevents accidental execution of rm on the wrong file or directory. Once you have verified the command, remove the # from the beginning. This works because in Bash a word beginning with # causes that word and all remaining characters on the line to be ignored, so the command is simply not executed.
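In Bash's default (emacs-style) line editing there is also a one-keystroke version of this: Alt+# runs the readline insert-comment function, which prefixes the line with # and accepts it, so the command is saved in history without being executed (one of the comments below mentions this). With a hypothetical path:

root@localhost:~# rm -rf /tmp/build-area*
(press Alt+# instead of Enter)
root@localhost:~# #rm -rf /tmp/build-area*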

OR

If you want to protect an important directory, there is one more trick.

Create a file named -i in that directory. How can such an odd file be created? Using touch -- -i or touch ./-i

Now try rm -rf *:

sachin@sachin-ThinkPad-T420:~$ touch {1..4}
sachin@sachin-ThinkPad-T420:~$ touch -- -i
sachin@sachin-ThinkPad-T420:~$ ls
1  2  3  4  -i
sachin@sachin-ThinkPad-T420:~$ rm -rf *
rm: remove regular empty file `1'? n
rm: remove regular empty file `2'? 

Here the * expansion places -i on the command line, so your command ultimately becomes rm -rf -i. The command will therefore prompt before each removal. You can put this file in your /, /home/, /etc/, and so on.
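A minimal sketch for seeding a few such guard files at once (pick your own list of directories):

for d in / /home /etc; do
    sudo touch "$d/-i"     # prompts anyone who later runs rm -rf * from inside that directory
done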

OR

Use --preserve-root as an option to rm. In the rm included in newer coreutils packages, this option is the default.

--preserve-root
              do not remove `/' (default)
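On a system with a recent GNU coreutils rm, the literal command is simply refused (the exact message may vary by version):

$ rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe

Note that this protects only a literal / argument; it does not catch rm -rf /*, because the shell expands the glob into individual paths such as /bin and /etc before rm ever sees them.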

OR

Use safe-rm

Excerpt from the web site:

Safe-rm is a safety tool intended to prevent the accidental deletion of important files by replacing /bin/rm with a wrapper, which checks the given arguments against a configurable blacklist of files and directories that should never be removed.

Users who attempt to delete one of these protected files or directories will not be able to do so and will be shown a warning message instead:

$ rm -rf /usr
Skipping /usr
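The blacklist is configurable. On Debian-style installs the system-wide list is commonly /etc/safe-rm.conf, one absolute path per line (check your package's documentation for the exact location); adding a hypothetical path looks like this:

echo /srv/important-data | sudo tee -a /etc/safe-rm.conf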

Solution 2

Your problem:

I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).

The solution: Don't do that! As a matter of practice, don't use ./ at the beginning of a path. The slashes add no value to the command and will only cause confusion.

./* means the same thing as *, so the above command is better written as:

rm -rf *

Here's a related problem. I see the following expression often, where someone assumed that FOO is set to something like /home/puppies. I saw this just today actually, in the documentation from a major software vendor.

rm -rf $FOO/

But if FOO is not set, this will evaluate to rm -rf /, which will attempt to remove all files on your system. The trailing slash is unnecessary, so as a matter of practice don't use it.

The following will do the same thing, and is less likely to corrupt your system:

rm -rf $FOO
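If you do have to build a path from a variable, Bash itself can catch the unset case, as a couple of the comments below point out (set -u and ${FOO:?}). A minimal sketch, with FOO standing in for whatever variable your script uses:

#!/bin/bash
set -u                              # abort on any use of an unset variable
rm -rf "${FOO:?FOO is not set}"/*   # :? aborts with an error if FOO is unset or empty, instead of silently expanding to nothing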

I've learned these tips the hard way. When I had my first superuser account 14 years ago, I accidentally ran rm -rf $FOO/ from within a shell script and destroyed a system. The 4 other sysadmins looked at this and said, 'Yup. Everyone does that once. Now here's your install media (36 floppy disks). Go fix it.'

Other people here recommend solutions like --preserve-root and safe-rm. However, these solutions are not present on all Un*x variants and may not work on Solaris, FreeBSD, or Mac OS X. In addition, safe-rm requires that you install additional packages on every single Linux system that you use. If you rely on safe-rm, what happens when you start a new job and they don't have safe-rm installed? These tools are a crutch, and it's much better to rely on known defaults and improve your work habits.

Solution 3

Since this is on "Serverfault", I'd like to say this:

If you have dozens or more servers, with a largish team of admins/users, someone is going to rm -rf or chown the wrong directory.

You should have a plan for getting the affected service back up with the least possible MTTR.

Solution 4

The best solutions involve changing your habits not to use rm directly.

One approach is to run echo rm -rf /stuff/with/wildcards* first. Check that the output from the wildcards looks reasonable, then use the shell's history to execute the previous command without the echo.
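A minimal transcript of that workflow (the expanded file names here are hypothetical):

$ echo rm -rf /stuff/with/wildcards*
rm -rf /stuff/with/wildcards-old /stuff/with/wildcards-tmp
$ rm -rf /stuff/with/wildcards*        # recalled from history with the leading echo removed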

Another approach is to limit the use of rm to cases where it's blindingly obvious what you'll be deleting. Rather than remove all the files in a directory, remove the directory and create a new one. A good method is to rename the existing directory to DELETE-foo, then create a new directory foo with appropriate permissions, and finally remove DELETE-foo. A side benefit of this method is that the command that's entered in your history is rm -rf DELETE-foo.

cd ..
mv somedir DELETE-somedir
mkdir somedir                 # or rsync -dgop DELETE-somedir somedir to preserve permissions
ls DELETE-somedir             # just to make sure we're deleting the right thing
rm -rf DELETE-somedir

If you really insist on deleting a bunch of files because you need the directory to remain (because it must always exist, or because you wouldn't have the permission to recreate it), move the files to a different directory, and delete that directory.

mkdir ../DELETE_ME
mv * ../DELETE_ME
ls ../DELETE_ME
rm -rf ../DELETE_ME

(Hit that Alt+. key to recall ../DELETE_ME as the last argument.)

Deleting a directory from inside would be attractive, because rm -rf . is short and hence has a low risk of typos. Typical systems don't let you do that, unfortunately. You can do rm -rf -- "$PWD" instead, with a higher risk of typos, but most of them lead to removing nothing. Beware that this leaves a dangerous command in your shell history.
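A minimal sketch of that approach, with a hypothetical path:

cd /srv/scratch/old-builds
rm -rf -- "$PWD"      # a typo such as $PDW expands to an empty string, and rm removes nothing
cd ..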

Whenever you can, use version control. You don't rm, you cvs rm or whatever, and that can be undone.
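The same idea with a more recent VCS, as a sketch (the directory name is hypothetical):

git rm -r old-feature/                 # stage the deletion
git commit -m "Remove old-feature"
git revert HEAD                        # brings the files back if it was a mistake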

Zsh can prompt you before running rm with an argument that lists all files in a directory: unless the rm_star_silent option has been set, it asks before executing rm whatever/*, and rm_star_wait (off by default) adds a 10-second delay during which the confirmation is ignored. This is of limited use if you intended to remove all the files in some directory, because you'll be expecting the prompt already. It can help prevent typos like rm foo * for rm foo*.
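Both options live in ~/.zshrc; a minimal sketch of keeping the prompt and adding the delay:

# ~/.zshrc (zsh only)
unsetopt RM_STAR_SILENT    # keep the query before rm * (this is zsh's default behaviour)
setopt RM_STAR_WAIT        # also wait 10 seconds before the confirmation is accepted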

There are many more solutions floating around that involve changing the rm command. A limitation of this approach is that one day you'll be on a machine with the real rm and you'll automatically call rm, safe in your expectation of a confirmation… and next thing you'll be restoring backups.

Solution 5

You could always do an alias, as you mentioned:

what_the_hell_am_i_thinking() {
   echo "Stop." >&2
   echo "Seriously." >&2
   echo "You almost blew up your computer." >&2
   echo 'WHAT WERE YOU THINKING!?!?!' >&2
   echo "Please provide an excuse for yourself below: " 
   read 
   echo "I'm sorry, that's a pathetic excuse. You're fired."
   sleep 2
   telnet nyancat.dakko.us
}

alias rm -fr /*="what_the_hell_am_i_thinking"

You could also integrate it with a commandline twitter client to alert your friends about how you almost humiliated yourself by wiping your hard disk with rm -fr /* as root.
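An alias name can't actually contain spaces or slashes (as one of the comments below points out), so to get this effect for real you would wrap rm in a shell function. A minimal sketch, not a polished tool:

rm() {
    local arg
    for arg in "$@"; do
        case "$arg" in
            /|/bin|/boot|/etc|/home|/usr|/var)   # extend with the paths you care about
                echo "rm: refusing to touch '$arg'" >&2
                return 1 ;;
        esac
    done
    command rm "$@"      # fall through to the real rm
}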


Comments

  • Valentin Nemcev
    Valentin Nemcev over 1 year

    I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).

    alias rm='rm -i' and --preserve-root by default didn't save me, so are there any automatic safeguards for this?


    I wasn't root and cancelled the command immediately, but there were some relaxed permissions somewhere or something because I noticed that my Bash prompt broke already. I don't want to rely on permissions and not being root (I could make the same mistake with sudo), and I don't want to hunt for mysterious bugs because of one missing file somewhere in the system, so, backups and sudo are good, but I would like something better for this specific case.


About thinking twice and using the brain: I am actually using it! But I'm using it to solve some complex programming task involving 10 different things. I'm immersed deeply enough in this task that there isn't any brain power left for checking flags and paths. I don't even think in terms of commands and arguments; I think in terms of actions like 'empty current dir', a different part of my brain translates them into commands, and sometimes it makes mistakes. I want the computer to correct them, at least the dangerous ones.

    • user606723
      user606723 over 12 years
      FYI, you can also do rm -rf . /mydir instead of rm -rf ./mydir and kill whatever directory you were in. I find this happens more often.
    • slillibri
      slillibri over 12 years
      To use a gun analogy, this question says please make the gun recognize that I am aiming at my foot and not fire, but I don't want to have any responsibility for not aiming the gun at my foot in the first place. Guns, and computers, are stupid and if you do a stupid thing then you will get these results. Following along the gun analogy, nothing will keep you from hurting yourself except vigilance and practice.
    • Gilles 'SO- stop being evil'
      Gilles 'SO- stop being evil' over 12 years
      @slillibri Guns have safeties. Asking how to put better safeties on the rm command is a perfectly legitimate sysadmin question.
    • WernerCD
      WernerCD over 12 years
@slillibri this is less akin to asking how not to shoot myself... and more akin to asking how to protect anyone from getting shot. YOU may know how not to shoot your own foot... but what about your stupid use... rs... coworke... ers... I mean... 8 year old kid who is acting out a video game? If you have a gun in the house, it best have two locks and an alarm to alert you... This is no different. Protect your assets (family and priceless data).
    • Giorgio
      Giorgio over 12 years
      Maybe this is a silly suggestion, but why not use a tool like mc (midnight commander)? With mc you are always asked for confirmation when you want to delete a directory.
    • Valentin Nemcev
      Valentin Nemcev over 12 years
      @Giorgio I'll try to use vim file-manager more often :)
    • Paul
      Paul over 12 years
      sudo rm /bin/rm not recommended, but will prevent most rm's :-)
    • Kjetil Joergensen
      Kjetil Joergensen over 12 years
      @Gilles rm had safeties, by adding -r and -f, those safeties were removed. (-r allows rm to use readdir/rmdir, -f allows rm to use chmod). Using the gun analogy and adding a dash of hyperbole: this is (in my opinion) akin to asking how to avoid shooting somebody when pointing the gun at them (arguments to rm), turning off the safety (-rf) and pulling the trigger (rm).
    • aculich
      aculich about 12 years
      @ValentinNemcev Doing an accidental rm -rf /* is an age-old Unix rite of passage! Now it's time for you to learn the find command to save yourself from this kind of grief in the future. Certainly you should avoid some of the bad advice that is found in the answers to this question. Suggestions such as using specially-named -i file are akin to telling a kid learning to ride a bike to never pedal, just push on the ground with your feet, oh and also make sure to hold in the brake lever all the time. If you want to ride with the big boys, use find.
    • user
      user almost 11 years
      This rm & gun analogy is horrible. rm is something you use a dozen times a day - with & without safety in your regular programming life even if you aren't from Texas. Please don't make it a gun debate ;)
    • Admin
      Admin about 9 years
      @Paul but what if you do /path/to/rm -rf /bin/ rm. The /path/to bit is to stop some idiot running it and trying to report me - it had happened!
    • meso_2600
      meso_2600 about 8 years
What do you mean by "--preserve-root by default didn't save me"? It should have. What went wrong with --preserve-root?
    • Aaron
      Aaron over 7 years
      Limit root and sudo to folks that are cautious. Make backups of your data. Always use set -u in your bash scripts. If you are working with folks that blow away / often, then consider nfs diskless or initrd ram disk diskless booting. There are other ways to make / read-only but it gets tricky depending on your setup.
    • InQβ
      InQβ about 5 years
Type in the command without pressing Enter, check, breathe in, check again, breathe out, check once more, Enter.
    • Robin Khurana
      Robin Khurana about 3 years
      For what it's worth, many (including myself) consider alias rm="rm -i" to be a dangerous practice, rather than a safe one. Here's why: it causes a person to expect that rm will always ask them first whether they really want to do the thing. If they're then on some other system, or logged into a different account (perhaps root!), or whatever, and the alias isn't there... they expect it, don't get it, and a catastrophic removal very likely ensues. instead, use echo rm or type rm -i commands. Making these into habits is, IMHO, the best way to prevent these sorts of things.
    • val is still with Monica
      val is still with Monica almost 3 years
      @Paul rm is often a shell built-in, so effects are going to be surprisingly limited.
  • cjc
    cjc over 12 years
    I'm not convinced sudo would prevent something like this. You can make the same typo as the OP, even if you type "sudo" before the "rm".
  • Valentin Nemcev
    Valentin Nemcev over 12 years
    Mentioned working as root in edit
  • user3114802
    user3114802 over 12 years
    @Valentin: o_O !!
  • Valentin Nemcev
    Valentin Nemcev over 12 years
    safe-rm looks very good, looking into it now...
  • Valentin Nemcev
    Valentin Nemcev over 12 years
    @Khaled I'm using sudo and backups, I just want something better for this specific problem
  • Valentin Nemcev
    Valentin Nemcev over 12 years
I try thinking twice before doing dangerous things, but somehow it doesn't always work; I've destroyed things in the past because of inattention like this.
  • Henk Kok
    Henk Kok over 12 years
    safe-rm is neat. Also that's a nifty trick with the -i file. Hah. Silly bash.
  • WernerCD
    WernerCD over 12 years
    Amazing what kinda trickery is done in unix.
  • wnrph
    wnrph over 12 years
    I use Alt + # to comment out commands. Use it a couple of times and it becomes second nature to you.
  • Ali
    Ali over 12 years
    +1 for telnet miku.acm.uiuc.edu
  • David W
    David W over 12 years
Creating a file named -i is absolutely pure genius. I could've used that about a year ago when I accidentally ran rm -rf /etc/* on a VPS... (fortunately, I take nightly snapshots, so I was able to restore in under 45 minutes).
  • bukzor
    bukzor over 12 years
    @SachinDivekar: What you call a "regex" is in fact a glob. If "/dir/*.conf" were a regex, it would match "/dir///.conf" and "/dirxconf" but not "/dir/myfile.conf".
  • bukzor
    bukzor over 12 years
  • MadHatter
    MadHatter over 12 years
    OK, what am I looking for? Are you making the point that the regexp syntax for file-matching is different (and sometimes called by a different name) from that used in eg perl? Or some other point that I've missed? I apologise for my slowness of thought, it's first thing Saturday morning here!
  • MadHatter
    MadHatter over 12 years
    Sachin, at the risk of sounding peevish, it's a bit lame to come back and edit a copy of someone else's answer into your own. Your answer was a very good one without needing to harvest other people's ideas to bulk it out - have the confidence to let it stand on its own merits!
  • Sachin Divekar
    Sachin Divekar over 12 years
    @MadHatter sorry and thanks for opening my eyes. I got my lesson.
  • Danny Staple
    Danny Staple over 12 years
And you should use a VM or spare box to practice recoveries - find out what didn't work and refine said plan. We are getting into a fortnightly reboot - because there have been power outages in our building, and every time it has been painful. By doing a few planned shutdowns of all the racks, we've cut it from a few days of running around to about 3 hours now - each time we learn which bits to automate/fix init.d scripts for etc.
  • MadHatter
    MadHatter over 12 years
    Not to worry, and thanks for taking the criticism so well. I look forward to reading lots more of your wise answers on SF in the future!
  • Chris S
    Chris S over 12 years
    It's strange there's no recycle bin feature to get around all of this, even if it was a command line that just moved the folder recursively to a ~/rubbish folder
  • bukzor
    bukzor over 12 years
    These things that you're calling "regexp" are in fact globs. It's not a different regex syntax; it's not a regex.
  • Stefan Lasiewski
    Stefan Lasiewski over 12 years
    And try this command on a VM. It's interesting! But take a snapshot first.
  • Sachin Divekar
    Sachin Divekar over 12 years
    @bukzor +1 you are right. while on command-line, * is not regex, its a glob, used by bash for pathname expansion.
  • user2910702
    user2910702 over 12 years
    mv -t DELETE_ME -- * is a bit more foolproof.
  • MadHatter
    MadHatter over 12 years
    That argument could certainly be made; however, from the wikipedia article on regular expressions, I find that "Many modern computing systems provide wildcard characters in matching filenames from a file system. This is a core capability of many command-line shells and is also known as globbing" - note the use of "also known as", which seems to me to indicate that calling tokens containing metacharacters to match one or more file names regexps isn't wrong. I agree that globbing is a better term because it doesn't mean anything other than the use of regular expressions in filename matching.
  • Sachin Divekar
    Sachin Divekar over 12 years
    +1 for use of ls.
  • Mircea Vutcovici
    Mircea Vutcovici over 12 years
    It is genius. Sorcery would be touch -- -rf
  • Stefan Lasiewski
    Stefan Lasiewski over 12 years
@haggai_e: Good tip. When I was new to Unix, I once ran into a bug where rm -rf * also removed . and ... I was root, and this traversed up into directories like ../../.., and was quite destructive. I try to be very careful with rm -rf * ever since.
  • apgwoz
    apgwoz over 12 years
    While safe-rm is a great idea, it's bound to fail. It works fine on your own systems where you know it's installed, but start managing another system where you assume it is and it is not, and you're in trouble. You effectively train yourself that safe-rm will save you, and you become less careful. So, be careful with all of these tricks.
  • aculich
    aculich about 12 years
    It is good you suggest using find, but I recommend a safer way of using it in my answer. There is no need to use xargs rm since all modern versions of find have the -delete option. Also, to safely use xargs rm you also need to use find -print0 and xargs -0 rm otherwise you'll have problems when you encounter things like filenames with spaces.
  • aculich
    aculich about 12 years
    Previewing the files first before deleting them is a good idea, and there is an even safer and more expressive way to do it using the find as I explain in my answer.
  • aculich
    aculich about 12 years
    @eventi I agree that there is some terrible advice and ugly hacks in this thread. And it's definitely a good idea to look at something before destroying it, but there is an even better way to do that using the find command.
  • aculich
    aculich about 12 years
    @Giles Not using rm directly is good advice! An even better alternative is to use the find command.
  • aculich
    aculich about 12 years
    @MadHatter Checking to see what files match before you delete them is good advice, but there is a safer and more expressive way to do it with the find command.
  • aculich
    aculich about 12 years
    +1 for using some method of previewing your files before you delete them, however there are simpler and safer ways to do that using the find command.
  • aculich
    aculich about 12 years
    It's good you're trying to preview your files before deleting them, however this solution is overly-complicated. You can instead accomplish this very simply in a more generic way using the find command. Also, I don't understand why you say "the good thing about it is that it's only Bash"? It is recommended to avoid bash-isms in scripts.
  • thinice
    thinice about 12 years
My point wasn't about the nuances of xargs but rather about using find first, without deleting files, and then continuing..
  • aculich
    aculich about 12 years
    Yes, I think that scoping out files using find is a good suggestion, however the nuances of xargs are important if you suggest using it, otherwise it leads to confusion and frustration when encountering files with spaces (which is avoided by using the -delete option).
  • aculich
    aculich about 12 years
    And if you need the directory to remain you can do that quite simply by using find somedir -type f -delete which will delete all files in somedir but will leave the directory and all subdirectories.
  • kln
    kln about 12 years
To prevent us from "rm -rf /*" or "rm -rf dir/ *" when we mean "rm -rf ./*" and "rm -rf dir/*", we have to detect the patterns " /*" and " *" (simplistically). But we can't just pass all the command line arguments through grep looking for some harmful pattern, because bash expands the wildcard arguments before passing them on (the star will be expanded to all the contents of a folder). We need the "raw" argument string. That's done with set -f before we invoke the "myrm" function, which is then passed the raw argument string, and grep looks for predefined patterns.
  • aculich
    aculich about 12 years
    I understand what you are trying to do with set -f which is equivalently set -o noglob in Bash, but that still doesn't explain your statement that "The good thing about it is that it's only Bash". Instead you can eliminate the problem entirely and in a generic way for any shell by not using rm at all, but rather using the find command. Have you actually tried that suggestion to see how it compares with what you suggest here?
  • kln
    kln about 12 years
@aculich by only bash I mean no python or perl dependencies, everything can be done in bash. Once I amend my .bashrc I can continue working without having to break old habits. Every time I invoke rm bash will make sure I don't do something stupid. I just have to define some patterns that I want to be alerted of. Like " *" which would remove everything in the current folder. Every now and again that will be exactly what I want, but with a bit more work interactivity can be added to "myrm".
  • kln
    kln about 12 years
@aculich OK gotcha. No, I haven't tried it. I think it requires a significant change in workflow. Just checked here on Mac OS X: my .bash_history is 500 lines and 27 of those commands are rm. And these days I don't use a terminal very often.
  • aculich
    aculich about 12 years
    How is find -delete or find dir -delete a significant change in workflow? It accomplishes exactly the same thing as rm -rf ./* and 'rm -rf dir/*` without being prone to globbing errors or needing rubegoldberg-esque functions defined in .profile, plus if you want to preview the list of files before you delete them, just remove the -delete from the find command.
  • kln
    kln about 12 years
    It's not true that those are equivalent. BSD find doesn't default to the current dir and * doesn't expand to dot files so find -delete is not equivalent to rm -rf ./* , find dir -delete is equivalent to rm -rf dir and not rm -rf dir/* . Those are very minute differences, but I still need to adjust to a new way of thinking about things. Another thing, let's say I'm editing a big project tree with thousands of files and dozens of levels in the file hierarchy, "find dir1/dir1/dir2 dir1" is pretty close to "find dir1 dir1/dir/2 dir/1"
  • kln
    kln about 12 years
    I don't preview a list of files anywhere in my solution. If there is no * after a space on the command line (any other pattern can be defined) the user won't notice anything different.
  • aculich
    aculich about 12 years
    It's clear you don't preview a list of files in your solution... that is exactly the point I'm making with find that you can do that easily simply by leaving off the -delete.
  • aculich
    aculich about 12 years
    Sure, so BSD find doesn't allow you to omit the directory, so you have find . -delete instead of find -delete. Also, the '*' glob may or may not expand dot files... it depends on a setting, which by default matches the behavior you describe, but if the system or someone has set shopt -s dotglob then it will expand dot files, too. If you are actually dealing with thousands of files you may also run up against the "Argument list too long" error, but that's another one you can avoid by using find.
  • kln
    kln about 12 years
    Aha, I think now I understand what you mean. In essence you want to get rid of globbing errors by removing the need to use the * . Is that right? You build removing, which is a potentially dangerous thing, to be inconsistent with the rest of the system, which relies on globbing, thus telling the user to be careful with it.
  • aculich
    aculich about 12 years
My intent is to provide a safe, general, effective, extensible answer to the original question: "How do I prevent accidental rm -rf /*?" Using find . -delete is safe(r) because it avoids this very common accidental mistake. It is also safer because it makes it easy to preview the file list before deleting. It is a general method that works on any unix system and is not shell-dependent. It is effective because it accomplishes the same thing as rm -rf ./* but is more extensible; for example, adding -iname '*~' makes it easy to delete all *~ files in all subdirectories. How would rm do that?
  • Sachin Divekar
    Sachin Divekar about 12 years
@aculich, I have +1'd your answer, it's very simple and effective. I just provided possibilities of what can be done to prevent firing rm -rf accidentally on the wrong files. Somebody can use these tricks somewhere else.
  • kln
    kln about 12 years
Well, if you just want to prevent an accidental "rm -rf /*", all you need to do is to tell the shell to look for the pattern '-rf /*' (line 6 above). After that you NEVER EVER have to worry about it. And you won't need to reaccustom yourself to some totally new way of doing things. And as for previewing the file list, you could easily do that with less typing. But if you're going to bother to do that, why use find? Your solution is not a straight drop-in replacement for rm. Yeah, find can find all '*~' but what if you mess up the regex? You still have to check. Take another look at my answer. It's very extendable.
  • eventi
    eventi about 12 years
    I fail to see how find is simpler or safer, but I like your find . -name '*~' example. My point is that ls will list the same glob that rm will use.
  • Calmarius
    Calmarius over 11 years
    Though the I option won't echo back what you are going to delete.
  • Victor Sergienko
    Victor Sergienko over 10 years
    rm -rf $FOO won't help if you need to rm -rf $FOO/$BAR. cd $FOO && rm -rf $BAR will help, though it's way longer.
  • Paschalis
    Paschalis almost 10 years
    Simple solution here: superuser.com/a/765214/144242. It uses safe-rm and asks you before deleting each file. I hope it helps someone :)
  • Hawkeye Parker
    Hawkeye Parker over 9 years
@apgwoz and everyone: you can alias rm (safe-rm, -i, --preserve-root, whatever) to something like "myrm". That way, when you're on another system, you won't be depending on your rm customizations.
  • apgwoz
    apgwoz over 9 years
    It's true, you could do that. The suggestion of using find instead of -r and making sure it's selecting the files you're after is better advice in my opinion though.
  • Kyle Strand
    Kyle Strand over 9 years
    @MadHatter The distinction between regex and globs is not merely a matter of which character means what, or whether globs are a "flavor" of regex; standard globs in fact fail the formal definition of regex.
  • Kyle Strand
    Kyle Strand over 9 years
    I must not be old-school enough...what's the significance of telnet miku.acm.uiuc.edu?
  • Naftuli Kay
    Naftuli Kay over 9 years
    Try it and find out. It's non-destructive. If you're as paranoid as you should be, run in a VM.
  • Asclepius
    Asclepius over 9 years
    @VictorSergienko, with bash, how about specifying ${FOO:?}, as in rm -rf ${FOO:?}/ and rm -rf ${FOO:?}/${BAR:?}. It will prevent it from ever translating into rm -rf /. I have some more info about this in my answer here.
  • user454322
    user454322 over 8 years
    Listen to bsdnow.tv/episodes/2015_08_19-ubuntu_slaughters_kittens around 1:21:05 for an interesting and fun discussion about rm -rf /
  • Faron
    Faron over 8 years
To add to this discussion (Linux) -- I also use 'trash-cli' alongside safe-rm; this gives me another layer of protection, since any files removed via the command line are "trashed" into the 'trash can' on the desktop GUI. Indeed, rm is a command that must never be underestimated.
  • xenithorb
    xenithorb about 8 years
    -1 You can't alias commands with spaces in them let alone /* which is an invalid name
  • S. Acarsoy
    S. Acarsoy almost 8 years
    It's the first thing I install on every machine. It should be the default removal tool, with rm being only used when you need to absolutely remove something right now. I'm sad that it has not yet taken off, but one day it will. Probably after a very public instance of rm causing a huge problem which could not have been addressed by backups. Probably something where the time taken to recover plays a huge factor.
  • malthe
    malthe over 7 years
    A copy-on-write filesystem such as btrfs can help as well. You can easily set up a simple automated snapshot rotation that runs locally (in addition to external backup).
  • Victor Yarema
    Victor Yarema over 6 years
    "if FOO is not set, this will evaluate to empty string" - this is just not true in some cases. Bash and Zsh have some really useful options. One of them is -u (Treat unset variables as an error when substituting.). This one can be set by simply calling set -u. This way you can make yourself safe in case when you try to use undefined variable.
  • Shovas
    Shovas about 6 years
    +1 After using linux for 20 years, I still think there should be some kind of trash-can behaviour for rm.
  • Daniel Hitzel
    Daniel Hitzel about 6 years
    Very interesting discussion. I like your approach and made a little snippet. It is super inefficient, since it calls find at most 3 times, but for me this is a nice start: github.com/der-Daniel/fdel
  • gnucchi
    gnucchi almost 6 years
    Smart, but would not work on BSD rm, where options must come before file names.
  • gnucchi
    gnucchi almost 6 years
  • Sirex
    Sirex almost 6 years
    if i was allowed to i would, in a nanosecond. It's garbage compared to linux.
  • André Werlang
    André Werlang about 4 years
    @VictorYarema set -u is a must have, but you'll still need ${FOO:?} if there was a previous FOO=.
  • André Werlang
    André Werlang about 4 years
    @VictorSergienko if $FOO is empty and $BAR is either . or /, congratulations on your new empty home directory.
  • ka3ak
    ka3ak almost 4 years
I think the most common case is when rm -rf * is triggered in a script because some folder variable didn't exist or its value wasn't calculated correctly. Happened to me today. Fortunately I recognized it soon enough, killed the script, and the most important content wasn't deleted from my system. Only a regular backup is the solution, in my opinion: the most important content every day, less important content once a week, for example.
  • alper
    alper almost 4 years
    Is there any way to create touch ./-i as a hidden file under all folders?
  • alper
    alper almost 4 years
Would it be safe to write a script for echo rm -rf /stuff/with/wildcards* that continues if y is pressed and then performs rm -rf /stuff/with/wildcards*? @Gilles 'SO- stop being evil'
  • Gilles 'SO- stop being evil'
    Gilles 'SO- stop being evil' almost 4 years
    @alper It wouldn't cause additional harm, but it wouldn't help either, because typing y would become a reflex. A failsafe is only useful if it adds a safety check, not if it just adds an automatic step.
  • alper
    alper almost 4 years
You are right, I never thought of it from the perspective of automatic reflex. Maybe just copying the previous command into the clipboard and pasting it right away is a better option. @Gilles 'SO- stop being evil'
  • user3789902
    user3789902 almost 4 years
    a great case for snapshottable filesystems, with roll back. ZFS has this feature... docs.oracle.com/cd/E19253-01/819-5461/gbcxk/index.html
  • Alex
    Alex over 2 years
This should be the accepted answer: don't use / at all when doing rm -rf. I realized this after 2 years and did it wrong the whole time. There is no need for a trailing or prepended slash (with or without dot) in 90% of cases.