The answer depends on what you mean by "really gone", and also on your file system, your OS, and rm itself.
It can also depend upon what removed the file: did you do it with "rm" or did some other program or script do the removal? To understand what options you may or may not have, we need to look at what "rm" does.
First: traditional Unix:
When you rm a file, all that happens is that the directory entry is set to point at 0 instead of the inode it pointed at, and the link count for that inode is decremented by one. If the link count has reached 0, and no process has the file open, then the inode itself is marked as unused, and the disk blocks that the file used are returned to the free list. If a process has the file open, then the release of the inode and the reclaiming of disk blocks waits until the process is done.
So, at this point, your data is still there until some other process needs disk blocks and these happen to get reused. If nothing asks for new blocks, or these blocks get put at the end of the free list, you might be able to recover the data by scrounging through the free list.
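You can watch the link-count mechanics yourself with a hard link. This is a minimal sketch assuming GNU coreutils (`stat -c`; BSD stat wants `stat -f %l` instead):

```shell
# rm only removes a directory entry and decrements the link count;
# the data blocks survive as long as any other link still exists.
tmpdir=$(mktemp -d)
echo "important data" > "$tmpdir/original"
ln "$tmpdir/original" "$tmpdir/backup"    # second directory entry, same inode
links=$(stat -c '%h' "$tmpdir/original")  # link count is now 2
rm "$tmpdir/original"                     # count drops to 1 - nothing is freed
recovered=$(cat "$tmpdir/backup")         # the data is still all there
```

Only when the last link goes away (and no process has the file open) do the blocks return to the free list.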
(But see Freeing disk space with ">" also.)
Secure Unix systems may zero disk blocks before releasing them to the free list; you wouldn't have any way to get your data if that was in effect.
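If you want that behavior deliberately, you can overwrite a file's blocks before unlinking it. GNU shred does both in one step (note its own man page warns this is not reliable on journaling or copy-on-write filesystems):

```shell
# Overwrite the file's contents, then unlink it (-u), so the freed
# blocks no longer hold the original data on simple filesystems.
tmpdir=$(mktemp -d)
echo "secret" > "$tmpdir/secrets.txt"
shred -u "$tmpdir/secrets.txt"
```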
Now: Some file systems have various schemes to keep "versions" of your data or to provide an undelete feature. In that case, older copies of your data may still be there. However, even if your file system supports something like that, it may not be turned on by default: see Does SCO OpenServer5 support the 'undelete' feature? (file system versioning) for example.
Linux ext2 file systems document an undelete attribute in chattr, but to my knowledge it isn't actually implemented at all. See Undeleting files on the Linux ext2 filesystem with debugfs and e2undel.
See Shred for comments and links for ext3 filesystems.
Newer Ubuntu has more help, too: Ubuntu DataRecovery.
Some systems mess with rm either by an alias or a binary to store the "deleted" file somewhere safe for a while. These systems will provide some recovery method also, of course.
Many systems alias "rm" to run "rm -i". That can be very annoying, but you can easily avoid it by being explicit: /bin/rm file.
That alias doesn't help you with getting back deleted files, but it's common enough that I want to mention it here. On my systems, I remove that alias immediately.
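Besides spelling out /bin/rm, there are a couple of other standard ways around such an alias (the file names here are just placeholders; note that non-interactive shells don't expand aliases anyway):

```shell
# Suppose your shell has:  alias rm='rm -i'
# Three ways to run the real rm without tripping the alias:
alias rm='rm -i'           # set up the annoying alias for illustration
tmpdir=$(mktemp -d)
touch "$tmpdir/a" "$tmpdir/b" "$tmpdir/c"
/bin/rm "$tmpdir/a"        # a full path never goes through alias lookup
\rm "$tmpdir/b"            # a leading backslash suppresses alias expansion
command rm "$tmpdir/c"     # the POSIX 'command' builtin ignores aliases
```

You can also just run `unalias rm` to be rid of it for the rest of the session.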
Seriously: how often do you screw up and remove something you shouldn't have? Sure, it happens: misplaced wildcards or a slip of the finger can do it. More than once I've meant to do something like "rm *.bak" and accidentally put a space after the "*". This stuff happens, but it happens so infrequently that it's not worth putting up with "rm -i" the rest of the time.
And when I do screw up, how bad can the damage be if I'm not running as root?
If you do happen to be root, the damage can be horrible. I have to admit that yes, I have (once) accidentally typed "rm -r /". Worse, I did it during a training session on a live system. The results were dramatic and embarrassing. Fortunately I saved most of it by immediately pulling the power plug, so little had a chance to commit from cache, but I still had a bit of restoring to do. That was a bad day.
I think that while "rm -i" is annoying, a new "-ii" (intelligent interactive) could be less so. In my version, "rm -ii" would be silent and immediately obedient when asked to remove one file. With more than one, the first thing it would do is get an approximate count - to save time it wouldn't look very far, but might instead say "You are matching more than 20 files". Perhaps both of these limits should be user configurable: "rm -ii 1:20" is clumsy, but as it would probably ordinarily be used as an alias, that's unimportant. The most important part is the simple addition of a "c" answer, which would stop asking for confirmation and just remove the remaining files. You could give that at any time, of course. That "ii" could save you from disaster without being quite so annoying.
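This hypothetical "-ii" behavior is easy enough to approximate as a shell function today. The function name, thresholds, and prompt format below are all my own invention, not a real rm option:

```shell
# Sketch of the hypothetical "rm -ii": silent for a single file,
# interactive for several, with "c" meaning "stop asking, remove the rest".
rm_ii() {
    if [ "$#" -le 1 ]; then
        rm -f -- "$@"          # one file: silent and immediately obedient
        return
    fi
    if [ "$#" -gt 20 ]; then
        echo "You are matching more than 20 files" >&2
    fi
    confirm_all=no
    for f in "$@"; do
        if [ "$confirm_all" = yes ]; then
            rm -f -- "$f"
            continue
        fi
        printf 'remove %s? (y/n/c) ' "$f"
        read -r answer
        case "$answer" in
            y) rm -f -- "$f" ;;
            c) confirm_all=yes; rm -f -- "$f" ;;  # stop asking from here on
            *) : ;;                               # anything else: skip file
        esac
    done
}
```

Aliased as `alias rm=rm_ii`, it stays out of your way for one file and only nags when a wildcard starts matching more than you might have intended.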
Somebody had the thought that not even root should be able to remove the earth it stands on. See "rm -rf /" protection at "Meddling in the Affairs of Wizards" for the interesting story of how they got that past the standards committee.
First thing is to look at the script or application documentation to see what it really does - that may give you important information about what it really did. Your data could be tucked away safely already.
If the application still has the file open, its data blocks will not be freed until it closes the file. Therefore, a tool like "lsof" can find the inode number of the file. Armed with that, you could use a filesystem debugger like debugfs (for ext2, see link earlier) to set the link count for that file back to 1. Actually, if you know the name of the file already, you can skip lsof and go right to the file system debugger. Your only remaining problem would be to recreate a directory entry for it, which again you can do with a good fsdb. I haven't used debugfs (and of course that assumes an ext fs), but it looks like it has the necessary commands for all that.
Let's illustrate this with a simple example. I'll use "less" to look at a file and in another window I'll find out everything I need to know:
# w
 9:26  up 3:37, 6 users, load averages: 0.46 0.59 0.63
USER     TTY      FROM    LOGIN@   IDLE WHAT
tony     console  -        5:50    3:36 -
tony     s002     -        5:51       - w
tony     s001     -        5:51      57 -bash
tony     s000     -        5:51       - ssh [email protected]
tony     s003     -        8:28       1 less foo.t
tony     s004     -        9:20       5 -bash
I can see that "tony" is using "less" on "foo.t". Let's get more:
# ps -ts003
  PID TTY           TIME CMD
 1320 ttys003    0:00.04 login -pf tony
 1321 ttys003    0:00.03 -bash
 1658 ttys003    0:00.00 less foo.t
# lsof -p 1658
COMMAND  PID USER   FD   TYPE DEVICE   SIZE/OFF     NODE NAME
less    1658 tony  cwd    DIR    1,2       5984   501763 /Users/tony/Downloads
less    1658 tony  txt    REG    1,2     137712 36113911 /usr/bin/more
less    1658 tony  txt    REG    1,2     600576 36113586 /usr/lib/dyld
less    1658 tony  txt    REG    1,2  302137344 60870019 /private/var/db/dyld/dyld_shared_cache_x86_64
less    1658 tony   0u    CHR   16,3    0t33139      711 /dev/ttys003
less    1658 tony   1u    CHR   16,3    0t33139      711 /dev/ttys003
less    1658 tony   2u    CHR   16,3    0t33139      711 /dev/ttys003
less    1658 tony   3r    CHR    2,0        0t0      304 /dev/tty
less    1658 tony   4r    REG    1,2    1719027 62534493 /Users/tony/Downloads/foo.t
So, the full file name is /Users/tony/Downloads/foo.t, open on fd 4, and its inode is 62534493 (the NODE column; 1719027 in the SIZE/OFF column is the file's size). If it was not "less" but rather some application that had removed foo.t but still had it open, that's all we'd need to go after the data blocks.
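On Linux you may not even need a filesystem debugger while the process is still alive: /proc exposes every open file descriptor as a symlink, and `lsof +L1` lists open files whose on-disk link count is zero. A Linux-only sketch, using sleep as a stand-in for the application holding the file open:

```shell
# /proc/<pid>/fd/<n> still reaches a deleted-but-open file's data.
tmpdir=$(mktemp -d)
echo "precious data" > "$tmpdir/foo.t"
sleep 30 < "$tmpdir/foo.t" &       # a process holding the file open on fd 0
pid=$!
rm "$tmpdir/foo.t"                 # directory entry gone, inode still live
cp "/proc/$pid/fd/0" "$tmpdir/recovered"   # copy the data back out
recovered=$(cat "$tmpdir/recovered")
kill $pid 2>/dev/null
```

In the less example above, the equivalent would be copying /proc/1658/fd/4 - but only if that were a Linux box; the transcript shown is from a Mac, which lacks /proc.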
Realize that the blocks might be overwritten by new data. There aren't any guarantees here.
You could also just power-crash the machine. Probably not good advice in general, but fsck should find an unreferenced inode with a non-zero link count and stick it in lost+found. I am NOT recommending this as a good method, but I confess I have done exactly that in past years on Unixes where I didn't have the tools I needed to do anything else.
This isn't really a case where a file has been removed, but rather when your editing changes have been lost. Vi and vim can recover "lost" editing changes most of the time.
Just in case some Windows Vista user stumbled in: Recover Files with Shadow Copies on Any Version of Windows Vista
Windows 7 has real undelete: Recover lost or deleted files.
Also see How to Recover Deleted Files with Free Software, which also covers Apple Macs.
15 Free File Recovery Software Programs is all Windows.
More Articles by Anthony Lawrence © 2013-08-07 Anthony Lawrence