
Recovering Deleted Files on ReiserFS3?

Cliff posted more than 10 years ago | from the who-said-unix-doesn't-need-an-undelete dept.

Data Storage 126

DarkSarin asks: "I have a rather serious problem: I managed to accidentally delete some files (rather important ones at that!) while trying to back them up to CD (I was using GUI burning software that will remain nameless for now). How do you recover accidentally deleted files on ReiserFS? This thread (started by me) indicates that you can't recover them. Note that I had found a way to rebuild the tree, but that didn't work. It seems odd to me that you wouldn't be able to recover accidental deletions, but that really does seem to be the case. Help? Please?"


126 comments


Primary post! (-1, Troll)

Anonymous Coward | more than 10 years ago | (#7605066)

Will of warrior!

Happened to me the other day with ext3 (1)

menders (449045) | more than 10 years ago | (#7605142)

Didn't find a way to recover my files either. :(

Re:Happened to me the other day with ext3 (0)

Anonymous Coward | more than 10 years ago | (#7606213)

mwundel.exe?

Doesn't Windows 3.1 support ReiserFS3?

Re:Happened to me the other day with ext3 (2, Interesting)

deek (22697) | more than 10 years ago | (#7606520)


Depending on what you want to undelete, you can always do a grep -a -100 STRING /dev/DEVICE. That recently came in handy for me when I accidentally deleted a directory containing a script file that I really needed. It took me a whole day to write that script, so I was not eager to rewrite it. I managed to recover the whole thing.
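In the spirit of the grep trick above, here's a minimal Python simulation of searching a raw device image for a known string and grabbing the surrounding bytes. The fake image, the context size, and the script contents are all invented for illustration; on a real system you would open the block device itself (e.g. /dev/DEVICE) read-only.

```python
def carve_around(image: bytes, needle: bytes, context: int = 64) -> bytes:
    """Return `context` bytes on either side of the first match,
    roughly what `grep -a -100 STRING /dev/DEVICE` shows as context."""
    pos = image.find(needle)
    if pos == -1:
        raise ValueError("string not found in image")
    start = max(0, pos - context)
    return image[start:pos + len(needle) + context]

# Fake "raw device": junk bytes with a deleted shell script still present.
image = b"\x00" * 500 + b"#!/bin/sh\necho precious script\n" + b"\xff" * 500
recovered = carve_around(image, b"precious script")
print(recovered)
```

On a real partition you would widen the context until you've captured the whole file, then trim the junk by hand.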

Solution (2, Informative)

scheme (19778) | more than 10 years ago | (#7606579)

Since ext3 is just ext2 with added features, you can undelete the file the way you would on ext2. There's actually an undelete howto for ext2. The basic gist is that you immediately unmount the partition and remount it read-only. Then you grab a list of recently deleted blocks and use that to reconstruct the file. I've done it once or twice, but I've been fortunate enough to have a tape backup system that has spared me the need for a while now.
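The final step of that howto, rebuilding a file from the list of blocks its deleted inode pointed at, can be sketched like this. The block size, block numbers, and image are all made up for the demo; on a real ext2 partition you'd get the block list from a tool like debugfs and read the blocks from the raw device.

```python
BLOCK_SIZE = 4  # toy block size; ext2 typically uses 1024, 2048, or 4096

def rebuild(image: bytes, blocks: list[int]) -> bytes:
    """Concatenate the given data blocks, in order, from a raw image."""
    return b"".join(
        image[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE] for b in blocks
    )

# File data scattered across blocks 1, 3, and 5 of a tiny fake partition.
image = b"XXXXhellXXXXo woXXXXrld!"
print(rebuild(image, [1, 3, 5]))
```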

Re:Solution (0, Offtopic)

ihtagik (318795) | more than 10 years ago | (#7606693)

Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute...
How about sitting with a pretty girl while putting your hand on a hot stove?

(Yeah, I know, Offtopic etc,etc...)

Re:Solution (0)

Anonymous Coward | more than 10 years ago | (#7607284)

You couldn't have done that on ext3. The undelete utilities do not work on ext3, as the filesystem zeros out indirect blocks - something which doesn't happen on ext2 making it possible to undelete the files.

Re:Solution (1)

kormoc (122955) | more than 10 years ago | (#7607960)

Really? I seem to recall that ext3 zeros out the inodes when they are deleted, which makes it rather hard to recover anything unless you want to rebuild the inodes by hand. I've tried it, and so has cyranovr -- see here [slashdot.org] and here [slashdot.org].

Please respond and tell us what we did wrong.

Good luck... (4, Interesting)

SanityInAnarchy (655584) | more than 10 years ago | (#7605172)

No filesystem has ever (AFAIK) implemented a trash / recycle bin itself -- not on Windows or OS X, and not on any UNIX that I know of. The trash is a feature of the GUI, not the filesystem.

The reason for this is that a recycle bin exists to save you from accidental deletions. If you delete a file from the nice, big, friendly GUI, it usually asks you at least once whether you want to delete it, then moves it to the trash instead. When it's time to empty the trash, it asks again to make sure you're not screwing yourself over.

However, many programs create temporary files and then promptly delete them -- so many times that it would be ridiculously inefficient (both in space and fragmentation levels) to put them into the trash. Furthermore, can you imagine looking for your files in the middle of all sorts of files with names like 11025u012348512i51253.tmp?

As someone said on the other forum, there's the hard way -- grep for it on the raw partition. This may not even work with ReiserFS, I'm not sure. The usual way to protect yourself from this is to back up in the first place (yeah, I know) and to only run programs you trust as a user that can delete files that you need.

I would suggest that you try the grep method, and if that doesn't work, learn from it. The safest way to do this is (ironically) the command line. If you type "cp", you know for sure it will copy the file. If you type "mkisofs" or something similar, it is very unlikely to delete the files. These tools (along with mv, which does delete the old copy, but only after the new one succeeds) have been around for so long, and are so simple, that the only way to screw this up is through a very stupid mistake (like typing rm instead of cp) or by using an experimental filesystem -- which, despite the opinion at Gentoo, ReiserFS is not.

Re:Good luck... (4, Interesting)

GigsVT (208848) | more than 10 years ago | (#7605266)

On the other hand, an operating system is considered inefficient these days if it doesn't use 100% of the RAM for something or another.

What use is "empty" disk space? The OS might as well use it for something, as long as it can ditch things that aren't important if there is a demand for space. As for your temp file issue, it's easy enough to just make /tmp and /var/tmp (and any others) a different file system that doesn't act this way.

Modern file systems don't need to have a limited number of inodes. Even ext3 by default creates way too many inodes on large file systems, if you are going to be storing files of any significant size.*

I think it's high time for filesystem reform, and it doesn't need anything revolutionary like databased buzzword filled paradigm shifting crap. It's just logical evolutionary improvements.

*And it wastes 5% of the space by default! That's 100 GB on a 2TB fs completely wasted! Always use -m0 on storage fs's or -m1 on system fs with mke2fs. Use -T largefile4 to make one inode per 4MB, which is fine for storing "large" files. Otherwise the fs takes hours to create all those damn unnecessary inodes on a large fs.
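A quick sanity check of the numbers in that footnote, treating 2TB as binary tebibytes (which puts the reserved space slightly over the quoted 100 GB):

```python
# Verify the footnote's arithmetic: mke2fs reserves 5% of blocks by
# default (-m 5), and -T largefile4 allocates one inode per 4 MiB.
TB = 1024**4
fs_size = 2 * TB

reserved = fs_size * 0.05                      # default 5% reserved blocks
inodes_largefile4 = fs_size // (4 * 1024**2)   # one inode per 4 MiB

print(reserved / 1024**3, "GiB reserved")      # ~100 GB, as claimed
print(inodes_largefile4, "inodes with -T largefile4")
```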

Re:Good luck... (2, Insightful)

Gilk180 (513755) | more than 10 years ago | (#7605435)

The problems with this are in efficiency. Leaving the files in place would create fragmentation problems. Moving them to another part of the disk would result in a lot of unnecessary disk activity.

Periodic backups are a much better answer.

Schemes like this would also require the fs to delete old files when the space is needed, but that is essentially what is done now: the data is still there until the space is used by something else (and even after that, for all you super security freaks). Granted, the choice of what to delete could be made so as to use the least recently deleted space first, but that would again cause efficiency problems.

Re:Good luck... (1)

Zorton (2520) | more than 10 years ago | (#7605760)

Doesn't the move command simply change the inode? I've noticed that many trash-bin type arrangements simply change the inode (or whatever). Doesn't this get by the frag problem?

Zorton

Re:Good luck... (1)

warrax_666 (144623) | more than 10 years ago | (#7607584)

No, because there is less (contiguous) free space to put new files in.

Re:Good luck... (1)

Gilk180 (513755) | more than 10 years ago | (#7607869)

Yes, but it leaves them in the same portion of the disk, causing fragmentation problems.

Re:Good luck... (1)

91degrees (207121) | more than 10 years ago | (#7612050)

Periodic backups are a much better answer.

Except this is what caused the problem in the first place.

Re:Good luck... (1)

GigsVT (208848) | more than 10 years ago | (#7613599)

I think a balance can be found.

Until someone tries it -- someone who is a real file system whiz, not some hack -- we won't really know just how reasonable it is.

That's the way computer science works: 1000 people say you can't do it, then one person does it, and it works well, and suddenly everyone changes their opinion.

How many people, for example, really expected SGI to clean up XFS enough to merge into the official kernel?

I think it can be done. I can contemplate an algorithm that balances delete-recovery priority policies with performance and still comes out on top, with little speed degradation under most scenarios and tolerable degradation in the worst case.

Between this imaginary file system I described and RAID, backups wouldn't be such a big deal.

A lot of people are backing up to hard disk anyway these days. What's the difference whether you are backing up to the same volume or a different one? You are just shifting where you need the space.

You'd still need some backups in case the filesystem fails or something, but it'd eliminate needing to use the backups very much.

I imagine something like rdiff-backup locally. You want to look at the filesystem the way it was 10 minutes ago? No problem. If you are going to take it this far, you might as well store deltas to updated files also.

It would have the advantage of being able to back up these snapshots too, rather than getting the nasty smearshot that you get when you back up a live filesystem these days.

I wish I knew more about filesystems... I might tackle this one day.

Re:Good luck... (2, Informative)

xenocide2 (231786) | more than 10 years ago | (#7605635)

Actually, rotational storage is very different from standard memory. It's considered inefficient not to use as much RAM as possible because using one page is as useful as the next; there's a uniform cost across all areas of RAM. In contrast, one prefers linear writes on a disk because it improves throughput. Each page on disk is not identical in usage cost. What we're paying here is the opportunity cost of using a specific page on disk.

On the other hand, I agree that a marked for deletion queue makes a great use of "extra" disk space on a desktop system. But this use should not be forced on a filesystem that's used in a wide variety of different situations. Ideally, this idea can be done on top of the file system.

Re:Good luck... (1, Informative)

Anonymous Coward | more than 10 years ago | (#7606440)

And it wastes 5% of the space by default! That's 100 GB on a 2TB fs completely wasted! Always use -m0 on storage fs's or -m1 on system fs with mke2fs.

tune2fs can fix that after creating the filesystem. But it's not wasted space, it's just reserved for root (or another user ID, if you change it - useful as a cheap quota system).

ReiserFS v3 and v4 are pretty good with space efficiency. No space is reserved for inodes, and tail-packing means very little space is wasted storing the last block of a file.

Re:Good luck... (1)

shaitand (626655) | more than 10 years ago | (#7611089)

Windows never uses 100% of RAM!!!! It's generally using more like 15-50% of physical ram and triple whatever physical amount that is in swap. Wonder why that is...

Re:Good luck... (1)

GigsVT (208848) | more than 10 years ago | (#7613418)

Well, it uses a good chunk of RAM for cache. Windows VM never was highly acclaimed or anything.

Re:Good luck... (1, Insightful)

Anonymous Coward | more than 10 years ago | (#7605342)

A lot of higher-end storage appliances and some filesystems support file versioning. Even NTFS has streams, which can be used to version a given file.

As to files being created and destroyed frequently, this is why we partition into at least /, swap, /var, /tmp, and /usr.

Obviously /var and /tmp would not be places to version files.

You could consider the use of versioning in a place like /home.

NetWare has... (1, Informative)

Anonymous Coward | more than 10 years ago | (#7605783)


A filesystem has never (AFAIK) implemented a trash / recycle bin folder -- not on Windows or OS X, and not on any UNIX that I know of.

NetWare has had a very sophisticated file undeletion capability since time immemorial.

If Novell ports it to SuSE, you Linux clowns might just find yourselves in possession of a mission-critical operating system after all [not that you deserve it].

Re:NetWare has... tsarkon agrees with use of clown (-1, Troll)

Anonymous Coward | more than 10 years ago | (#7606336)

Oh please!! Don't expect the Linux clowns to know the awesome power of Netware (or a real integrated directory since 90% of these Clowns are secretly Windows admins).

Man, Netware does rule when you think about it. I've used it for some time. In fact, up until very recently, I think I still had a client with 3.12.

I like Solaris, Netware, FreeBSD, and other real OEs, and these clowns are constantly screwing things up. XFS gets the sideline for garbage like Reiser and ext3. No kernel debugger. The list I could rattle off is infinite. Don't even mention that maintainers never acknowledge or care about bug reports on the LKML or in bugzilla.

The clowns most certainly do not deserve anything good merged into Lin-sux.

Re:NetWare has... tsarkon agrees with use of clown (1)

decepty (662114) | more than 10 years ago | (#7606930)

Not like Novell just bought SuSE or anything...

Re:NetWare has... tsarkon agrees with use of clown (0)

Anonymous Coward | more than 10 years ago | (#7607104)

A real company bought a fake one to pay lip service to Linux and make themselves ripe for IBM to buy.

What's worth more: SuSE's shit hack of Lin-sux, or NDS? I know what I think is better.

SuSE should be dead.

Re:Good luck... (1)

Bombcar (16057) | more than 10 years ago | (#7605850)

As I posted below, there is one way I know of to make a recycle bin system-wide...

[drum roll, please]

LVM!

It will keep a frozen-in-time snapshot of the drive at a given time until it runs out of COW (copy-on-write) space. The space dedicated to snapshots is not seen by the filesystem, and when the filesystem is changed after a snapshot, LVM copies the modified data away to the dedicated snapshot area. (I guess you could call the snapshot reserve a "Secret Cow Level". :)
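The copy-on-write behavior described above can be sketched as a toy model: blocks are copied into the snapshot area only when the live volume overwrites them, so a snapshot read returns old data for changed blocks and live data for everything else. This is purely illustrative, not LVM's actual on-disk format.

```python
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []        # each snapshot: dict of block index -> old data

    def snapshot(self):
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, idx, data):
        # Copy-on-write: preserve the original block in every snapshot
        # that hasn't already captured it, then overwrite the live block.
        for snap in self.snapshots:
            if idx not in snap:
                snap[idx] = self.blocks[idx]
        self.blocks[idx] = data

    def read_snapshot(self, snap_id, idx):
        # Old data if the block changed since the snapshot, else live data.
        return self.snapshots[snap_id].get(idx, self.blocks[idx])

vol = Volume([b"aaaa", b"bbbb"])
s = vol.snapshot()
vol.write(0, b"XXXX")                        # "accidental" overwrite
print(vol.read_snapshot(s, 0))               # the snapshot still sees old data
```

Multiple concurrent snapshots fall out naturally: each one only pays for the blocks that changed after it was taken.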

You can run multiple snapshots at the same time and get something that is almost a file rollback system.

But as we all know, nothing is a substitute for real offsite backup. As they say, if it isn't backed up, it doesn't really exist.

Re:Good luck... (4, Informative)

SteveOU (541402) | more than 10 years ago | (#7606316)

I will point out that the filesystems included in Novell's Netware product did include a deletion-recovery tool, accessible via the salvage command. My understanding was that Netware would not permanently delete a file until that disk space was needed for active data or until a timeout period expired.

Damned handy tool, too. We had IBM's TSM for our major backup operations, but for those "oops" moments, salvage was sure handy. I hope that the new Novell might consider implementing those features on existing Linux filesystems, or at least contribute native Linux implementations of their filesystems.

Re:Good luck... (1)

larry bagina (561269) | more than 10 years ago | (#7606633)

However, many programs create temporary files and then promptly delete them -- so many times that it would be ridiculously inefficient (both in space and fragmentation levels) to put them into the trash.

Actually, there is an OS that lets you specify that a file is temporary, though I can't remember offhand which one (VMS? NT? OS X? dragonfly BSD?). Or maybe I'm thinking of SQL - for small temporary tables, you can often have them stored in memory.

Anyhow, you could add an fcntl flag to indicate that a file is temporary, in which case unlink would work as normal; otherwise unlink would copy the file to your trash folder. It's fairly trivial to have a preloaded user library override a system call, but it could be done in the kernel too.
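The policy half of that idea (trash unless marked temporary) is easy to sketch at user level. The names and trash layout below are invented for the demo; a real implementation would hook the actual unlink() call via LD_PRELOAD or in the kernel, as the comment describes.

```python
import os
import shutil
import tempfile

TRASH = tempfile.mkdtemp(prefix="trash-")    # stand-in for a trash folder

def unlink(path: str, temporary: bool = False) -> None:
    """Delete temp files outright; move everything else to the trash."""
    if temporary:
        os.remove(path)                      # temp files bypass the trash
    else:
        shutil.move(path, os.path.join(TRASH, os.path.basename(path)))

# "Accidentally" delete a file, then find it safe in the trash.
fd, victim = tempfile.mkstemp()
os.close(fd)
unlink(victim)
print(os.listdir(TRASH))
```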

Re:Good luck... (1)

hummassa (157160) | more than 10 years ago | (#7608334)

Maybe opening the files with tmpfile() or something like it?

Re:Good luck... (1)

LWATCDR (28044) | more than 10 years ago | (#7608565)

Actually, I think DOS had an open-temp-file function. Yes, it could be written. A commit/rollback function for changes to a file would be nice as well. Yes, I know you can create a temp file and then copy or rename it, but that does not take any metadata with the file.

Re:Good luck... (2, Informative)

isj (453011) | more than 10 years ago | (#7607547)

A filesystem has never (AFAIK) implemented a trash / recycle bin folder -- not on Windows or OS X, and not on any UNIX that I know of.

Actually, OS/2 implemented it. It could be enabled/disabled per drive, the size of the trashcan could be configured, and it worked even for temporary files made by programs. And yes, it was somewhat slow.

Some file systems do have a trash can actually. (2, Informative)

TheSimkin (639033) | more than 10 years ago | (#7611773)

Just FYI: Netware's file system does have a trashcan built in and will keep the files that you've deleted -- even multiple versions of them -- until there is no room on the storage device, at which point it starts actually deleting the oldest deleted files. It's quite useful! You can also disable the function globally or just on a directory/tree. It has been doing this since version 2 for sure, possibly even before that.

Stop!!! (5, Interesting)

zulux (112259) | more than 10 years ago | (#7605178)

Take the filesystem offline. NOW!

Then use dd to copy the partition to another partition/disk, and mess with the copy.

A lot of filesystems do a good job of keeping files and their blocks in order. I've had luck on *BSD filesystems by grepping for something at the beginning of the file and grabbing a big chunk of data afterwards. This works great for MS Office documents, JPEGs, or anything that begins with a known preamble.

This may not work for your filesystem.
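The preamble trick above can be sketched as a tiny file carver: scan the raw image for a known magic number (JPEG's FF D8 FF start-of-image marker here) and grab a chunk starting at each hit. The chunk size and fake image are illustrative; real carvers also hunt for end-of-file markers rather than grabbing a fixed amount.

```python
JPEG_SOI = b"\xff\xd8\xff"   # JPEG start-of-image magic bytes

def carve(image: bytes, magic: bytes, size: int):
    """Yield a `size`-byte chunk starting at each occurrence of `magic`."""
    pos = image.find(magic)
    while pos != -1:
        yield image[pos:pos + size]
        pos = image.find(magic, pos + 1)

# Fake raw partition with one "deleted" JPEG buried in the junk.
image = b"\x00" * 32 + JPEG_SOI + b"fake jpeg body" + b"\x00" * 32
chunks = list(carve(image, JPEG_SOI, 17))
print(chunks)
```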

Re:Stop!!! (1)

ralphclark (11346) | more than 10 years ago | (#7613295)

If it's shortish text files you're looking for, ReiserFS isn't so amenable to this type of treatment, because it doesn't just store file contents in nice neat blocks like older file systems do. It stuffs shorter files into the spaces between bigger chunks of data. I think it even stores some small files in structures usually used for filesystem metadata.

The Coroners Toolkit (1, Interesting)

hookedup (630460) | more than 10 years ago | (#7605223)


This may help..

TCT is a collection of programs by Dan Farmer and Wietse Venema for post-mortem analysis of a UNIX system after a break-in. The software was first presented in a Computer Forensics Analysis class in August 1999. Examples of using TCT can also be found online in a series of columns in Dr. Dobb's Journal. Notable TCT components are the grave-robber tool that captures information, the ils and mactime tools that display access patterns of files dead or alive, the unrm and lazarus tools that recover deleted files, and the findkey tool that recovers cryptographic keys from a running process or from files.

Site here [porcupine.org]

Re:The Coroners Toolkit (3, Informative)

menscher (597856) | more than 10 years ago | (#7606009)

Nice job karma-whoring, but TCT does NOT work with journaled filesystems.

More questions... (2, Insightful)

ComputerSlicer23 (516509) | more than 10 years ago | (#7605227)

First off, I have several questions. Do you have an original copy of the partition from before you started running recovery tools (after you deleted the files, but before you created new ones)? If not, make an image immediately; you want the most original image you can get.

Second, how much data are you looking for, and how large is the partition? (How large is the needle, and how large is the haystack?) What type of data is it? A Word document? A text file? A GIF? A JPEG? Some HTML? A PDF? The smaller the file, the more likely it is that if it got overwritten, it all got overwritten -- but also the more likely you are to recover all of it. If it was a very large file, you may be able to recover pieces of it, but not all of it.

Now, it's my understanding that you can recover anything written to a hard drive, even if it has been overwritten several times. However, it's very, very expensive to do so, so the question becomes how much money it's worth to you. The guys at ReiserFS probably have the best shot at helping you; they probably don't want to, however.

The more you know about the order of the files in the directory, how the files were constructed, and the order in which files were put on disk, the better. You can then make better-educated guesses about the sequence in which the pages were allocated, and so where to look for the file. Is there anything on the drive you'd be worried about posting? Can you post an image of the drive?

I'm not an expert in this area, but I've seen people recover mail spools at an ISP using dd. People leave ISPs over losing all their mail, so they worked really hard at it (that was an ext2 filesystem, however).

Kirby

Re:More questions... (1)

jnana (519059) | more than 10 years ago | (#7605467)

Now, it's my understanding that you can recover anything written to a harddrive, even if you have overwritten it several times.

If a certain sequence of bits on the disk was originally 1011010010001011101001, and it got overwritten with 0110101101010010101111, how -- barring psychics, voodoo, and fairy dust -- can the original be recovered? Simpler case: a certain bit used to be 1, it was overwritten a few times. How do I know what it was (let's say non-journaled filesystem) before being overwritten?

Maybe you meant 'partially overwritten but with a reasonable number of bits still the same'?

Re:More questions... (2, Informative)

zulux (112259) | more than 10 years ago | (#7605543)

If a certain sequence of bits on the disk was originally 1011010010001011101001, and it got overwritten with 0110101101010010101111, how -- barring psychics, voodoo, and fairy dust -- can the original be recovered?

By reading the slop in between tracks. To a scanning electron microscope, the writes look more like layers, with little bits of data poking out from the edges.

Think of paint layers: at the edge, you can sometimes pick out the previous colors and the order in which they were painted.

Of course, this isn't for mere mortals. People like the CIA get to play with this stuff.

Re:More questions... (1, Funny)

Anonymous Coward | more than 10 years ago | (#7605975)

huh, so maybe the HD manufacturers can start advertising 800 GB drives **

** as long as you have access to the CIA tech to read the old bits

Re:More questions... (2, Informative)

Gordonjcp (186804) | more than 10 years ago | (#7607602)

People like the CIA get to play with this stuff.


Except that they don't. It's entirely a myth that the CIA can read multiply-overwritten data from hard disks. The idea that the tracks look like layers doesn't hold up - you'd have to use less and less write density every time. It doesn't happen that way.


Now, what you can do - and what does work - is look at the analogue signal from the head and see what the variance from an "average" one or zero is. So, if the head returns a 4mV pulse for a one, on average, then it's likely that a 4.1mV pulse used to be one last time, and a 3.9mV pulse used to be a zero. This is a gross oversimplification, but you get the idea. It works, but not very well.

Re:More questions... (0)

Anonymous Coward | more than 10 years ago | (#7608012)

Actually the parent is correct - except the width doesn't change. The head position is slightly different each time, so it's like your paintbrush is aligned differently.

Re:More questions... (1)

Gordonjcp (186804) | more than 10 years ago | (#7608576)

It's still not going to leave a readable track after, though. The tracks written are very much wider than the gap between the tracks, so head positioning must be very accurate (hence the servo tracks on one platter).

Re:More questions... (3, Informative)

Nucleon500 (628631) | more than 10 years ago | (#7607078)

It has to do with the analog nature of the storage. If you had 0, 1, 1, 0, and you overwrote that with zeros, you'd then have 0, .1, .1, 0. Chances are that the drive itself (without at least modified firmware) can't tell the difference, but a data recovery lab can. You can actually still read data after it's been overwritten between 5 and 20 times: each time, subtract the obvious and multiply the residue.
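That "subtract the obvious and multiply the residue" step can be modeled numerically. The 0.1 coupling factor is invented for illustration and real signals are far noisier, but it shows the arithmetic behind the 0, .1, .1, 0 example above:

```python
COUPLING = 0.1   # assumed leftover fraction of the previous bit per read

def read_head(previous, current):
    """Simulated analog read: the current bit plus residue of the old bit."""
    return [c + COUPLING * p for p, c in zip(previous, current)]

def recover_previous(signal, current):
    """Subtract the known current bits, then rescale what's left."""
    return [round((s - c) / COUPLING) for s, c in zip(signal, current)]

old_bits = [0, 1, 1, 0]
new_bits = [0, 0, 0, 0]                  # overwritten with zeros
signal = read_head(old_bits, new_bits)   # the 0, .1, .1, 0 from the comment
print(signal)
print(recover_previous(signal, new_bits))
```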

Re:More questions... (1)

jnana (519059) | more than 10 years ago | (#7607138)

thanks, that makes sense. I wasn't thinking that the value could be somewhere between zero and one. I always wondered why 'srm' overwrites so many times, and now I understand.

Name the program please (2, Insightful)

damu (575189) | more than 10 years ago | (#7605245)

If you had this problem, then I or anyone else could have it too, so please let us know what program you are talking about. Was it user error? Was it a bug? Is the bug being worked on?

Re:Name the program please (1)

GigsVT (208848) | more than 10 years ago | (#7605302)

Backup software that even has the ability to delete files from the filesystem sounds like a major design flaw to me.

Reminds me of LoneTar, which helpfully tells you that /dev/null is an insecure backup device and asks if you want to secure it. Who wouldn't want to secure it? Anyway, it chmods it to 000, which breaks everything under the sun, usually in mysterious ways.

The reason is that /dev/null is usually set up as a dummy tape device in LoneTar, and it doesn't know it's not a real tape device, which you usually wouldn't want left at 666.

Re:Name the program please (2, Informative)

martinde (137088) | more than 10 years ago | (#7605503)

> If you had this problem then I or anyone will have this problem too, so please let us know what program you are talking about.
> Was a user error? Was it a bug? Is the bug being worked on?

I'm not the poster, so I don't know the answer to your question, but I will say I've accidentally done this in K3b. I had files highlighted in the list of files to burn, AND there were files highlighted in the tree view of my filesystem. I hit the delete key thinking it would remove the ones from the list of files to burn; nope, it deleted them from the filesystem!

Re:Name the program please (0)

Anonymous Coward | more than 10 years ago | (#7605835)

That should have moved them to the trash can no? Shift-Delete should have moved them to the shit-can. Or Oh Shit!-can.

Re:Name the program please (3, Informative)

DarkSarin (651985) | more than 10 years ago | (#7607116)

The program, which I now feel safe in naming, was CDBakeOven 2.0 (yeah, I know, beta software and all that -- it still shouldn't EVER do this!)

To the user who gave instructions on how to use --rebuild-tree: those are about the same steps I used (same -S option), to no avail.

So, the end result is -- thanks, but so far the best advice still seems to be to pay the $25 to the folks who made the fs. I may yet do that. In the meantime, I am using my sorry WinXP install....

blech

Re:Name the program please (1)

hummassa (157160) | more than 10 years ago | (#7608376)

OK, now, isn't CDBakeOven a KDE app? did it not use KDE's trashcan to delete your files? check it out, maybe you are sweating for nothing....

Re:Name the program please (1)

DarkSarin (651985) | more than 10 years ago | (#7611882)

Nope--I checked. I will double check, but I wasn't running KDE at the time--maybe it did anyway. I will check again.

Suddenly... (5, Insightful)

Anonvmous Coward (589068) | more than 10 years ago | (#7605283)

...every Windows user looks at that Recycle Bin shortcut on their desktop and smiles.

(No, that's not really a troll. Human error happens.)

Re:Suddenly... (1)

Josh Booth (588074) | more than 10 years ago | (#7605524)

Would it require a kernel patch to create a "Recycle Bin" of sorts? Maybe a piece of code could check for the UID or GID of the file being deleted and decide whether to back it up, based on a table of UID/GID's. It would probably be useful now that Linux systems are becoming more desktop and end-user oriented.

As an amusing anecdote, I was once writing a rudimentary file manager when I accidentally deleted all my source! After locking down my filesystem and learning how to undelete files, I realized that I had already used the recursive file-copying code I had just written and tested to copy the entire source tree.

Re:Suddenly... (1)

Phork (74706) | more than 10 years ago | (#7605805)

No, it doesn't require a kernel patch at all. In fact, there are already implementations of a trash can for Linux (though I don't remember the URL offhand). They work by using LD_PRELOAD to replace the unlink call, thereby causing all apps to use the trash can.

Re:Suddenly... (1)

Bombcar (16057) | more than 10 years ago | (#7605870)

I'm going to explode, but yes, there is a kernel patch that has something similar to the recycle bin functionality: LVM.

Google for LVM snapshots; if the snapshot frequency is high enough, you'll only lose a little time's worth of ze data.

Re:Suddenly... (1)

decepty (662114) | more than 10 years ago | (#7606963)

...or you could script it. Anything on God's green Earth can be created with a shell script. I'm serious.

Re:Suddenly... (1)

Emil Brink (69213) | more than 10 years ago | (#7608346)

Not true. If a C program deletes a file by calling the unlink() standard function, your shell script can't do much to prevent it. I'm guessing you were thinking about scripting a replacement for the rm command, or something, but that doesn't catch all file deletions. I'm not sure replacing unlink() through a preload does either, but it at least sounds a lot better.

Re:Suddenly... (2, Insightful)

zulux (112259) | more than 10 years ago | (#7605578)

...every Windows user looks at that Recycle Bin shortcut on their desktop and smiles.

The recycle bin only works if it's a well-behaved GUI app.

Do this...

START->RUN->COMMAND and hit enter.

type in

DEL c:\*.* and hit enter.

If you're asked any questions - say 'yes'

Now.... Try to find your files in the "Recycle Bin."

Re:Suddenly... (1)

eht (8912) | more than 10 years ago | (#7605748)

There are apps out there that protect files deleted at command line or even from network shares, Norton has one, I imagine there are others.

Re:Suddenly... (2, Insightful)

GreyWolf3000 (468618) | more than 10 years ago | (#7605860)

Nevertheless, that is not functionality present with the Recycle Bin. Desktops such as Gnome and KDE both have their own implementations that work about as well. Norton's software is data recovery. Apples and oranges, buddy.

Re:Suddenly... (1)

slittle (4150) | more than 10 years ago | (#7607298)

Gnome and KDE don't protect you from shell deletions, only what you delete via their API.

Norton Utilities includes an extension to the command line so things deleted there (or anywhere via whatever non-Recycle Bin API they use) will also go to the Recycle Bin.

OTOH, it fills the Bin up pretty quickly, since lots of apps create and delete many temporary files, and you normally only want the things you've interactively deleted.

Re:Suddenly... (2, Insightful)

OrangeTide (124937) | more than 10 years ago | (#7607220)

Suddenly Bill Gates erases your minds so you forget that Windows stole the idea from MacOS (which stole the idea originally and made it mainstream:)

Re:Suddenly... (1)

Hanji (626246) | more than 10 years ago | (#7605690)

I don't know what exactly happened to the poster, but as someone else pointed out, recycle bin is only helpful in nice well-behaved apps. For example, I once had my text editor spaz out, lock up, and trash the source for something I was working on ... recycle bin ain't going to help there.

(Luckily, I had the editor set to be making backups, which were OK)

Well...to be fair. (1, Informative)

Anonymous Coward | more than 10 years ago | (#7605800)

KDE has a trash bin too.

But in the context menu it asks you if you want to delete or move to trash. Not the same thing! In DOS, delete (del) usually just writes 0xE5 (which shows up as a lowercase sigma, IIRC) over the first character of the file name, marking the space as free to be reused.

Right now, his enemy is the "relatively" obscure file system, and how much writing he's done to the hard drive since the "incident".

Re:Suddenly... (0)

Anonymous Coward | more than 10 years ago | (#7606187)

Actually yes, it is a troll. Coming from a long line of trolls, in your case. Fucking twat.

Re:You again! (1)

Lord Bitman (95493) | more than 10 years ago | (#7607233)

Because of windows recycle bin, I never hit "delete" without holding "shift". Recycle bin? There is no recycle bin!

Re:You again! (1)

FreeForm Response (218015) | more than 10 years ago | (#7607497)

You know you can turn that off, right?

Uncheck the "Display delete confirmation dialog" option in the Recycle Bin properties page.
First thing I do on a new Windows install... followed by deleting all the worthless crap on the FS that Windows thinks I need ("Online Services" and such).

Re:You again! (1)

Lord Bitman (95493) | more than 10 years ago | (#7607593)

I like the confirmation dialog, just not the recycle bin.

Re:You again! (1)

Grotus (137676) | more than 10 years ago | (#7609901)

Then in that same properties page check the "Do not move files to the Recycle Bin." option (and the "Use one setting for all drives" if desired).

Re:You again! (1)

TiggsPanther (611974) | more than 10 years ago | (#7607532)

More often than not, I do the same.

Ok, I've been caught out a few times...
1) Shift
2) Delete
3) Notice WHICH file got deleted...
4) Panic/swear

Tiggs

Alias 'rm' for console work (1)

neglige (641101) | more than 10 years ago | (#7607615)

Simply alias rm to move the files to a recycle-bin folder somewhere in your home dir. If you want it to be fancy, add a timestamp to the filename so the same original filename won't get overwritten. That several GUIs have a recycle bin has already been mentioned.

To clear the trash, you have to use 'rm' unaliased. Normally, you can't do such a thing by accident :) While this method is neither foolproof nor perfect, it should at least help prevent future accidents.
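A minimal sketch of that idea, written as a shell function rather than a plain alias (an alias can't easily rewrite its arguments); the function name and trash location are just examples:

```shell
# Sketch: a "del" function that moves files into ~/.trash with a timestamp
# suffix instead of unlinking them, so an accidental "del" is reversible.
# Name collisions within the same second are not handled -- it's a sketch.
del() {
    mkdir -p "$HOME/.trash"
    for f in "$@"; do
        mv -- "$f" "$HOME/.trash/$(basename "$f").$(date +%Y%m%d%H%M%S)"
    done
}
```

Drop it in ~/.bashrc; emptying the trash is then an explicit, unaliased rm on ~/.trash, which is hard to type by accident.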

Try this (5, Informative)

Anonymous Coward | more than 10 years ago | (#7605286)

I really don't understand how this was done. Nonetheless, you CAN recover from this. Here's a little tutorial I found. I highly suggest doing the backup first!!!:

If you're really really desperate, you can do what I did a few weeks ago. In my case, fsck didn't recover the partition either; indeed, it crashed. So here's what I did, from the beginning of what I think fixed it:

1) reiserfsck --rebuild-tree
2) mount
3) reiserfsck -S
4) debugreiserfs to get metadata for Vitaly
5) mount
6) mount again

I'm not sure why this happened, but after the second mount, the partition was not recognizable as ReiserFS anymore. I suspect it had to do with a few really huge files that were originally on the partition that reiserfsck -S tried to recover. In doing so it probably hosed lots of stuff. Now, it was as simple as

7) reiserfsck --rebuild-tree

And I had most of my data linked under lost+found! Took me a few hours to sort through it all but I got back most of what I cared about. Maybe if you use the new pre8 fsck you won't need to jump through these hoops. Since the potential for data destruction is high here, I wouldn't blame you for not trying. And yes, this all happened by trial and error :-)

This might help too:
http://marc.theaimsgroup.com/?l=reiserfs&m=104861318421306&w=2 [theaimsgroup.com] .

Good luck!

Re:Try this (1)

Rysc (136391) | more than 10 years ago | (#7605839)

What kind of person uses a mouse to copy and paste from Emacs?

Ever heard of macros? xclip?

Re:Try this (1)

DarkSarin (651985) | more than 10 years ago | (#7611912)

I used this command:
reiserfsck --rebuild-tree -S -l rebuild.log /dev/yourdevice

after taking the disk offline (umount /dev/hdb6 as su(do))

I then (after waiting a long time for it to finish) checked the lost & found dir, and got nothing useful (although it did pick up some stray music files that I don't know where they came from!).

Re:Try this (0)

Anonymous Coward | more than 10 years ago | (#7613259)

although it did pick up some stray music files that I don't know where they came from!

Or at least that's what you told the judge, right?

Ask Namesys (4, Informative)

cornice (9801) | more than 10 years ago | (#7605289)

Pay Namesys $25. They wrote ReiserFS so they should know. You'll be getting really great support and helping those who wrote your file system. Look here:

http://www.namesys.com/support.html

likely response: (1)

exhilaration (587191) | more than 10 years ago | (#7606390)

"Nope, you're outta luck. Thanks for the cash!"

There was a post about this on Gentoo (0)

Anonymous Coward | more than 10 years ago | (#7605694)

http://thread.gmane.org/gmane.linux.gentoo.user/55649 deals with ReiserFS specifically.

The way i did it (3, Informative)

jjshoe (410772) | more than 10 years ago | (#7605719)

I managed to re-create the reiser file system three times over 20 years of digital photos, with no backup of course. I was able to replay the journal, recovering all but the most recent photos.


I believe I used the --rebuild-tree option. You should follow the steps in the manpage under Example.


So, in short: man reiserfsck before asking Slashdot :)

Re:The way i did it (1)

Bombcar (16057) | more than 10 years ago | (#7605883)

Uh, you have a backup now, right?

And how the *heck* did you get 20 years of digital photos? I assume some were scanned......

From History [about.com] :

The first digital cameras for the consumer-level market that worked with a home computer via a serial cable were the Apple QuickTake 100 camera (February 17, 1994), the Kodak DC40 camera (March 28, 1995), the Casio QV-11 (with LCD monitor, late 1995), and Sony's Cyber-Shot Digital Still Camera (1996).

Re:The way i did it (1)

jjshoe (410772) | more than 10 years ago | (#7606500)

Backup now? No. You'd think I would have learned... Yes, quite a few photos were scanned that I no longer have the originals of, due to a fire unfortunately. I realize now the original poster said the rebuild-tree option didn't work; in my case I honestly re-made the reiser file system three times over the existing reiser file system and it replayed all but the most recently copied images.

Re:The way i did it (1, Flamebait)

Dr. Photo (640363) | more than 10 years ago | (#7606672)

Backup now? no. You'd think i would have learned...

*Smack!*

"I don't make backups" is computerese for "I have no important data which I can't either reconstruct or re-download."

If you can't make that claim, then it's time to reexamine your backup policy.

Re:The way i did it (1)

jjshoe (410772) | more than 10 years ago | (#7609539)

Oh I agree; I was able to re-construct them, so ;)

Re:The way i did it (1)

Suppafly (179830) | more than 10 years ago | (#7606508)

The first digital cameras for the consumer-level market that worked with a home computer via a serial cable were the Apple QuickTake 100 camera (February 17, 1994), the Kodak DC40 camera (March 28, 1995), the Casio QV-11 (with LCD monitor, late 1995), and Sony's Cyber-Shot Digital Still Camera (1996).

The keyword there is consumer-level.. don't make assumptions.

Backup? (1)

geggibus (316979) | more than 10 years ago | (#7605803)

That should teach you!

Real users never make backups!

-K

Re:Backup?? (1)

Lord Bitman (95493) | more than 10 years ago | (#7607228)

I coulda sworn that real users run an FTP server and know lots of people..

Let me be Mr. Barn-door-closer.... (2, Insightful)

Bombcar (16057) | more than 10 years ago | (#7605804)

But it can be helpful in the future to dedicate, say, 10% of your drive to an LVM snapshot [sistina.com] space....

I haven't done this yet (I'm lucky! I have a real tape drive [inostor.com] to back up my stuff) but I plan to make my system take a snapshot every hour and every day (a total of two), so that at most I lose an hour's worth of work.
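Something like these /etc/crontab entries could drive it. This is only a sketch: the volume group "vg0", the LV "home", and the snapshot sizes are made-up placeholders that would need tuning for a real setup.

```shell
# Hypothetical /etc/crontab lines: rotate an hourly and a daily LVM snapshot.
# lvremove drops the previous snapshot; lvcreate -s takes a fresh one.
0 * * * *   root  lvremove -f /dev/vg0/home-hourly 2>/dev/null; lvcreate -s -L 2G -n home-hourly /dev/vg0/home
30 3 * * *  root  lvremove -f /dev/vg0/home-daily  2>/dev/null; lvcreate -s -L 4G -n home-daily  /dev/vg0/home
```

Restoring a file is then just mounting the snapshot LV read-only and copying it back.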

Also, I've always wondered if it was possible to make an operating system that would take as long to destroy something as it did to create it. For example, your term paper took ten days to write, so the rm termpaper.tex command would take ten days to run :)

Re:Let me be Mr. Barn-door-closer.... (1)

arth1 (260657) | more than 10 years ago | (#7606142)

Also, I've always wondered if it was possible to make an operating system that would take as long to destroy something as it did to create it. For example, your term paper took ten days to write, so the rm termpaper.tex command would take ten days to run :)


Good old UFS is close. A recurring job we run at work creates around 50,000+ new files and directories, does a quick rename, and then deletes 50,000+ old files and directories. This takes a looong time. The funny thing is, the delete process takes *longer* than creating the files, probably because UFS does a commit and sync after every delete, while it buffers up all the writes.

Regards,
--
*Art

deleted filesystem, NTFS, ext2/3... (1)

1eyedhive (664431) | more than 10 years ago | (#7608031)

I had to use data recovery software a few years back after I accidentally started a NTFS format on a drive I was using as a temporary storage dump while the main drives were being upgraded... got back only 70% after a 3% format :( took hours too.

Best advice here is to keep active backups (tape/CD is good for archival). If the files are small (docs/text/logs/source code), HD space is dirt cheap: get another drive (or partition), mount it as something like /mnt/snafu, and set up rsync/cron/whatever to copy the files from the open (shared) partition to the snafu drive. Make that drive writable only by root, so it can only be screwed up by root and not by your backup op or your regular user. I'm sure there are more advanced ways to do it, like on-the-fly replication, but set a cron script to run every 5 minutes and, unless you go straight into doing CD backups within 5 minutes of copying data to the drive, you're OK.

I have a shitload of files; most of them, size-wise, are .avi, .mpg, and .mp3, and the rest are programs, docs, etc., all of which easily fit elsewhere (the actual "mission critical" files can fit on a single CD, and therefore anywhere on my LAN). Even with RAID there's no excuse for skipping a good backup: RAID handles mechanical failures, but an errant rm -rf veryimportantdir/veryimportantfile is hard to recover from. If you're doing backups to media, MAKE SURE they're duped on the HD first, in case something like this happens to you; a quick tar -czvf stufftobackup.tgz [your files here] before you run your backup will save your ass. Better yet, make the tar, then just back that up; don't screw around with the originals where possible.
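The tar-first flow at the end might look like this sketch, where SRC and DEST are placeholders for your data and the scratch/snafu partition:

```shell
# Sketch: stage a timestamped tarball on the HD before burning anything,
# so a botched CD run can't take the only copy with it.
SRC="$HOME/Documents"          # placeholder: the files to protect
DEST="$HOME/snafu"             # placeholder: the scratch/backup partition
mkdir -p "$SRC" "$DEST"
STAMP=$(date +%Y%m%d-%H%M%S)
tar -czf "$DEST/backup-$STAMP.tgz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls "$DEST"
```

Then burn the .tgz sitting in $DEST and leave the originals alone.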

Re:deleted filesystem, NTFS, ext2/3... (0)

Anonymous Coward | more than 10 years ago | (#7611421)

Dude, he was in the middle of the CD backup when the files got deleted ;-p

-- vranash

Re:deleted filesystem, NTFS, ext2/3... (1)

epsalon (518482) | more than 10 years ago | (#7612105)

I second this. See my journal entry [slashdot.org] about a crash that could have been a disaster.

Stop mucking around (3, Interesting)

You're All Wrong (573825) | more than 10 years ago | (#7608047)

"I was using a GUI burning software that will remain nameless for now"

_Either_
- you fucked up, in which case be a man and admit it's your fault, _or_
- the software fucked up, in which case let others know what it was and how it fucked up so that they can avoid risking the same bug.

YAW.

Tough shit (0)

Anonymous Coward | more than 10 years ago | (#7608052)

well, shit happens. move on and learn from it not to use fancy clickety-clack guis on your important data. while you're at it, man (cp|mv|rm|tar). Ok, maybe not rm.

AC, sitting on several hundred gigs of "important" data (come on, you know what I mean) without a backup for years.

Wow this is so easy I'm surprised no one got it (4, Insightful)

anthony_dipierro (543308) | more than 10 years ago | (#7608506)

How do you recover accidentally deleted files in Reiserfs?

It's really easy. You just restore from backup.

Script for emergency file recovery (1)

wmshub (25291) | more than 10 years ago | (#7611337)

This little snippet of shell has saved me from disaster a couple times. Let's say you just deleted "foo.c", and you need it back! Right away! If you know that the code will contain the text "while (mungeCount < superMungeCount) {", then you can try:

tr </dev/hda '\n' '~' | tr '\0-\37\200-\377' '\n' | grep "while (mungeCount < superMungeCount) {" | tr '~' '\n' >foo-recovered.c

This does have its problems. If the file spanned multiple blocks it may not get all of it, but you'd be surprised how often it does get all of it. You might get multiple versions of the file concatenated together, and you'll have to text-edit to pull out the one you want. Also, of course, it only works on text files. But the two times that I needed this, it worked, so it might help you out too. Note that you should run it immediately after deleting the file: every disk access means that much more chance that the block will be recycled. :-(

Re:Script for emergency file recovery (1)

DarkSarin (651985) | more than 10 years ago | (#7612107)

Yeah, that's great, but these were not really text files: one was an OpenOffice presentation (.sxi) and the other very important one was an OO.org text document (.sxw). I also lost numerous .pdf files. These were all for a paper and a presentation due about 30 minutes after the actual deletion. I was VERY fortunate that I had already printed the paper. Nothing to be done about the presentation.

Like I said, these were important files.

Re:Script for emergency file recovery (1)

epsalon (518482) | more than 10 years ago | (#7612263)

OO files are actually ZIP files with XML data inside. Try searching for "mimetypeapplication/vnd.sun.xml.writerPK" to find OOWriter files, and similar strings for the presentation format. It's best to create a test file of each with the same version of OO and pass it through "strings".

In any case, DO NOT run anything from the relevant filesystem and especially DO NOT MOUNT the filesystem rw.
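A sketch of that kind of search against a raw image, using GNU grep's -a/-b/-o flags to print byte offsets of the ZIP local-header magic that every OO document starts with. The image below is a tiny fabricated stand-in; real recovery would scan a dd copy of the partition, never the live disk.

```shell
# Sketch: locate candidate OpenOffice documents in a raw image by the
# PK\x03\x04 ZIP signature every OO file begins with. IMG is a fake
# demo image here; replace it with a dd image of the affected partition.
IMG=$(mktemp)
printf 'garbagePK\x03\x04mimetypeapplication/vnd.sun.xml.writer' > "$IMG"
grep -abo $'PK\x03\x04' "$IMG"    # one "offset:match" line per hit
```

Each reported offset is a candidate start of a deleted document that a carving tool (or dd with skip=) can then try to extract.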

Odd??? (1)

FreeLinux (555387) | more than 10 years ago | (#7611925)

It seems odd to me that you wouldn't be able to recover accidental deletions

Why would this seem odd? None of the most widely used file systems allow for undelete. If you think the recycle bin is undelete, try del *.* and then see what you can recover. The only one that really supports undelete, and does it really well, is NetWare's Salvage utility.

There are kludgy solutions for FAT and NTFS, but there really isn't a true deleted-file recovery system in any of the mainstream file systems. That includes ext2/3, Reiser, and more.

If your lost files were text, then strings and grep can probably get back a fair bit of your data, but it won't be an undeleted file. If the files weren't text, then they are gone. Grieve and move on, because unless you were storing the numbers to your Swiss bank accounts in those files, you'll see that their loss isn't really the end of the world.

Remember undelete in DOS? (1)

91degrees (207121) | more than 10 years ago | (#7612020)

Or diskdoctor on the Amiga? It's a shame that modern file systems don't seem to be able to do something that was fairly universal 15 years ago.