
Linux Gains Lossless File System

CmdrTaco posted more than 8 years ago | from the i-was-sick-of-my-old-lossyfs-anyway dept.

Data Storage

Anonymous Coward writes "An R&D affiliate of the world's largest telephone company has achieved a stable release of a new Linux file system said to improve reliability over conventional Linux file systems, and offer performance advantages over Solaris's UFS file system. NILFS 1.0 (new implementation of a log-structured file system) is available now from NTT Labs (Nippon Telegraph and Telephone's Cyber Space Laboratories)."

Bloat? (3, Insightful)

shadowknot (853491) | more than 8 years ago | (#13712789)

Please correct me if I'm wrong here, but wouldn't a log that is only appended to and never overwritten cause a massive amount of bloat after a period of prolonged use?

Re:Bloat? (0)

Skye16 (685048) | more than 8 years ago | (#13712859)

I wouldn't know, but I would assume the "log" they were referring to was more along the lines of http://en.wikipedia.org/wiki/Logarithm [wikipedia.org] than http://en.wikipedia.org/wiki/Data_logging [wikipedia.org] .

Re:Bloat? (4, Funny)

Skye16 (685048) | more than 8 years ago | (#13712892)

Scratch my previous post, I actually read the article. My bad :)

Re:Bloat? (3, Interesting)

Ayanami Rei (621112) | more than 8 years ago | (#13712876)

There is a "cleaner" that is on the TODO list.
Of course, you can delete files and re-use the space. But performance slows down greatly once you start filling in "holes" left in the log after wrapping to the end of the allocated area. (A situation similar to a database, where you might want to compact, vacuum, condense, etc. a table.)
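
For anyone wondering what such a "cleaner" amounts to, here is a minimal, purely illustrative Python sketch (my own toy model, not NILFS code): an append-only log where deletions just leave dead records behind, plus a compaction pass that copies only the live records into a fresh log, reclaiming the holes.

    # Toy model of log compaction (a "cleaner"); illustrative only, not NILFS.
    log = []                 # each entry: (filename, data) -- appended, never edited in place
    deleted = set()          # filenames whose old records are now "holes" in the log

    def write(name, data):
        log.append((name, data))
        deleted.discard(name)

    def delete(name):
        deleted.add(name)    # the old records stay in the log as wasted space

    def compact():
        """Copy live records into a new log, dropping the holes (like VACUUM on a table)."""
        live = {}
        for name, data in log:          # the last record for a name wins
            if name not in deleted:
                live[name] = data
        return [(n, d) for n, d in live.items()]

    write("a.txt", "v1"); write("a.txt", "v2"); write("b.txt", "x")
    delete("b.txt")
    print(len(log), "records before,", len(compact()), "after compaction")   # 3 before, 1 after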

Re:Bloat? (1)

Qzukk (229616) | more than 8 years ago | (#13713114)

hm, might be interesting (read exciting but not necessarily useful) to implement this in some sort of really huge circular buffer, where old files are simply overwritten when their time is up.
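
That idea maps neatly onto a ring buffer. A rough sketch (assuming nothing about NILFS itself): Python's collections.deque with a maxlen behaves exactly like that kind of circular log, silently dropping the oldest entries once the capacity is reached.

    from collections import deque

    # Fixed-capacity "log": once full, each append silently evicts the oldest record.
    CAPACITY = 4                       # tiny capacity, assumed just for the demo
    log = deque(maxlen=CAPACITY)

    for i in range(7):
        log.append(("file%d" % i, "data%d" % i))

    # Only the 4 most recent records survive; file0..file2 have been overwritten.
    for name, data in log:
        print(name, data)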

Data is the new currency my friend (5, Interesting)

Work Account (900793) | more than 8 years ago | (#13712925)

Aside from gasoline and water, data is the most valuable thing in the world.

Walmart's most prized possession is their billion-billion-billion transaction customer sales database. They use it to find, among other things, that men tend to buy beer and diapers at the same time.

With disks costing $1.00/GB or less these days, many people including myself simply DON'T delete data anymore. I keep all my original digital photos (in .tiff format) along with full-quality movies and all the games I've ever played back to Duke Nukem on 80x386 on a RAID array that's grown to nearly 2 terabytes.

So yes, for many people, disk space is just something you keep adding to. It's like moving from a coupe to a sedan when you have kids; when you have that 6th kid you move to a minivan, and if you happen to have 2 more, you get a cargo van when #8 comes along :) Same for HDDs.

Re:Bloat? (5, Informative)

ivan256 (17499) | more than 8 years ago | (#13712999)

I wrote an (unfortunately closed source) filesystem that was remarkably similar to this once. Generally these types of filesystems are used when you're constantly writing new data. You're going to be eating the space anyway, but you want the reliability of synchronous writes with the performance of asynchronous cached writes. Reading from these filesystems is incredibly slow in comparison.

The version I wrote took advantage of the client's bursty IO pattern and used the slow periods to offload the data to an ext2 filesystem on a separate disk. Hopefully your system memory was large enough that the offload to the secondary filesystem happened without any disk reads. Once that was done, the older sections of log could be re-used.... But only once the disk filled up and wrapped back to the beginning, because you want to keep your writes (essentially... there are other timing tricks you can play to get more speed) sequential.

There's been lots of research done on this method of write structuring. Look for papers on the "TRAIL [sunysb.edu] " project (also closed source), for example.
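
As a rough illustration of that write path (synchronous, sequential appends up front; lazy offload to a read-optimized area during quiet periods), here is a small Python sketch. It is a simplification built on my own assumptions, not the poster's filesystem or TRAIL.

    import os, tempfile

    workdir = tempfile.mkdtemp()
    log_path = os.path.join(workdir, "log")      # the fast, append-only area
    store = os.path.join(workdir, "store")       # stand-in for the ext2-style "final" area
    os.mkdir(store)

    def log_append(name, data):
        """Fast path: one sequential, synchronous append -- durable when this returns."""
        with open(log_path, "a") as f:
            f.write("%s\t%s\n" % (name, data))   # assumes data contains no tabs/newlines
            f.flush()
            os.fsync(f.fileno())                 # the expensive-but-safe part

    def offload():
        """Idle-time path: replay the log into per-file storage laid out for reading."""
        if not os.path.exists(log_path):
            return
        with open(log_path) as f:
            for line in f:
                name, data = line.rstrip("\n").split("\t", 1)
                with open(os.path.join(store, name), "w") as out:
                    out.write(data)              # later records overwrite earlier ones
        os.remove(log_path)                      # the log space can now be reused

    log_append("a.txt", "hello")
    log_append("a.txt", "hello world")
    offload()
    print(open(os.path.join(store, "a.txt")).read())   # -> "hello world"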

Re:Bloat? (2, Informative)

b100dian (771163) | more than 8 years ago | (#13713010)

Actually this is a journal filesystem, as opposed to a journaled one. That is, each file is a journal.

Re:Bloat? (1)

NitsujTPU (19263) | more than 8 years ago | (#13713290)

Actually.

Log file systems are faster, safer, and just better. Period.

Different FS (1)

phorm (591458) | more than 8 years ago | (#13713344)

I usually put /var/log in a separate partition anyhow. The easy solution would be to put this FS just on the areas that you want to be "lossless" and leave the rest with standard filesystems.

New Improved? (4, Insightful)

TheRaven64 (641858) | more than 8 years ago | (#13712792)

The article was a bit light on details. Perhaps someone could enlighten me as to exactly why this is better than existing log-structured filesystems, such as NetBSD's LFS.

Re:New Improved? (2, Informative)

cowens (30752) | more than 8 years ago | (#13712940)

On its project page LFS is listed as a related project.

Re:New Improved? (2, Funny)

EvilTwinSkippy (112490) | more than 8 years ago | (#13713050)

The article was a bit light on details. Perhaps someone could enlighten me as to exactly why this is better than existing log-structured filesystems, such as NetBSD's LFS.

Log structures are susceptible to termites, carpenter ants, and various forms of rot.

Re:New Improved? (0)

Anonymous Coward | more than 8 years ago | (#13713305)

Log structures are susceptible to termites, carpenter ants, and various forms of rot.

And BSDs are susceptible to maggots, worms, and various forms of rot as well.

(sorry, couldn't resist the obvious joke)

Re:New Improved? (1)

sbryant (93075) | more than 8 years ago | (#13713335)

The larger log structures don't cooperate with the flush procedure; leaving things unflushed is just asking for problems - you're going to get an overflow sooner or later.

-- Steve

Re:New Improved? (5, Funny)

rpresser (610529) | more than 8 years ago | (#13713339)

Log structures are susceptible to termites, carpenter ants, and various forms of rot.

Even worse, when many logs are added together, the problems multiply.

Re:New Improved? (4, Informative)

Feyr (449684) | more than 8 years ago | (#13713299)

the why is dependent on your application.

for common servers, or day-to-day use, it isn't.

but notice how this was developed by a telecom company? a log structured filesystem is perfect or even required, due to speed and integrity constraints (depending on the size of the network), when you're dealing with billing and monitoring data on a telecom network. you want something that's simple and extremely resistant to failures. a complete system crash (which never happens, short of nuking the box) should not result in any data loss, or at most an extreme minimum, and you should be able to recreate that data from somewhere else (eg, the other endpoint in a telephone network).

a log structured filesystem allows this: the "head" is never over previous data in normal operation. you don't typically read the data back until the end of a cycle (whatever that cycle may be) or in a debugging condition. you simply append to the end, minimizing head movement and thus increasing mtbf (replacing a disk in those things is costly).

this is also extremely useful for logging to WORM media (write once, read many), mostly for security logs. you don't want a hacker to be able to remove them, no matter what they do.

Horrible headline (4, Insightful)

Quasar1999 (520073) | more than 8 years ago | (#13712796)

A lossless file system? Good lord... I most certainly hope all the existing file systems out there are not lossy. I have hundreds of gigabytes of data that I don't want to lose.

Or is this filesystem somehow able to recover data once the hard drive crashes? That would be neat...

Re:Horrible headline (0)

Anonymous Coward | more than 8 years ago | (#13712887)

Aw, c'mon, you can always find more pr0n.....

Re:Horrible headline (5, Informative)

TheRaven64 (641858) | more than 8 years ago | (#13712976)

The title was written by a numpty. This is a log-structured filesystem. These systems have been around for ages. NetBSD has LFS (originally from 4.4BSD), and I believe Minix also had some form of log-structured filesystem.

A log-structured filesystem doesn't modify existing files. Every time you write to the disk, you simply append some deltas. This gives very good write performance, but poor read performance (since almost all files will be fragmented, and the entire log for that file must be replayed to determine the current state of the file). To help alleviate this, most undergo a vacuuming process[1], whereby the log is replayed, and a set of contiguous files is written. This also frees space - something that is not normally done since deleting a file is done simply by writing something at the end of the log saying it was deleted. In addition to the good write performance, log-structured filesystems also have an intrinsic undo facility - you can always revert to an earlier disk state, up until the last time the drive was vacuumed.

The snapshot facility is not particularly impressive. It's a feature intrinsic to log-structured filesystems, and also available in other filesystems (such as UFS2 on FreeBSD and XFS on Linux). The performance advantage claims must be taken with a grain of salt - write performance for log-structured filesystems is always close to the theoretical maximum of the disk, but this is at the expense of some disk space, and read speed (although LFS did beat UFS in several tests on NetBSD).

[1] This is usually done in the background when there is little or no disk activity.
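
To make the append/replay/rollback behaviour concrete, here is a toy in-memory sketch (my own simplification, not how NILFS or LFS lay anything out on disk): every operation is appended to a log, reading means replaying the log, and "reverting" is just replaying a prefix of it.

    # Toy log-structured store: the log is the only authority; nothing is edited in place.
    log = []   # entries: ("write", name, data) or ("delete", name)

    def write(name, data):
        log.append(("write", name, data))

    def delete(name):
        log.append(("delete", name))

    def replay(upto=None):
        """Rebuild the namespace by replaying the first `upto` log entries (None = all)."""
        files = {}
        for op in log[:upto]:
            if op[0] == "write":
                files[op[1]] = op[2]
            else:
                files.pop(op[1], None)
        return files

    write("report.txt", "draft 1")
    write("report.txt", "draft 2")
    delete("report.txt")

    print(replay())        # {} -- current state: the file is gone
    print(replay(upto=2))  # {'report.txt': 'draft 2'} -- a "snapshot" from before the delete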

Re:Horrible headline (5, Insightful)

addaon (41825) | more than 8 years ago | (#13713230)

It should be said that "good write performance, bad read performance" is essentially the point, not a defect. It's easy these days to speed up reads a huge amount through caching; 100MB+ of UBC isn't rare. But when you have to write, you have to write (for reliability reasons); this can't be cached into memory, so it should be optimized for. The goal here is to make BOTH operations as fast as possible, though one is made fast at the disk layer and one is made fast above it.

Don't worry (1)

Work Account (900793) | more than 8 years ago | (#13712990)

I'm sure CmdrTaco will provide a better headline when he posts the dupe later tonight [-;

Re:Horrible headline (1)

HermanAB (661181) | more than 8 years ago | (#13713042)

It depends on your reference. It is lossless compared to FAT and NTFS. In that context, it is a huge breakthrough...

Re:Horrible headline (1)

Mark Gillespie (866733) | more than 8 years ago | (#13713155)

I can understand that comment with regard to FAT. But please tell us why NTFS is lossy? It's a journalling filesystem, as is EXT3, and supports all the features that EXT3 does (and provides more). Technically you could argue that because NTFS5 supports encryption, you could encrypt data and forget the password, but that's the only thing I can see that makes it any less lossy than EXT3. Or perhaps you're just a *nix fanboy that likes to bash Microsoft...

I GOT A GREASED UP YODA DOLL SHOVED UP MY ASS! (-1)

Anonymous Coward | more than 8 years ago | (#13712797)

GO LINUX!

So... (5, Funny)

Juiblex (561985) | more than 8 years ago | (#13712803)

If it is lossless, I won't be able to store MPEG, XVid, JPEG and MP3 on it anymore? :(

Re:So... (2, Funny)

valeriyk (914993) | more than 8 years ago | (#13712832)

No, but you can use the soon to be released MILF 1.0 file system for your jpg and mpg needs.

Re:So... (3, Funny)

BottleCup (691335) | more than 8 years ago | (#13713041)

No, but you can use the soon to be released MILF 1.0 file system for your jpg and mpg needs.

Now that's one filesystem I would like to fsck upon every boot(y) ;)

Re:So... (1)

parasonic (699907) | more than 8 years ago | (#13712833)

Definitely not, but look at the bright side...you'll begin to admire FLAC once you're forced to use it :)

Re:So... (1, Funny)

Iriel (810009) | more than 8 years ago | (#13712917)

I wouldn't have space for that on a Linux box after my (practically monthly) regular download of $distro['foo']!

That's why I have Windows, because I can afford to lose what's on it ^_^
</tongue-in-cheek>

Old news (5, Funny)

Anonymous Coward | more than 8 years ago | (#13712804)

Websites with MILFS have been around for years.

Oh, wait. NILFS. My bad.

Re:Old news (1)

schon (31600) | more than 8 years ago | (#13712845)

Oh, wait. NILFS. My bad.

OK.. so we all know MILFs are "Mothers I'd like to"...

What would NILFs be?

Norwegians?

Re:Old news (4, Funny)

SatanicPuppy (611928) | more than 8 years ago | (#13712881)

Well, we know it doesn't stand for nerds...

(Sorry...Couldn't resist)

Re:Old news (0)

Anonymous Coward | more than 8 years ago | (#13712916)

No. NERDS.
 
/me runs away *fast*.

Re:Old news (0)

Anonymous Coward | more than 8 years ago | (#13713093)

Nerds can be of both sexes! NILF! remember willow? /* me runs!

Re:Old news (3, Funny)

schon (31600) | more than 8 years ago | (#13713157)

Willow wasn't a nerd, she was a geek (geeks have social skills.)

I can think of at least one Norwegian-ILF (Kristanna Loken.)

Re:Old news (1)

Feyr (449684) | more than 8 years ago | (#13713338)

you need to see more norwegians if you can only think of one

Willow!? (1)

Malawar (674186) | more than 8 years ago | (#13713247)

"Madmordigan!!!!!"

I suggest N = "Nobody" (1)

StressGuy (472374) | more than 8 years ago | (#13712937)

Otherwise, an ACLU lawyer might start to salivate...

Re:Old news (0)

Anonymous Coward | more than 8 years ago | (#13713132)

Yeah but these are bursty NILFS.

N is for... (1)

JemalCole (222845) | more than 8 years ago | (#13713300)

Nannies.

And yes, they've been around for years.

Database Servers (4, Insightful)

mysqlrocks (783488) | more than 8 years ago | (#13712819)

Log-structured filesystems write down all data in a continuous log-like format that is only appended to, never overwritten. The approach is said to reduce seek times, as well as minimizing the kind of data loss that occurs with conventional Linux filesystems.

This sounds a lot like how database servers work. They keep both a log file and a database file. The log file is continuously written to and is only truncated when backups occur.
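
For comparison, here is a minimal sketch of that database-style write-ahead pattern in Python (the generic WAL idea, not any particular database's on-disk format): the change is appended to the log and fsync'd before the main file is touched, and the log is truncated at a checkpoint, e.g. after a backup.

    import os, tempfile

    d = tempfile.mkdtemp()
    wal = os.path.join(d, "wal.log")       # write-ahead log
    main = os.path.join(d, "table.dat")    # main data file

    def apply_change(record):
        # 1. Log first, synchronously -- this is what makes the change recoverable.
        with open(wal, "a") as f:
            f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. Only then touch the main data file (this part may be cached/asynchronous).
        with open(main, "a") as f:
            f.write(record + "\n")

    def checkpoint():
        # Once the main file is known to be safely on disk (e.g. after a backup),
        # the log can be truncated.
        with open(main, "a") as f:
            f.flush()
            os.fsync(f.fileno())
        open(wal, "w").close()

    apply_change("INSERT 1"); apply_change("INSERT 2")
    checkpoint()
    print(os.path.getsize(wal))   # 0 -- the log is empty again after the checkpoint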

Privacy (1)

lilmouse (310335) | more than 8 years ago | (#13712956)

Has anyone considered the privacy implications of this yet?

Not sure I like logs listing that 3 years ago, I had a file named bad_kiddie_pr0n.jpeg (or whatever) on my computer.

They'd better have a good cleanup script!

--LWM

Re:Privacy (0)

Anonymous Coward | more than 8 years ago | (#13713111)

dude this is not for you! I assume reading from this FS will be slow though, as the point is greater reliability.

The dreaded question (3, Funny)

digitalgimpus (468277) | more than 8 years ago | (#13712825)

Will there be a Windows Driver?

If there isn't, this has no chance of taking off. Consumers today want portability. They don't like lock-in. A Linux-exclusive format is lock-in.

Create a good Windows (and Mac OS) driver, and it's got massive potential.

Re:The dreaded question (3, Insightful)

pesc (147035) | more than 8 years ago | (#13712895)

Consumers today want portability. They don't like lock-in.
That's unfortunately not true, which is proved by all the people using NTFS (or Office).

Re:The dreaded question (3, Funny)

reynaert (264437) | more than 8 years ago | (#13712915)

Will there be a Windows Driver? If there isn't, this has no chance of taking off.

Yes, that's why I only use FAT filesystems on my Linux server.

Re:The dreaded question (2, Informative)

thc69 (98798) | more than 8 years ago | (#13713260)

Will there be a Windows Driver? If there isn't, this has no chance of taking off.

Yes, that's why I only use FAT filesystems on my Linux server.
You're probably joking, but fyi... There's at least one driver for mounting ext2 fs in windows: ext2fsd. If you don't need to mount it, explore2fs works well too.

Re:The dreaded question (1)

Soup50 (541788) | more than 8 years ago | (#13713005)

Will there be a Windows Driver? If there isn't, this has no chance of taking off.
Yeah, that's why there is no real market for ext2/ext3 either. Because Windows can't read it...

Re:The dreaded question (1)

smeager (792621) | more than 8 years ago | (#13713204)

Actually there are many drivers out there that will let you read from ext2/ext3 filesystems on Windows. I use Ext2fsd to read my Linux partitions on my Windows machine (dual boot) and it works great. (It would be nice, though, if there were a cross-platform FS (other than FAT) that would allow rw without any hiccups. The newest version of Ext2fsd, 0.25, states it has read/write capabilities but I have yet to try it.)

Re:The dreaded question (0)

Anonymous Coward | more than 8 years ago | (#13713240)

SHHHH...don't tell that to my computer. It's currently running WinXP and reading my ext3 partition.

If it hears you saying that's not possible it might get pissed.

erm.. lossless file system? (0)

Anonymous Coward | more than 8 years ago | (#13712828)

As opposed to LOSSY filesystems? WTF?!?

Robust, maybe? As if that's a new thing? You mean.. Journalled? No, that's not new, either.

Hooray for excellent journalism as ever from slashdot!

Needs a new name (1, Funny)

winkydink (650484) | more than 8 years ago | (#13712830)

NILFS is too close to MILFs

Re:Needs a new name (1)

Speare (84249) | more than 8 years ago | (#13712963)

Score:5, Insightful? We need a new meta-moderation scheme, where we can moderate the moderations as Funny.

Re:Needs a new name (0)

Anonymous Coward | more than 8 years ago | (#13713017)

It's marketing, man... marketing.

Re:Needs a new name (4, Funny)

chochos (700687) | more than 8 years ago | (#13713019)

NILF: Netserver I'd Like to fsck (but I don't need to anymore, apparently)

Re:Needs a new name (1)

Scaba (183684) | more than 8 years ago | (#13713024)

Too close? Not close enough!

Re:Needs a new name (2, Funny)

EvilTwinSkippy (112490) | more than 8 years ago | (#13713087)

You know, F'ed is the last state I want to see my file system in.

Stable? (5, Informative)

theJML (911853) | more than 8 years ago | (#13712839)

I like how they say it's reached a stable release, but if you look at the known bugs on the project home page http://www.nilfs.org/ [nilfs.org] you'll see that:

The system might hang under heavy load.

The system hangs on a disk full condition.

Aren't those kind of important when calling something stable?

NILFs really turn me on! (1, Redundant)

McGregorMortis (536146) | more than 8 years ago | (#13712849)

Or am I thinking of something else? I'm not sure now. I better go check. No interruptions for 10 minutes, please.

Here's an overview for lazy people like me (5, Informative)

Work Account (900793) | more than 8 years ago | (#13712850)

NILFS is a log-structured file system developed for the Linux kernel 2.6. NILFS is an abbreviation of the New Implementation of a Log-structured File System. A log-structured file system has the characteristic that all file system data including metadata is written in a log-like format. Data is never overwritten, only appended in this file system. This greatly improves performance because there is little overhead regarding disk seeks. NILFS also has the following specific features:

        * Slick snapshots.
        * B-tree based file and inode management.
        * Immediate recovery after system crash.
        * 64-bit data structures; support many files, large files and disks.
        * Loadable kernel module; no recompilation of the kernel is required.
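
The "immediate recovery after system crash" point is essentially that on mount you only scan forward from the last known-good position until the records stop making sense, with no full-disk fsck. Here is a tiny Python sketch of that idea using my own simplified record format (checksum-prefixed lines); it is not the actual NILFS on-disk layout.

    import zlib

    def make_record(data):
        # "<crc32 of data> <data>\n" -- an assumed toy format, not NILFS's real one.
        return "%08x %s\n" % (zlib.crc32(data.encode()) & 0xFFFFFFFF, data)

    def recover(log_text):
        """Keep only the prefix of records that are complete and pass their checksum."""
        good = []
        for line in log_text.split("\n"):
            if not line:
                break
            try:
                crc, payload = line.split(" ", 1)
            except ValueError:
                break                    # torn record with no payload: stop here
            if int(crc, 16) != (zlib.crc32(payload.encode()) & 0xFFFFFFFF):
                break                    # garbage after the crash point: stop here
            good.append(payload)
        return good

    log = make_record("write a.txt") + make_record("write b.txt") + "deadbeef"  # torn tail
    print(recover(log))   # ['write a.txt', 'write b.txt'] -- recovery stops at the torn record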

Lossless??? (1)

matr0x_x (919985) | more than 8 years ago | (#13712857)

I don't find the article explains the whole "lossless" concept very well. Is this lossless as in "my gmail emails are lossless" or is this lossless as in "my Escalade is lossless (because when it gets stolen or lost I call onstar and they find it for me)"

NTFS (2, Interesting)

JustASlashDotGuy (905444) | more than 8 years ago | (#13712872)

" When the system reboots, the journal notes that the write did not complete, and any partial data writes are lost. "

Isn't this similar to NTFS's journaling file system?

Bundling (2, Interesting)

superpulpsicle (533373) | more than 8 years ago | (#13712883)

If they are serious about a filesystem, it has to be bundled with the Linux distros every release. Take Reiser and JFS for example: some distros have them, some don't. Not every release of the same distro has them; what a mess. Only two have stayed permanent: EXT2 and EXT3. Everything else is trendy.

Re:Bundling (1)

m50d (797211) | more than 8 years ago | (#13713108)

I've never used anything but reiserfs. If a distro won't support it, I won't use that distro, simple as that. It's a really nice filesystem.

Shutdown versus power off (2, Funny)

Matt Perry (793115) | more than 8 years ago | (#13712886)

More file integrity is always good. Ever since journaling file systems became available I just started turning the power off to my computers (via a power strip) rather than going through the shutdown command. It never made sense to me that we'd have to "shut down" as opposed to just turning the thing off.

Re:Shutdown versus power off (2, Informative)

Kiashien (914194) | more than 8 years ago | (#13712988)

...that's mean to the hard drive. That can cause serious damage- and not because data hasn't been written yet...

When you cut the power to a HD, the head stops wherever it is, sometimes even settling on the platter itself. This causes severe wear and tear, and sometimes damage. It's one of the many reasons why people are told that always leaving your computer on is less damaging than turning it off.

The difference between cutting power and a proper shut down, is that during a proper shut down, the OS ensures A) everything is written (which you may not care about) and B) that the drive heads are in a locked, safe(ish) position.

You may not care about A, but you really should care about B.

Re:Shutdown versus power off (2, Informative)

dlamming (152302) | more than 8 years ago | (#13713103)

Don't be ridiculous. Every drive in the last 5 years, maybe the last 10, is able to park the head safely even in the event of your pulling the plug on the drive itself. It's got some springy/inductory dealie that pulls the head to a safe landing area.

That doesn't mean that you won't lose data that hasn't been written yet, of course. :)

Re:Shutdown versus power off (1)

Kiashien (914194) | more than 8 years ago | (#13713161)

Eh, who said he had a new hard drive? I still use a 40 gig from 1998.

Haven't had a reason to throw it out. It's much simpler to simply be careful, and be aware of what could happen, as there's no way to simply "know" that the hard drive can prevent it.

Besides, that particular technology wasn't in every drive (and still may not be). For many years it was mostly included in laptop drives, as they deal with the issue of power loss much more often. It is highly possible it is included in all drives now, but I'd be very surprised if it was in "all" hard drives in the last 5 years. I could believe the last 3. Anything that adds manufacturing cost is rarely included unless specifically needed for some reason, like a high-end drive, or a laptop in this case.

Re:Shutdown versus power off (2, Informative)

morpheus800e (245254) | more than 8 years ago | (#13713315)

Automatic head parking has been around for a LONG time - Here's [custhelp.com] the data sheet from the 120 MEGAbyte drive that I had years ago. It came in my 386. Note the following line, about half-way down:
"Turning the system power off causes the WD Caviar to perform an automatic head park operation."

It wasn't a high-end drive at the time, (just a consumer-level IDE drive), and was utterly obsolete years ago, yet it still had the technology to park the heads out of the way when power is disconnected. There's no way that turning off power to a new drive is going to physically damage it. You just might lose the data on it.

Re:Shutdown versus power off (1)

Rendus (2430) | more than 8 years ago | (#13713130)

Hah. The parent post is brought to you by the letters I, B and M, and the year 1994.

The only major issue with abruptly removing power to the HD or PC is if there were writes in cache waiting to be written to disk.

Re:Shutdown versus power off (4, Interesting)

pesc (147035) | more than 8 years ago | (#13712998)

Ever since journaling file systems became available I just started turning the power off to my computers (via a power strip) rather than going through the shutdown command.

That's a very bad idea. Normally, journaling file systems only guarantee that the file/directory structure remains intact. They do not necessarily guarantee that the data in the files hit the disk. Also, your disk probably has a write cache; whatever is in that cache will also be lost when you remove power.

So your file system may be intact, but your practices will probably destroy data.

Re:Shutdown versus power off (0)

Anonymous Coward | more than 8 years ago | (#13713031)

Jesus wept. It doesn't make sense to use "shutdown"? Hitting the power switch and praying the filesystem is smart enough to pick up the pieces is better practice than using a system command designed to make sure programs close their files cleanly? This may be the most retarded thing I've ever read on slashdot.

Why don't you just use the power switch to get out of every program? It's so laborious to hit ^X or ^C or that [X] on the menu bar, and god forbid someone make me go to File->Close/Exit.

Re:Shutdown versus power off (2, Interesting)

TheRaven64 (641858) | more than 8 years ago | (#13713129)

Wow, just wow. Seriously, I really, really hope you don't have any important data.

Here's a little (simplified) tutorial on what happens when a program writes a file to disk:

  1. It goes into the OS filesystem cache. This is often several hundred MBs. At some point, the OS decides it is not important enough to be kept around. At this point,
  2. it is written to the hard drive. Here, it sits in the hard drive controller's on-board cache. When the cache is full,
  3. it is written to disk.

At any given time, you may have several hundred MBs of data that show up when you browse your filesystem, but have never made it to disk. When you hit the power button, all of this data evaporates as the current fades from your RAM chips (over about 10ns). The only thing journalling filesystems do to improve this is ensure that, when the data is being written to disk, if the system crashes in the middle of the write then the system can be restored to either the beginning or end of the transaction, rather than the middle. A journalled filesystem will (in theory) never be corrupted by the power failing, but that does not mean you will never lose data - you will, and potentially large amounts of it. In short, if you continue with this practice, then you are a numpty.
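
If you do care about a particular file surviving a power cut, the usual answer is to push it through the OS cache explicitly with fsync. A minimal Python sketch, with the caveat that whether the drive's own on-board cache is also flushed depends on the drive and kernel settings, which I am not assuming anything about here:

    import os

    def durable_write(path, data):
        """Write `data` and push it through the OS page cache before returning."""
        with open(path, "w") as f:
            f.write(data)
            f.flush()              # flush Python's userspace buffer
            os.fsync(f.fileno())   # ask the kernel to write the file's pages to the device
        # Note: the drive's write cache may still hold the data; barrier/cache-flush
        # behaviour is drive- and kernel-dependent.

    durable_write("/tmp/important.txt", "do not lose this\n")   # example path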

Re:Shutdown versus power off (0)

Anonymous Coward | more than 8 years ago | (#13713323)

Fuckwit!

actual info about the fs (4, Informative)

cowens (30752) | more than 8 years ago | (#13712900)

Worst. Headline. (0, Flamebait)

NemosomeN (670035) | more than 8 years ago | (#13712909)

Today. (Sorry, slashdot tends to do this too often for me to pull an "evar".)

Lossless? Well, it damned well better be, at least by design. This suggests that other filesystems in Linux are by their very nature lossy. "MS FUD" story in 5... 4...

MILF? (0)

Anonymous Coward | more than 8 years ago | (#13712912)

N.I.L.F. hunter?

Good news (1)

kc01 (772943) | more than 8 years ago | (#13712914)

I'm delighted to hear this. I have found ext3 to be a very fragile filesystem: unexpected reboots or power failures can have a devastating effect.

I've experienced roached ext3 filesystems a number of times, with no hardware failure. Recovering that data by converting it to ext2 helped once, but I'm still trying to recover the data after the last incident. I'm sure (rather, hope!) that these experiences aren't the norm.

Bazaar Taillights. (0)

Anonymous Coward | more than 8 years ago | (#13712918)

"NILFS 1.0 (new implementation of a log-structured file system) is available now from NTT Labs (Nippon Telegraph and Telephone's Cyber Space Laboratories).""

Go, go Cathedral model!

Excellent information retention (4, Funny)

totallygeek (263191) | more than 8 years ago | (#13712919)

I installed this lossless file system. rm is now chmod 444. I have not been able to lose information since.

Note: instead of modding this +1 funny, mod it +0.1 pathetic.

Where the hell did I put that? (0)

Anonymous Coward | more than 8 years ago | (#13712962)

Now if I could just ask the filesystem what I did with papers around my house. Seriously folks, other than hard drive crashes, I find that I lose much more than a filesystem does. And I don't see how this will help with hard drive crashes.

I don't know about anyone else.... (1)

Doctor Memory (6336) | more than 8 years ago | (#13712974)

...but I want a business card that says I work at "Cyber Space Laboratory"!

There's no replacement for ext3fs yet for me... (4, Insightful)

davegaramond (632107) | more than 8 years ago | (#13712977)

I'd looove to replace ext2/3 as my filesystem, and have wanted to for years, since it's not so fast and most distros don't include binary tree indexing for ext3 (so large dirs are slow). Unfortunately I haven't been able to do so. Here are my requirements:

1. Distro support. I don't want to have to compile my own kernel. The FS needs to be supported by the distro (Debian in this case). I want to be able to create root partition and RAID with the FS.

2. ACL and extended attributes.

3. extended inode attributes would be nice ("chattr +i" is handy sometimes).

4. optionally I would like to be able to create large Bestcrypt partitions (e.g. 30GB) with that FS.

5. fast large dir and small files performance (I have millions of small files on my desktop).

6. no need to fsck or fast fsck (i.e. journalling or some other technique or whatever).

7. disk quota!

8. optionally, transparent compression and encryption will be a big plus point.

9. Snapshots would be nice too, for consistent backups.

10. Versioning is also very welcome.

XFS: very close but it still has problems with #4. It also doesn't have undelete like ext2/ext3 (not that it's a requirement though).

JFS: it just lacks many features.

Reiser3: How's the quota support, still have to patch the kernel every time? Plus it doesn't have ACL.

Reiser4: not ready yet.

I might have to look at FreeBSD after all. Background fsck, hmm....

Re:There's no replacement for ext3fs yet for me... (3, Funny)

metamatic (202216) | more than 8 years ago | (#13713074)

Reiser3 works fine on Debian with no kernel patching required.

It seems as if you're holding out for perfection, not willing to upgrade from ext3 to anything else unless you find The Perfect Filesystem. I think that's kinda silly; better to get 90% of what you need now, than to wait another 2-4 years, surely?

Re:There's no replacement for ext3fs yet for me... (3, Informative)

m50d (797211) | more than 8 years ago | (#13713134)

Reiser3: How's the quota support, still have to patch the kernel every time? Plus it doesn't have ACL.

It does have ACL, and quota support is fine at least in gentoo kernels (can't check a vanilla one atm)

Re:There's no replacement for ext3fs yet for me... (0)

Anonymous Coward | more than 8 years ago | (#13713236)

I'd looove to replace ext2/3 as my filesystem, and have wanted to for years, since it's not so fast and most distros don't include binary tree indexing for ext3 (so large dirs are slow). Unfortunately I haven't been able to do so. Here are my requirements [as a porn master]:

1. Distro support. I don't want to have to compile my own kernel. The FS needs to be supported by the distro (Debian in this case). I want to be able to create root partition and RAID with the FS.
Can't take time to upgrade system, jerking off...

2. ACL and extended attributes.
Need fine-grained control over who can see my porn.

3. extended inode attributes would be nice ("chattr +i" is handy sometimes).
Don't want anyone deleting my porn.

4. optionally I would like to be able to create large Bestcrypt partitions (e.g. 30GB) with that FS.
Don't want anyone knowing about the vast amounts of horse porn or tentacle porn I own.

5. fast large dir and small files performance (I have millions of small files on my desktop).
Duh.

6. no need to fsck or fast fsck (i.e. journalling or some other technique or whatever).
Can't wait for system boot, must have porn NOW.

7. disk quota!
To keep everyone else from taking up my valuable disk space with their porn.

8. optionally, transparent compression and encryption will be a big plus point.
I don't want to have to work hard to avoid the FBI.

9. Snapshots would be nice too, for consistent backups.
I've spent years and years building up my stock of dirty pictures. Can't have them disappear on me.

10. Versioning is also very welcome.
All those TGP sites keep using the same filenames, dammit.

Re:There's no replacement for ext3fs yet for me... (1)

fireboy1919 (257783) | more than 8 years ago | (#13713245)

8. optionally, transparent compression and encryption will be a big plus point.

9. Snapshots would be nice too, for consistent backups.

10. Versioning is also very welcome.

I sure hope that none of these things are ever part of the filesystem itself. I want my filesystems 100% portable, and fast. You know why NTFS isn't so much, right? All the extra, nearly useless features that should be handled by the OS, but that are done by the file system instead.

These should be layers on top of the file system that are implemented by something else. 9 is basically a cron job, and 10 can be done more ways than I have fingers (and I have a full decimal complement) without using the file system, and there are a few solutions on freshmeat that will do 8 today.

Hopefully there aren't any file system designers who actually think that putting these into their system is actually a good idea.

As far as no "fsck"...I would never use a filesystem without it. Disks and memory go bad. Anyone who tells you that their filesystem never needs checking is deluding themselves. If anything, I want a filesystem with even more error recovery capability. I'd like the file metadata to have lots of redundancy for the sake of reliability, and some forensic tools that are at least as good as the ones already available for ext2.

Re:There's no replacement for ext3fs yet for me... (1)

TheRaven64 (641858) | more than 8 years ago | (#13713282)

I might have to look at FreeBSD after all. Background fsck, hmm....

It might be worth it:

  1. Not really an issue, since there aren't really FreeBSD distros. UFS2 is supported by NetBSD, however.
  2. Yup. Both available.
  3. Yup. Been around since 4.4BSD (1993). It's called chflags though, not chattr.
  4. No idea.
  5. It's not bad, but I don't have any real performance figures so try it and see.
  6. Softupdates should ensure consistency after a fsck, and the fsck can run in the background. There was a Summer of Code project to add journaling to UFS2/3, but the wiki page was never updated so I have no idea how much progress was made.
  7. Yup, since 4.2BSD.
  8. Yup, since FreeBSD 5.0.
  9. Sadly not. I've not seen a filesystem that did versioning outside VMS, but it would be a nice addition.

NILFS? (0, Redundant)

Andor666 (659649) | more than 8 years ago | (#13712981)

The first 'N' stands for 'Nieces'?

Ah, ah, ok, I'm a redundant 'MILF' one :P ;)

Pity not what I thought it was (1)

SmallFurryCreature (593017) | more than 8 years ago | (#13713036)

RAID obviously protects against complete hard disk failure. This protects against data loss during other hardware failures like power loss, BUT is there anything that protects against data loss due to slight defects on the hard disk?

I am probably not the only one to come back to an old file saved years ago only to find a glitch in it. I noticed it with a couple of movies. Movies I know were perfect as I watched them without copying them. So the only explanation is that part of the disk got corrupted.

The solution is available manually in the form of PAR or other recovery software, but wouldn't it be nice if this was part of the OS or hardware itself? I would happily add another disk to my RAID, this one loaded with data to help check that the data is 100% correct.

Does such a solution exist?
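
Short of filesystem- or RAID-level checksumming, one do-it-yourself answer is to keep a checksum manifest and re-verify it periodically, which at least detects the glitch even if it cannot repair it (PAR-style parity would be needed for repair). A small Python sketch of the idea, with my own helper names and example paths:

    import hashlib, json, os

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def make_manifest(root, manifest_path):
        """Record a checksum for every file under `root`."""
        sums = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                p = os.path.join(dirpath, name)
                sums[p] = sha256(p)
        with open(manifest_path, "w") as f:
            json.dump(sums, f)

    def verify(manifest_path):
        """Return the files whose contents no longer match the recorded checksum."""
        with open(manifest_path) as f:
            sums = json.load(f)
        return [p for p, digest in sums.items()
                if not os.path.exists(p) or sha256(p) != digest]

    # Usage (example paths):
    #   make_manifest("/home/me/movies", "/home/me/movies.manifest")
    #   print(verify("/home/me/movies.manifest"))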

That is what raid can do? (1)

nietsch (112711) | more than 8 years ago | (#13713270)

Raid 5 allows you to keep 1 or more parity checksums of the volume. In principle you could use partitions on the same disk if you cannot afford a multi disk setup.

HDFS (home-dir FS)? (4, Interesting)

Ramses0 (63476) | more than 8 years ago | (#13713069)

I've had an idea kicking around for a while now... "HDFS / Home-Dir File System" ... I want a (s)low-performance, bloated, version controlled, roll-back featured, viewcvs.cgi enabled file system for my /home/rames (or at least /home/rames/documents).

With FUSE [sourceforge.net] it might even be possible for mere mortals like me.

Basically, I very rarely push around more than 100-200kb at a time of "my stuff" unless it's big OGGs or tgz's, etc. Mostly source files, documents, resumes, etc. In that case, I want to be able to go back to any saved revision *at the file-system level*, kind of like "always on cvs / svn / (git?)" for certain directories. Then when I accidentally nuke files or make mistakes or whatever, I can drag a slider in a GUI and "roll back" my filesystem to a certain point in time and bring that saved state into the present.

Performance is not an issue (at first), as I'm OK if my files take 3 seconds to save in vim or OpenOffice instead of 0.5 seconds. Space is not an issue because I don't generally revise Large(tm) files (and it would be pretty straightforward to have a MaxLimit size for any particular file). Maintenance would also be pretty straightforward: crontab "@daily dump revisions > 1 month". Include some special logic for "if a file is changing a lot, only snapshot versions every 5-10 minutes" and you could even handle some of the larger stuff like images without too much work.

Having done quite a bit of reading of KernelTraffic [kernel-traffic.org] (Hi Zack) and recently about GIT [wikipedia.org] , maybe it's time to dust off some python and C and see what happens...

--Robert
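
Until something like that exists at the filesystem level, a userspace approximation is easy to sketch: stash every superseded version of a file in a revisions directory and copy one back to "roll back". This is only a toy under assumed names and paths, nothing like a real FUSE filesystem.

    import os, shutil, time

    def save(path, data, history_dir=".revisions"):
        """Write `data` to `path`, stashing the previous contents as a timestamped revision."""
        if os.path.exists(path):
            os.makedirs(history_dir, exist_ok=True)
            stamp = time.strftime("%Y%m%d-%H%M%S")   # coarse; same-second saves collide
            shutil.copy2(path, os.path.join(
                history_dir, "%s.%s" % (os.path.basename(path), stamp)))
        with open(path, "w") as f:
            f.write(data)

    def revisions(path, history_dir=".revisions"):
        """List stored revisions of `path`, oldest first."""
        prefix = os.path.basename(path) + "."
        if not os.path.isdir(history_dir):
            return []
        return sorted(n for n in os.listdir(history_dir) if n.startswith(prefix))

    def rollback(path, revision_name, history_dir=".revisions"):
        """Copy a chosen revision back over the current file."""
        shutil.copy2(os.path.join(history_dir, revision_name), path)

    # Example: save("resume.txt", "v1"); save("resume.txt", "v2")
    #          rollback("resume.txt", revisions("resume.txt")[-1])   # back to "v1"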

Re:HDFS (home-dir FS)? (1)

cowens (30752) | more than 8 years ago | (#13713232)

What you want is something like the katie [netcraft.com.au] fs. It is a versioned filesystem. You can access the current version by saying vi /home/user/foo, or an older version by saying vi /home/user/foo@@main/5, where main is the branch and 5 is the version number. I don't know if katie is still under active development, though.

V0.1 vs V1.0 (1)

HermanAB (661181) | more than 8 years ago | (#13713086)

I suppose the released version is lossless, compared to their first development version, which was a wee little bit buggy?

Lossless vs. Lossy Filesystems (1)

xilmaril (573709) | more than 8 years ago | (#13713096)

nobody seems to know the difference between lossy and lossless filesystems. neither do I, and neither does whoever wrote the article, it seems.

but hey, that's never slowed me down.

This new filesystem is like old ones, with a big difference and a few small ones.

It has something called 'snapshots', which seems to mean that you can work off of a partition, but separately load up the version of that partition you had before you last had a power failure, or whatever went wrong.

it also claims:
        * Fast write and recovery times
        * Minimal damage to file data and system consistency on hardware failure
        * Correctly ordered data and meta-data writes
        * File and inode blocks are managed by a B-tree structure
                    o Can create and store huge files
        * Internal data are processed in 64 bit wide word size

that no doubt means something, to somebody. lucky them!

and yes, this is actually more informative than the article.

remember kids, never, NEVER RTFA. it will only confuse and scare you.

Linux files systems suck ass.. (0, Troll)

mcdade (89483) | more than 8 years ago | (#13713117)

So what are the choices? ext2/ext3 which are slow, reiserfs which sucks ass when it breaks..

I noticed how they compare the new filesystem to Sun's UFS, which isn't the bomb.. Look into ZFS from Sun, if they ever release it! We saw a demo on this almost a year ago now; it was supposed to be released with Solaris 10, but wasn't ready. We were so hyped about this after we lost a shitload of disk arrays under Veritas due to hardware issues. This shouldn't happen under ZFS, because if you have a mirror, it would know that corrupt data was being written to the mirror and automagically fix it.

seriously .. check out ZFS.

For the guy posting about FreeBSD, with background fsck.. it's nice.. have yet to lose something on a FreeBSD box due to improper shutdowns.. (power failures)

Coming soon: NILFS_Hunter.com (-1, Redundant)

denis-The-menace (471988) | more than 8 years ago | (#13713136)

A gorgeous and horny woman cruises computer stores for quickies.
Everything is recorded from pickup to cleanup.

WWW.NILFS_Hunter.com

Isn't this just like a tape drive on hard disk? (1)

GecKo213 (890491) | more than 8 years ago | (#13713200)

Sounds to me an awful lot like a tape drive. Start at one end and keep writing until you're done. I can see the point of wanting to keep all parts of each single file together in one block so that it's not broken up. That way there is no need to defrag, but I thought ext2 and ext3 did that type of thing already. Correct me if I'm wrong, but I was told that ext2/ext3 would keep a file whole at just about every cost; only with a really, really full drive and absolutely no contiguous room to put it would they break it up. Am I wrong?

I'm also not sure how seek time is going to be affected by this. I'm not an expert on seek times for hard drives or anything, but if the File Allocation Table says it's out at point X and the PC then gives instructions to retrieve that file by saying "the file starts at X address", the seek time is the same no matter how the file is sitting on the drive. Correct me if I'm wrong, but let's look at a quick scenario to make sure we're all on the same page. Then if there's a fault in my thinking, please tell me. Let's say that a file starts at the farthest point away from the reader head on the drive that it can start and still fit the file there. (The very last section on the drive that the file can fit.) In both instances the head would have to move the same distance from its starting point, wouldn't it? I thought that seek time was dependent mostly on the hard drive hardware speed itself, as the time that it took the head to get to the starting point of the file. If the file is broken up into tiny pieces all over the drive it's going to take longer for the file to be read in completely, but the seek time isn't changed to find the head of the file, is it? Maybe I misunderstand "seek time"; someone, if I'm way off, please enlighten me. I'm always good for an education. Thanks.
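
You have it basically right: the first seek is the same either way, but a fragmented file pays an extra seek (plus rotational latency) for every fragment, and those costs dwarf the transfer time. A back-of-envelope calculation in Python, using assumed round numbers purely for illustration:

    # Assumed round numbers, for illustration only.
    seek_ms = 9.0          # average seek time
    rotation_ms = 4.2      # average rotational latency (half a revolution at 7200 rpm)
    transfer_mb_s = 50.0   # sustained sequential transfer rate
    file_mb = 100.0

    def read_time_ms(fragments):
        positioning = fragments * (seek_ms + rotation_ms)    # one seek per fragment
        transfer = (file_mb / transfer_mb_s) * 1000.0        # same total data either way
        return positioning + transfer

    print("contiguous:     %.0f ms" % read_time_ms(1))       # ~2013 ms
    print("1000 fragments: %.0f ms" % read_time_ms(1000))    # ~15200 ms

So the time to find the start of the file is unchanged, but the total read time balloons with fragmentation.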

Nothing has been gained just yet... (2, Interesting)

Jason Hood (721277) | more than 8 years ago | (#13713243)

Here is a sampling of the known bugs

The system might hang under heavy load.