
EXT4, Btrfs, NILFS2 Performance Compared

timothy posted more than 4 years ago | from the where-will-you-put-your-bits-next-year? dept.

Data Storage 102

An anonymous reader writes "Phoronix has published Linux filesystem benchmarks comparing the XFS, EXT3, EXT4, Btrfs, and NILFS2 filesystems. This is the first time the new EXT4, Btrfs, and NILFS2 filesystems have been directly compared on disk performance, and the results may surprise. For the most part, EXT4 came out on top."

102 comments

What, no ReiserFS? (5, Funny)

Anonymous Coward | more than 4 years ago | (#28529729)

you folks are killing me

Re:What, no ReiserFS? (0)

Anonymous Coward | more than 4 years ago | (#28540867)

you folks are killing me

It seems that ReiserFS died as a result of Nina Reiser's death. In a fictional context, this might be considered "poetic justice".

Btrfs (4, Informative)

JohnFluxx (413620) | more than 4 years ago | (#28529793)

The version of Btrfs that they used, 0.18, predates their performance optimizations. They now have 0.19, which is supposedly a lot faster and will be in the next kernel release. There's about five months of development work between them:

# v0.19 Released (June 2009) For 2.6.31-rc
# v0.18 Released (Jan 2009) For 2.6.29-rc2

Re:Btrfs (3, Insightful)

Anonymous Coward | more than 4 years ago | (#28529975)

a filesystem whose version begins with a zero doesn't get to be in the same room as my data, much less in charge of maintaining it

Re:Btrfs (2, Insightful)

Anonymous Coward | more than 4 years ago | (#28530303)

Would it make you feel any better if the exact same code was labeled like this instead?
# v1.9 Released (June 2009) For 2.6.31-rc
# v1.8 Released (Jan 2009) For 2.6.29-rc2

Re:Btrfs (1)

BrokenHalo (565198) | more than 4 years ago | (#28532351)

Would it make you feel any better if the exact same code was labeled like this instead?

Not much. Actually, I don't care much about version numbers, since there are lots of well-established products out there with version numbers below 1.0.
What matters, though, is code maturity. For any general application, we can afford to put up with a few bugs here and there. A filesystem, however, needs to be proven safe, since errors may be found only after your last good copy of a file has disappeared out of the bottom of your backup cycle. This is an area where it pays to be conservative.

Re:Btrfs (0)

Anonymous Coward | more than 4 years ago | (#28533541)

You know, computers in general flip bits once in a while and can never be proven safe. So by your logic, to be safe, you'd better not put your data on a computer.

The real secret is to accept that computers make mistakes, drop bits, etc., and have your system as a whole work anyway.

BTW, I have seen pretty much every component of a computer system corrupt some data. That includes drives, CPUs, data busses, memory, etc.
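The "have your system as a whole work anyway" idea is essentially end-to-end integrity checking. Here is a toy sketch (my own illustration, nothing the poster wrote; all names hypothetical) of detecting corruption with checksums rather than trusting any one layer of the stack:

```python
import hashlib
import os
import tempfile

def write_with_checksum(path, data):
    """Store data alongside a SHA-256 digest so that corruption anywhere
    in the stack (disk, bus, RAM) can at least be *detected* on read."""
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def read_and_verify(path):
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError("checksum mismatch: data corrupted somewhere in the stack")
    return data

# A flipped bit is caught on read instead of being silently returned.
p = os.path.join(tempfile.mkdtemp(), "blob")
write_with_checksum(p, b"important payload")
assert read_and_verify(p) == b"important payload"
```

This is the same design choice that filesystems like ZFS and Btrfs make internally with per-block checksums: detect first, then repair from redundancy if you have it.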

Re:Btrfs (0)

Anonymous Coward | more than 4 years ago | (#28540075)

http://en.wikipedia.org/wiki/Mainframe_computer#Characteristics [wikipedia.org] -- read the part about lock-stepping.

Don't forget automated backup systems using tape robots in vacuum, RAID, ECC, hardened semiconductors, Faraday cages, UPS, flash storage, physically distant redundant datacenters, self-healing, fault tolerance, power isolation, fiberoptics, convection cooling, hard vacuum, SECDED, differential voltage I/O, microkernels, gated ring memory protection, code correctness proving, quantization, Gray code, Convolutional code, physical parameter health monitoring, embedded accelerometers, complete module hot-swapping, online reconfiguration, LPAR, Stratus VOS, rotary transformer, bypass capacitors, Zener diodes, SNR, parasitic-minimization, debouncing, optical mice, sealed keyboards, LCD screens, discrete logic, MOSFETs, hex buffers, potting, ASICs, medium-scale integration, alloys, polypropylene, clean rooms, burn-in testing, Hi-POT, sandbeds, shock absorbers, electron microscopy, lead shielding, non-RoHS whisker-free solder, PLL clock distribution, non-bipolar semiconductors, terahertz imaging, and the myriad other technology used to carefully construct fault-tolerant, disaster-tolerant, mechanically and physically sound, extremely reliable computing machinery.

(Yes, many of the things I listed have much more to do with the manufacturing processes, the development of critical technologies now used, analysis of powered machinery, and noninvasive testing than things actually inside, connected to, or controlling the machine. I don't feel like explaining how each listed item is directly or indirectly involved at the moment. That being said, computers can be made far more reliable than many realize... if you have vast amounts of cash, which obviously entities like global corps, military and government, stock markets, and some lucky others do...)

Re:Btrfs (2, Insightful)

hardburn (141468) | more than 4 years ago | (#28530375)

A file system whose version begins with zero means the authors don't feel like putting a one there. Nothing more.

That said, btrfs is still under heavy development, and the on-disk format hasn't been finalized. Avoid it for anything important, but not because of arbitrary version numbers.

Re:Btrfs (1, Informative)

Anonymous Coward | more than 4 years ago | (#28530505)

bzzt

Most schemes use a zero in the first sequence to designate alpha or beta status for releases that are not stable enough for general or practical deployment and are intended for testing or internal use only. Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a public "1.0" release

bzzzt yourself (0)

Anonymous Coward | more than 4 years ago | (#28531881)

Here's a trivial example from dmesg:

[ 0.682950] Linux agpgart interface v0.103

Re:Btrfs (2, Informative)

hardburn (141468) | more than 4 years ago | (#28532053)

Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a public "1.0" release

That's just your personal conception, conditioned by many years of commercial software development. Putting the '1.0' in is a totally arbitrary decision. Lots of Open Source projects are in perfectly stable, usable condition when in 0.x status. The Linux kernel itself was pretty stable at 0.9, with the only major change between it and 1.0 being stabilizing the TCP/IP stack (IIRC).

Some projects don't even use that nomenclature; Gentoo just uses the date of release. On the opposite side of the fence, lots of commercial offerings are crud until they reach at least 3.0. Windows, for instance, was a sick joke in 1.0 and 2.0

Re:Btrfs (1)

AlXtreme (223728) | more than 4 years ago | (#28532565)

Some projects don't even use that nomenclature; Gentoo just uses the date of release

Maybe that's because version numbers really don't mean much when it comes to distributions. Fedora 10, Ubuntu 9.04, or Debian 3.0 are merely ways to distinguish different versions of a distribution. Because distros are so complicated and contain so much software (even the small ones), you can't be sure that 3.0 will even have the same stuff as 2.0, while with a single application you can be fairly sure you'll get a decent improvement in features and reliability (if not performance).

Of course, you're correct that in the open source world the difference between 0.9 and 1.0 doesn't have to be world-changing.

In the end, it's either a matter of how the developer sees his project or how many releases marketing wants to push out of the door this year.

Re:Btrfs (3, Insightful)

hedwards (940851) | more than 4 years ago | (#28533205)

What exactly warrants an increment from 0.9.9 to 1.0.0 is going to vary somewhat, but in general there are supposed to be a few things in common amongst releases.

At the 1.0 release it's supposed to be feature-complete, free of show-stopper bugs, and reliable enough for regular use. Yes, there is some degree of legitimate disagreement as to exactly what that means, but not that much. It's a convention people have largely agreed to because there needs to be some way of informing the user that something isn't quite ready for prime time. Adding features later on isn't an issue, but it does need to have all the features necessary to function properly.

Then there's ZFS on FreeBSD, which is experimental and will stay experimental until there are enough people working on it for the devs to feel comfortable that problems will be fixed in a reasonable time.

Re:Btrfs (1)

xtracto (837672) | more than 4 years ago | (#28541103)

Lots of Open Source projects are in perfectly stable, usable condition when in 0.x status.

Not only that, lots of Open Source projects are in unusable, unstable condition even at 4.x!

Windows, for instance, was a sick joke in 1.0 and 2.0

IMHO, Windows had to go up to 0x58, 0x50 to stop being a sick joke.

Re:Btrfs (0)

Anonymous Coward | more than 4 years ago | (#28537327)

That's why more and more companies start their products at version 3 or 5 or even 8 ;)

Re:Btrfs (1)

Amazing Quantum Man (458715) | more than 4 years ago | (#28530563)

Btrfs tends to perform best at Bennigan's.

Re:Btrfs (0)

Anonymous Coward | more than 4 years ago | (#28533413)

Please explain your joke. I normally ignore poor attempts at humor, but this has the potential to be either very funny or "shoot yourself in the face" bad.

Re:Btrfs (1)

Amazing Quantum Man (458715) | more than 4 years ago | (#28533651)

South Park. Butters. His favorite restaurant in the world is Bennigan's.

Re:Btrfs (0)

Anonymous Coward | more than 4 years ago | (#28535021)

E for effort, but shoot yourself in the face for setup, quality, and delivery.

Re:Btrfs (0)

Anonymous Coward | more than 4 years ago | (#28540335)

And the funny part was...?

Wait a second, what's up with the SQLite test? (1)

goombah99 (560566) | more than 4 years ago | (#28533175)

Talk about optimization, or lack of it. Take a look at the SQLite test: EXT3 is something like 80 times faster than EXT4 or Btrfs.

What the heck is going on? PostgreSQL does not seem to show this performance difference.

Really, this is an insanely different score, to the effect that if it's real, no one in their right mind would run SQL on anything but EXT3.

Something must be wrong with this test.

Re:Wait a second, what's up with the SQLite test? (1)

goombah99 (560566) | more than 4 years ago | (#28533231)

Same sort of weirdness shows up in the Mac OS X 10.5.5 versus Ubuntu tests [phoronix.com]: all the tests fluctuate a small amount except for the SQLite test, in which the Mac creams Ubuntu.

Why does SQLite show such extreme behaviour across file systems?

Re:Wait a second, what's up with the SQLite test? (2, Interesting)

liquidpele (663430) | more than 4 years ago | (#28533309)

If I had to guess, I would say it was the way the FS driver was caching pages, and it happened to be very good at guessing what SQLite would need next. Then again, the way they were storing and retrieving data with SQLite may have had a large impact on the results in that case.
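Another plausible factor (my speculation, not from TFA): SQLite, by default, commits every statement in its own transaction and syncs the journal on each commit, so this benchmark largely measures each filesystem's fsync cost rather than its raw throughput. A hedged sketch using Python's sqlite3 module, with made-up file names, shows how commit frequency dominates insert timing:

```python
import os
import sqlite3
import tempfile
import time

def insert_rows(db_path, n, one_tx):
    """Insert n rows either as one big transaction (one commit/sync)
    or in autocommit mode (one commit, and hence one sync, per row)."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
    start = time.time()
    if one_tx:
        with conn:  # context manager commits once at the end
            for i in range(n):
                conn.execute("INSERT INTO t VALUES (?)", (i,))
    else:
        conn.isolation_level = None  # autocommit: every INSERT is a commit
        for i in range(n):
            conn.execute("INSERT INTO t VALUES (?)", (i,))
    conn.close()
    return time.time() - start

tmp = tempfile.mkdtemp()
slow = insert_rows(os.path.join(tmp, "percommit.db"), 200, one_tx=False)
fast = insert_rows(os.path.join(tmp, "batched.db"), 200, one_tx=True)
# On a disk-backed filesystem, the per-commit run is typically far slower,
# and *how much* slower depends on how the filesystem implements syncs.
```

If the benchmarked workload was commit-heavy, a filesystem with cheap (or cheated) syncs would "win" by a huge margin, which would explain the 80x gap better than raw I/O speed does.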

Re:Btrfs (1)

fatp (1171151) | more than 4 years ago | (#28536955)

Then 0.19 is not actually released (no one uses rc kernels, right?). We can only say it was not born at the right time.

BTW, since Btrfs came from Oracle, and it performs so poorly with SQLite and PostgreSQL, I would be interested in its performance with Oracle's own databases: Oracle RDBMS, Berkeley DB, MySQL. It would be interesting to see it run well with Oracle RDBMS, but funny if it takes months to create the database (until 0.20 is out?)

Another lame filesystem review (0, Troll)

brunes69 (86786) | more than 4 years ago | (#28529797)

NILFS2 and Btrfs are both TRIM file systems optimized for SSD media. Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive. Any more than comparing NILFS2 or Btrfs to ext3 on an SSD would be.

It's like comparing the performance of motor oil and sewing machine oil to lubricate an engine or a sewing machine. They're not the same thing just because they are both "oil".

Re:Another lame filesystem review (4, Insightful)

Nakarti (572310) | more than 4 years ago | (#28529853)

Saying a SATA drive is not an SSD is borderline stupidity, but who's to say that it really matters?
Comparing filesystems under the same conditions is comparing filesystems.
Comparing filesystems under different conditions is NOT comparing filesystems.

Re:Another lame filesystem review (1)

jpkotta (1495893) | more than 4 years ago | (#28538345)

Am I missing something? SATA is an interconnect. It doesn't care what is storing the data at the other end.

Another lame filesystem comment (5, Informative)

greg1104 (461138) | more than 4 years ago | (#28529969)

Btrfs includes support for TRIM on SSD, but that's a secondary addition. The main purpose of Btrfs is to compete against Sun's ZFS in the area of robust fault tolerance. If you look at the original announcement [lkml.org], you can see SSD support wasn't on the radar at all; that's strictly been an afterthought in the design. Btrfs is absolutely designed to work on SATA drives and to compete head to head against ext3/ext4.

Re:Another lame filesystem comment (1)

jabuzz (182671) | more than 4 years ago | (#28536265)

So, competing against ZFS and ext3/ext4. Those are fairly low goals if you ask me.

Yeah but... (0)

Anonymous Coward | more than 4 years ago | (#28530005)

...Does it run Linux?

Re:Another lame filesystem review (2, Informative)

Freetardo Jones (1574733) | more than 4 years ago | (#28530037)

NILFS2 and Btrfs are both TRIM file systems optimized for SSD media. Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive. Any more than comparing NILFS2 or Btrfs to ext3 on an SSD would be.

This statement doesn't make any sense since SSDs can use both the original SATA and SATA II interfaces.

Re:Another lame filesystem review (2, Insightful)

geniusj (140174) | more than 4 years ago | (#28537249)

Though I never understood why one would choose to use an SSD on a SATA interface. Using a medium that supports parallel access over a serial interface doesn't seem all that logical to me.

Re:Another lame filesystem review (1)

hardburn (141468) | more than 4 years ago | (#28543153)

"Parallel", in this case, doesn't usually mean parallel commands. It means it uses several wires to send a single command.

Implementing the electronics is easier on a serial connection. It's easier to jam the clock speed up than to add all the extra pins required on the ICs to support a parallel connection.

Mind you, SSD speeds are going to rise faster than the designed-by-committee SATA standard can keep up. It won't be long before SSDs will have to be on the northbridge.

Re:Another lame filesystem review (1)

Directrix1 (157787) | more than 4 years ago | (#28530359)

NILFS2 is made for SSDs, but Btrfs isn't. NILFS2, because of how it stores files, should have a good read-performance advantage due to there being no penalty for random access on an SSD, and if I'm not mistaken its write speed should be fast on just about anything.

Re:Another lame filesystem review (1)

Zygfryd (856098) | more than 4 years ago | (#28530831)

and if I'm not mistaken its write speed should be fast on just about anything.

Until you fill up the drive and the garbage collector needs to kick in. From what I know, their garbage collector is currently very basic and unoptimized. It's probably going to take a while before we get the perfect filesystem for the old, cheap SSDs.

Re:Another lame filesystem review (1)

Directrix1 (157787) | more than 4 years ago | (#28536587)

I believe the garbage collector runs the whole time in the background. So as long as you don't fill up the filesystem you will probably be alright.

Re:Another lame filesystem review (1)

hardburn (141468) | more than 4 years ago | (#28530487)

The SSD benchmark is coming.

But never mind that, because TFA has some problems interpreting the data. If all the numbers are coming out the same, that indicates the bottleneck is somewhere other than IO. For instance, when requesting a small static file over Apache, the file is probably being fetched right out of the cache. This test might catch a few badly implemented filesystems or hard drive electronics, but the ones in the article might as well be thrown out.

Re:Another lame filesystem review (1)

Kymermosst (33885) | more than 4 years ago | (#28535499)

For instance, when requesting a small static file over Apache, the file is probably being fetched right out of the cache. This test might catch a few badly implemented filesystems or hard drive electronics, but the ones in the article might as well be thrown out.

I stopped reading once I saw the Apache "benchmark." I guarantee it never touched the disk after the first time the small static file was read.
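For anyone wanting to check this caching claim themselves, here is a hedged sketch (my own illustration; Linux-specific, needs root for the cache-drop step, all names hypothetical) of how a read benchmark could force a cold cache between runs so it actually touches the disk:

```python
import os
import tempfile
import time

def drop_page_cache():
    """Ask the kernel to drop clean page-cache entries (Linux, needs root).
    Without this step, repeated reads of a small file are served from RAM,
    and a "disk benchmark" is really measuring the cache."""
    try:
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")
        return True
    except OSError:
        return False  # not root / not Linux: reads below stay cache-warm

def timed_read(path):
    start = time.time()
    with open(path, "rb") as f:
        data = f.read()
    return time.time() - start, len(data)

path = os.path.join(tempfile.mkdtemp(), "testfile.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB test file

cold_ok = drop_page_cache()       # only a true cold read if this succeeded
cold, size = timed_read(path)
warm, _ = timed_read(path)        # second read is served from the page cache
```

Run as root on Linux, the cold read is typically orders of magnitude slower than the warm one, which is exactly why a benchmark that never drops the cache says little about the filesystem.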

Re:Another lame filesystem review (1, Interesting)

Anonymous Coward | more than 4 years ago | (#28530583)

Others have pulled you up on the SATA/SSD remark so I won't cover that again, but you are also a little confused about those filesystems being optimised for SSD.

If you read the NILFS page (http://www.nilfs.org/en/about_nilfs.html), it says nothing about SSD. It has features you might want on any storage, any benefits to SSD media is just a side effect.

Re:Another lame filesystem review (1)

geniusj (140174) | more than 4 years ago | (#28537295)

Where I always saw the benefit of log-structured filesystems was in environments with lots of random writes and few reads (as the random writes become sequential writes). If you use one on a good SSD, however, I could probably safely remove the "few reads" qualifier. Either way, I'm glad that Linux has one now.
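As a rough illustration of why random writes become sequential on a log-structured design, here is a toy sketch (not NILFS2 code; my own hypothetical names) of an append-only block store with an in-memory index:

```python
import os
import tempfile

class TinyLog:
    """Toy log-structured block store: every update, random or not, becomes
    a sequential append at the tail; an in-memory index maps block number to
    the offset of its latest copy. (A real log FS like NILFS2 must also
    garbage-collect the stale copies left behind.)"""
    BLOCK = 64

    def __init__(self, path):
        self.f = open(path, "w+b")
        self.index = {}  # block number -> byte offset of latest version

    def write_block(self, blockno, data):
        assert len(data) == self.BLOCK
        self.f.seek(0, os.SEEK_END)      # all writes go to the tail
        self.index[blockno] = self.f.tell()
        self.f.write(data)

    def read_block(self, blockno):
        self.f.seek(self.index[blockno])
        return self.f.read(self.BLOCK)

path = os.path.join(tempfile.mkdtemp(), "log.bin")
log = TinyLog(path)
log.write_block(7, b"A" * 64)
log.write_block(3, b"B" * 64)
log.write_block(7, b"C" * 64)    # "overwrite" is just another append
assert log.read_block(7) == b"C" * 64
assert log.read_block(3) == b"B" * 64
```

Note that the overwritten copy of block 7 stays on disk as garbage until a cleaner reclaims it, which is why the garbage collector discussed above matters so much once the device fills up.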

Re:Another lame filesystem review (0)

Anonymous Coward | more than 4 years ago | (#28533657)

So just because something was made for SSD media, it is of no interest how it behaves in other cases? We don't even want to know?

What if it actually behaves quite well? Wouldn't that be an interesting result?

If everyone followed your advice, we would have missed some great opportunities. Like the Internet.

JFS? (1)

chrylis (262281) | more than 4 years ago | (#28529809)

Kinda disappointed the article didn't discuss JFS. After running into the fragility of XFS, I tried it out, and it's highly robust, fast, and easy on the CPU.

Re:JFS? (0)

Anonymous Coward | more than 4 years ago | (#28531917)

Fragility of XFS? Compared to JFS? LOL. I have been running XFS on mission critical systems for years and not lost any data. I can't say that about any other filesystem. JFS is relatively stable in the kernel these days, but it used to cause kernel faults all the time. Problem is, there is no longer anyone maintaining JFS.

These benchmarks are very poorly done. The results are all over the place, sometimes with a 100- or 1000-fold difference in speed between filesystems that perform remarkably similarly in other tests. That indicates a problem with their testing methods.

Re:JFS? (1)

ckaminski (82854) | more than 4 years ago | (#28532185)

I can say the same about ReiserFS 3, personally. XFS, JFS, and Reiser have all been good to me over the years, but so has ext3, for that matter.

Re:JFS? (1)

jabuzz (182671) | more than 4 years ago | (#28536411)

I have personally come to the conclusion that ext3 is a pile of dino droppings. Basically, quota support in ext3 is just not robust enough. Give me XFS any day of the week. In fact, the ability to support "project" quotas and hence directory quotas is much, much better than ext3, ReiserFS, JFS, or even ZFS (which does not do quotas at all).

Re:JFS? (1)

Nuno Sa (1095047) | more than 4 years ago | (#28537449)

Never used ReiserFS in production, but XFS and ext3 are very good. XFS in (my) "real world" workloads is the best by far (with the exception of mass deletes, which are slow). I don't understand why XFS scores so badly in these benchmarks.

Anyway, one should always test before deployment, if the fs is important, and benchmark if speed is important.

Re:JFS? (1)

WuphonsReach (684551) | more than 4 years ago | (#28546785)

While I use ext3 for everything, mostly because it "just works" in 99% of cases, it has a few big issues. Mainly the slowness when deleting very large (>1GB) files or directories with lots of files.

So it tends to be a bit slow for Maildir storage or multimedia storage.

Re:JFS? (0)

Anonymous Coward | more than 4 years ago | (#28533095)

I have been running XFS on mission critical systems for years and not lost any data.

For the rest of us not running mission-critical systems with battery backup and no faulty graphics drivers... XFS was a source of headaches, truncating open files on every single crash.
It wasn't even a bug, but a design decision. Maybe they've fixed it by now, but they definitely lost my trust.

Re:JFS? (0)

Anonymous Coward | more than 4 years ago | (#28533733)

I have been running XFS on mission critical systems for years and not lost any data.

Even if anecdotes were evidence -- which they aren't -- this tells us nothing about how robust XFS is. What if the reason you haven't lost data is simply that the hardware has operated perfectly? You need to tell us how many times XFS has saved data.

Re:JFS? (1)

Nuno Sa (1095047) | more than 4 years ago | (#28537541)

Sorry to barge in, but generally if your hardware fails you have way bigger problems than the fs drivers. (I mean, if the software tries to write ABC but a CPU/cache/RAM/chipset/whatever error results in the hard drive receiving ABB, that is only detectable by scrubbing the data after the fact.)

Assuming *good* hardware and occasional crashes related to the software not doing the right thing, then yes, you should expect the fs to save most, if not all, of your data. XFS should do this.

Why is JFS the red-headed stepchild? (4, Insightful)

JSBiff (87824) | more than 4 years ago | (#28532191)

Ok, I've been wondering this for a long time. IBM contributed JFS to Linux years ago, but no one ever seems to give a thought to using it. I used it on my computer for a while, and I can't say that I had any complaints (of course, one person's experience doesn't necessarily mean anything). When I looked into the technical features, it seemed to support lots of great things like journaling, Unicode filenames, large files, and large volumes (although, granted, some of the newer filesystems *are* supporting larger files/volumes).

Don't get me wrong - some of the newer filesystems (ZFS, Btrfs, NILFS2) do have interesting features that aren't in JFS, and which are great reasons to use the newer systems, but still, it always seems like JFS is left out in the cold. Are there technical reasons people have found it lacking or something? Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?

Re:Why is JFS the red-headed stepchild? (1)

piojo (995934) | more than 4 years ago | (#28533061)

JFS has treated me very well for the last 2 years or so. It's fast when dealing with small files, unlike XFS. I've never noticed corrupted files after a hard boot, so I prefer it to EXT3. JFS also feels faster... of course, my perception isn't a benchmark.

I would love to see the next generation of filesystems catch on, though. I would really like my data to be automatically checksummed on my file server.

Re:Why is JFS the red-headed stepchild? (1)

jabuzz (182671) | more than 4 years ago | (#28536379)

Because as far as IBM is concerned, JFS is not very interesting. I would point out that the DMAPI implementation on JFS has bit-rotted, and IBM doesn't even support HSM on it on Linux. For that you need to buy GPFS, which makes ZFS look completely ordinary.

Re:Why is JFS the red-headed stepchild? (4, Interesting)

david.given (6740) | more than 4 years ago | (#28537725)

Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?

I think because it's just not sexy.

But, as you say, if you look into it, it supports all the buzzwords. I use it for everything, and IME it's an excellent, lightweight, unobtrusive filesystem that gets the job done while staying out of my way (which is exactly what I want from a filesystem). It would be nice if it supported things like filesystem shrinking, which is very useful when rearranging partitions, and some of the new features like multiple roots in a single volume are really useful and I'd like JFS to support them, but I can live without them.

JFS also has one really compelling feature for me: it's cheap. CPU-wise, that is. Every benchmark I've seen shows that it's only a little slower than filesystems like XFS, but it also uses way less CPU. (Plus it's much less code. Have you seen the size of XFS?) Given that I tend to use low-end machines, frequently embedded, this is good news for me. It's also good if you have lots of RAM --- an expensive filesystem is very noticeable if all your data is in cache and you're no longer I/O bound.

I hope it sees more love in the future. I'd be gutted if it bit-rotted and got removed from the kernel.

Re:JFS? (1)

Wolfrider (856) | more than 4 years ago | (#28547763)

Word - I use JFS for all my major filesystems, even USB/Firewire drives. Works very well with VMware, and has a very fast FSCK as well.

Lots of formats? (-1, Troll)

jabjoe (1042100) | more than 4 years ago | (#28529815)

Surely an OS only needs a handful of formats, all closed and patent encumbered so no one else can read them? Oh wait, that sucks.....

Comparing Apples and Oranges (2, Insightful)

mpapet (761907) | more than 4 years ago | (#28529855)

All of the file systems are designed for specific tasks/circumstances. I'm too lazy to dig up what's special about each, but they are most useful in specific niches. Not that you _can't_ generalize, but calling ext4 the best of the bunch misses the whole point of the other file systems.

Re:Comparing Apples and Oranges (1)

jellomizer (103300) | more than 4 years ago | (#28530073)

Shh. We want our choice of default install to be the winner so we look smarter than people who actually chose something else.

Re:Comparing Apples and Oranges (1)

buchner.johannes (1139593) | more than 4 years ago | (#28533371)

Could you elaborate what the niches are for each?

Would it be technically possible to compare benchmarks with the Windows implementations of NTFS and FAT, despite the different underlying kernel?

Do these benchmarks make any sense? (3, Insightful)

Ed Avis (5917) | more than 4 years ago | (#28529923)

The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. They then say which filesystems are fastest, but 'these margins were small'. Well, not really surprising. Are the results statistically significant or was it just luck? (They mention running the tests several times, but don't give variance etc.)

All benchmarks are flawed, but I think these really could be improved. Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)
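The missing variance could be reported with something as simple as this sketch (the run times below are made-up numbers purely for illustration, not Phoronix's data):

```python
import statistics

def summarize(runs):
    """Mean and sample standard deviation for repeated benchmark runs;
    without the spread, a small difference in means tells you nothing."""
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs) if len(runs) > 1 else 0.0
    return mean, stdev

# Hypothetical bzip2 wall-clock times in seconds on two filesystems:
ext4_runs  = [41.2, 41.9, 40.8, 41.5, 41.1]
btrfs_runs = [42.0, 41.3, 42.4, 40.9, 41.7]
m1, s1 = summarize(ext4_runs)
m2, s2 = summarize(btrfs_runs)
# If the gap between means sits well inside one standard deviation,
# the "winner" is indistinguishable from run-to-run noise.
noise = abs(m1 - m2) < max(s1, s2)
```

With these hypothetical numbers the two filesystems differ by less than one standard deviation, so declaring either "fastest" would be exactly the kind of overclaim the parent describes.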

Re:Do these benchmarks make any sense? (1)

Lemming Mark (849014) | more than 4 years ago | (#28530345)

unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)

Some filesystems have higher CPU usage - aside from issues of data structure complexity, btrfs does a load of extra checksumming, for instance.

But your point stands that CPU-bound benchmarks are probably not the best way of measuring a filesystem. It would be interesting to measure CPU usage whilst running a filesystem-intensive workload, or even to measure this indirectly through the slowdown of bzip2 compression whilst running a filesystem-intensive workload in the background.

Re:Do these benchmarks make any sense? (4, Insightful)

js_sebastian (946118) | more than 4 years ago | (#28530393)

The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. (...) Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else.

That's one type of benchmark. But you also want a benchmark that shows the performance of CPU-intensive applications while the file system is under heavy use. Why? Because the filesystem code itself uses CPU, and you want to make sure it doesn't use too much of it.

Re:Do these benchmarks make any sense? (1)

Ed Avis (5917) | more than 4 years ago | (#28535169)

But you also want a benchmark that shows the performance of CPU-intensive applications while the file system is under heavy use.

You do want that, but I'm pretty sure that bzip2 isn't it. Compressing a file is actually pretty light work for the filesystem. You need to read some blocks sequentially, then write some blocks sequentially. Compressing lots of small files is better, but the access is still likely to be pretty one-at-a-time. More challenging would be a task that needs to read and write lots of files of varying size, across complex directory structures, with fairly random access patterns. A parallel kernel compile would be nearer the mark, though even that may not be demanding enough to make a really good benchmark.

Re:Do these benchmarks make any sense? (1)

compro01 (777531) | more than 4 years ago | (#28530539)

A processor-intensive test will show which filesystem has the most overhead WRT the processor. And as the test shows, they're all pretty much the same in that regard.

Re:Do these benchmarks make any sense? (1)

Ed Avis (5917) | more than 4 years ago | (#28535187)

A processor-intensive test will show which filesystem has the most overhead WRT the processor.

Only if it's a filesystem-processor-intensive test, that is, you are making the filesystem work hard and (depending on how efficient it is) chew lots of CPU. Giving the filesystem easy work, while running something CPU-intensive like bzip2 separately, is a good benchmark for bzip2 but it doesn't tell you much about the fs.

Re:Do these benchmarks make any sense? (1)

_32nHz (1572893) | more than 4 years ago | (#28531015)

You need benchmarks to reflect your real-world use. If you always ran your benchmarks on idling systems then filesystems with on-the-fly compression would usually win. However they are not popular, because this isn't a good trade-off for most people. Parallel BZIP2 compression sounds like a good choice as it should stress memory and CPU, whilst giving a common IO pattern and a fairly low inherent performance variance. Obviously you are looking for a fairly small variance in performance, and there are a lot of other factors that must be accounted for before the results have any significance. Not publishing their data pretty much guarantees they don't know what they are doing.

Re:Do these benchmarks make any sense? (1)

ckaminski (82854) | more than 4 years ago | (#28532223)

"All benchmarks are flawed"

I'd argue that this is true only if they don't disclose their biases and limitations of testing methodology.

Re:Do these benchmarks make any sense? (1)

MrKaos (858439) | more than 4 years ago | (#28538177)

They then say which filesystems are fastest, but 'these margins were small'.

They also said "All mount options and file-system settings were left at their defaults", and I struggled to see the point of doing performance tests to find the fastest filesystem if you are not even going to attempt to get the best performance out of each one.

Why not do a test that just uses dd to do a straight read from a source hard drive to file(s) on the target filesystem, to eliminate *any* variation in the source data? Read, write and delete times are the most important things to know, along with copying a large file within the same filesystem. What about how successive small file writes perform while a large write is under way? What about how the filesystem performs when it is 25%, 50% and 95% full? Why not just use the exact same shell script with different target filesystems? And for everything else Reiser did, what about comparisons to ReiserFS? It's still a pretty good filesystem.

When I put my Studio systems together I spent time doing exactly the tests I outlined above to determine which filesystem would do the job. I actually thought this article might be better than the tests I did, but as you rightly mentioned, most of the tests are too CPU-bound and complicated to be of any use.
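A rough sketch of the read/write/delete timing test described above, in Python rather than a shell script (the function name is mine; note the read pass will largely be served from the page cache unless the file is bigger than RAM or the cache is dropped first):

```python
import os
import tempfile
import time

def time_write_read_delete(path, total_mb=64, chunk_kb=1024):
    """Time a sequential write (flushed to disk), a sequential read,
    and the delete of a single large file on the target filesystem."""
    chunk = b"\0" * (chunk_kb * 1024)
    n_chunks = (total_mb * 1024) // chunk_kb

    t0 = time.perf_counter()
    with open(path, "wb") as fh:
        for _ in range(n_chunks):
            fh.write(chunk)
        fh.flush()
        os.fsync(fh.fileno())      # make sure the data actually hits the disk
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as fh:
        while fh.read(chunk_kb * 1024):
            pass
    read_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    os.remove(path)
    delete_s = time.perf_counter() - t0
    return write_s, read_s, delete_s

if __name__ == "__main__":
    p = os.path.join(tempfile.mkdtemp(), "big")
    w, r, d = time_write_read_delete(p)
    print("write %.2fs  read %.2fs  delete %.4fs" % (w, r, d))
```

The same script pointed at mount points at 25%, 50% and 95% full would give the fill-level comparison suggested above.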

buttfs? (-1, Troll)

Anonymous Coward | more than 4 years ago | (#28530067)

im more of a pussyfs person thank you.

Re:buttfs? (1)

koutbo6 (1134545) | more than 4 years ago | (#28530223)

I think AC makes a good point if you think about it.
BUTTerfs guys ... couldn't you have thought of a better name?
Your fans will be at a huge disadvantage in flamewars

I'll be interested when .. (0, Offtopic)

ccr (168366) | more than 4 years ago | (#28530079)

When these filesystems have actually matured enough to NOT have at least a dozen bugfix changesets in each revision of the kernel changelog. Even ext3 has received a few rather interesting corner-case fixes this year, so maybe ext4 will be reliable in 5 years or so.

Re:I'll be interested when .. (0)

Anonymous Coward | more than 4 years ago | (#28532433)

haha

my thought exactly. what kind of maniac would use a 'beta' filesystem in a production setting?

These all had slow benchmarks (-1, Troll)

Anonymous Coward | more than 4 years ago | (#28530101)

I need a filesystem with killer performance. Any suggestions?

ext4 on top (-1, Troll)

Anonymous Coward | more than 4 years ago | (#28530237)

If the system crashes, any of ext4's files that were modified recently will be truncated to 0 bytes, but I guess that's okay because it's fast!
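The usual defence against that truncation is the write-to-temp, fsync, rename pattern, which delayed-allocation filesystems like ext4 expect applications to use. A sketch in Python (the function name is mine):

```python
import os

def atomic_write(path, data):
    """Replace path's contents so that after a crash you see either the
    old contents or the new contents -- never a zero-length file."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as fh:
        fh.write(data)
        fh.flush()
        os.fsync(fh.fileno())      # force the new data to disk first...
    os.rename(tmp, path)           # ...then atomically replace the target
    # fsync the directory so the rename itself survives a crash
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Applications that instead truncate and rewrite files in place are the ones that end up with zero-byte files when the delayed allocation window is interrupted by a crash.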

so this means (0)

Anonymous Coward | more than 4 years ago | (#28530591)

ext4 DOES or DOES NOT outperform the reiserfs?


what?

Protection against corruption? (0)

Anonymous Coward | more than 4 years ago | (#28530661)

What are the default mount options?

Are the ubuntu default options sane*? I remember Linus ranting about stupid defaults for ext4, but couldn't find it anymore.

*sane being defined as: power outage doesn't leave you with a corrupt fs

NILFS2 is pretty interesting (4, Interesting)

Lemming Mark (849014) | more than 4 years ago | (#28530731)

NILFS2 (http://www.nilfs.org/en/) is actually a pretty interesting filesystem. It's a log-structured filesystem, meaning that it treats your disk as a big circular logging device.

Log structured filesystems were originally developed by the research community (e.g. see the paper on Sprite LFS here, which is the first example that I'm aware of: http://www.citeulike.org/user/Wombat/article/208320 [citeulike.org]) to improve disk performance. The original assumption behind Sprite LFS was that you'll have lots of memory, so you'll be able to mostly service data reads from your cache rather than needing to go to disk; however, writes to files are still awkward as you typically need to seek around to the right locations on the disk. Sprite LFS took the approach of buffering writes in memory for a time and then squirting a big batch of them onto the disk sequentially at once, in the form of a "log" - doing a big sequential write of all the changes onto the same part of the disk maximised the available write bandwidth. This approach implies that data was not being altered in place, so it was also necessary to write - also into the log - new copies of the inodes whose contents were altered. The new inode would point to the original blocks for unmodified areas of the file and include pointers to the new blocks for any parts of the file that got altered. You can find out the most recent state of a file by finding the inode for that file that has most recently been written to the log.

This design has a load of nice properties, such as:
* You get good write bandwidth, even when modifying small files, since you don't have to keep seeking the disk head to make in-place changes.
* The filesystem doesn't need a lengthy fsck to recover from crash (although it's not "journaled" like other filesystems, effectively the whole filesystem *is* one big journal and that gives you similar properties)
* Because you're not repeatedly modifying the same bit of disk it could potentially perform better and cause less wear on an appropriately-chosen flash device (don't know how much it helps on an SSD that's doing its own block remapping / wear levelling...). One of the existing flash filesystems for Linux (JFFS2, I *think*) is log structured.

In the case of NILFS2 they've exploited the fact that inodes are rewritten when their contents are modified to give you historical snapshots that should be essentially "free" as part of the filesystem's normal operation. They have the filesystem frequently make automatic checkpoints of the entire filesystem's state. These will normally be deleted after a time but you have the option of making any of them permanent. Obviously if you just keep logging all changes to a disk it'll get filled up, so there's typically a garbage collector daemon of some kind that "repacks" old data, deletes stuff that's no longer needed, frees disk space and potentially optimises file layout. This is necessary for long term operation of a log structured filesystem, though not necessary if running read-only.

Another modern log structured FS is DragonflyBSD's HAMMER (http://www.dragonflybsd.org/hammer/), which is being ported to Linux as a SoC project, I think (http://hammerfs-ftw.blogspot.com/)
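The append-and-rewrite-the-inode scheme described above can be sketched as a toy in-memory log (all names here are mine; a real LFS of course works on disk blocks and segments, and runs a cleaner to reclaim space):

```python
class ToyLogFS:
    """Toy log-structured store: every update appends new data blocks plus
    a fresh 'inode' record to the log; the most recent inode for a name
    describes the current file contents. Old inodes remain in the log, so
    historical snapshots (NILFS2-style checkpoints) come for free."""

    def __init__(self):
        self.log = []                                   # append-only records

    def write(self, name, offset_block, data_blocks):
        # Start from the newest inode for this file, if any.
        blocks = dict(self._latest_inode(name) or {})
        for i, blk in enumerate(data_blocks):
            pos = len(self.log)
            self.log.append(("data", blk))              # new block in the log
            blocks[offset_block + i] = pos              # inode points at it
        self.log.append(("inode", name, blocks))        # rewritten inode

    def read(self, name):
        blocks = self._latest_inode(name)
        if blocks is None:
            raise FileNotFoundError(name)
        return [self.log[pos][1] for _, pos in sorted(blocks.items())]

    def read_at(self, name, log_len):
        """Read a historical snapshot: the state after the first log_len
        records, found by scanning only that prefix of the log."""
        for rec in reversed(self.log[:log_len]):
            if rec[0] == "inode" and rec[1] == name:
                return [self.log[pos][1] for _, pos in sorted(rec[2].items())]
        raise FileNotFoundError(name)

    def _latest_inode(self, name):
        # Scan backwards: the last inode written for this name wins.
        for rec in reversed(self.log):
            if rec[0] == "inode" and rec[1] == name:
                return rec[2]
        return None
```

Modifying one block of a file appends a single data record and a new inode; the unmodified blocks are simply re-pointed-to, and the old inode stays behind as a snapshot until a garbage collector repacks the log.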

Re:NILFS2 is pretty interesting (1)

jabuzz (182671) | more than 4 years ago | (#28536529)

This is all well and good, but how about having some real features

    * Robust bullet proof quota system
    * Directory quotas
    * Shrinkable online
    * Clusterable
    * DMAPI for HSM with a working implementation.
    * Storage pool migration so I can mix SATA and SAS/FC in the same file system and do something useful with it.
    * Ability to continue functioning when one or more disks is "gone" temporarily or permanently from the file system

Right back to IBM's GPFS it is then...

Re:NILFS2 is pretty interesting (0)

Anonymous Coward | more than 4 years ago | (#28536593)

You're free to use GPFS on your EeePC...
I'll be happy with fast SSD writes and none of the above features.

Dubious (4, Insightful)

grotgrot (451123) | more than 4 years ago | (#28531315)

I suspect their test methodology isn't very good, in particular for the SQLite tests. SQLite performance is largely determined by when commits happen, since at that point fsync is called at least twice and sometimes more (the database, the journal and the containing directory all need to be consistent). The disk has to rotate to the relevant point and write outstanding data to the platters before returning. This takes a considerable amount of time relative to normal disk writing, which is cached and write-behind. If you don't use the same partition for testing, then the differing number of sectors per physical track will affect performance. Similarly, a drive that lies about data being on the platters will seem faster, but is not safe should there be a power failure or similar abrupt stop.

Someone did file a ticket [sqlite.org] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible.
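To see how much commit frequency (and hence fsync) dominates SQLite timings, here is a small Python sketch (the helper name is mine) that inserts the same rows with different transaction sizes:

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(rows, per_txn):
    """Insert rows into a fresh SQLite database, committing every per_txn
    inserts; each COMMIT forces at least one fsync, so commit frequency
    tends to dominate the timing on a real disk."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (k INTEGER, v TEXT)")
    t0 = time.perf_counter()
    for i in range(rows):
        con.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 32))
        if (i + 1) % per_txn == 0:
            con.commit()
    con.commit()
    elapsed = time.perf_counter() - t0
    n = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    os.remove(path)
    return elapsed, n

if __name__ == "__main__":
    print("commit per row:   %.3fs" % timed_inserts(500, 1)[0])
    print("one big commit:   %.3fs" % timed_inserts(500, 500)[0])
```

On a drive that honours fsync, the per-row-commit run is typically orders of magnitude slower; a filesystem or drive that cheats on fsync will close that gap, which is exactly why cross-filesystem SQLite numbers are so easy to get wrong.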

Re:Dubious (1)

chrb (1083577) | more than 4 years ago | (#28532633)

Here's a post [slashdot.org] linking to some other posts discussing some problems with the Phoronix benchmarking methodology. The same issues seem to be pointed out every time they get a benchmark article published on Slashdot.

So what - speed is not all in a file system (1)

krischik (781389) | more than 4 years ago | (#28531501)

So what? When I was still using Linux, a working backup (incl. ACLs, xattrs, etc.) was the most important criterion, and XFS came out on top. xfsdump / xfsrestore have saved the day more than once.

Yet another content-free Phoronix fluff article (4, Informative)

Ant P. (974313) | more than 4 years ago | (#28531587)

Skip TFA - the conclusion is that these benchmarks are invalid.

At least they've improved since last time - they no longer benchmark filesystems using a Quake 3 timedemo.

Re:Yet another content-free Phoronix fluff article (2, Interesting)

lbbros (900904) | more than 4 years ago | (#28533577)

Not wanting to troll, just asking an honest question: why are they invalid? (No, I haven't RTFA)

Re:Yet another content-free Phoronix fluff article (1)

Ant P. (974313) | more than 4 years ago | (#28546319)

Using an outdated version of Btrfs with known performance issues, using different settings for ext3 and ext4. Those are the ones that stand out, but the people in their forums do a good job of ripping apart nearly every benchmark they do.

Performance should not be determinant! (1)

MilesNaismith (951682) | more than 4 years ago | (#28534783)

It doesn't matter how fast it is if it isn't correct! We as IT professionals should focus more on the CORRECTNESS of the terabytes of data we store than on how many IO/s we get, as long as it does the job we need. Ensuring correctness should be job #1. Right now, in production, safe for me means ZFS. When Linux delivers a comparably stable, tested filesystem I'll be all over it. Right now it still seems like the 1980's, where 99% of people are obsessed over how FAST they can make things. I cringe every time I watch an admin start "tuning" a filesystem to make it faster by flipping off sync and other safety features.

"Phoronix benchmark" is an oxymoron (0)

Anonymous Coward | more than 4 years ago | (#28535165)

Phoronix - conflation of "phoenix" and "moron". I.e., a moron that rises from the ashes, refusing to die.

I'm surprised the filesystem is tested at all (4, Insightful)

Otterley (29945) | more than 4 years ago | (#28536921)

Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.
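One way to see the buffer-cache effect from user space, assuming Linux (os.posix_fadvise is not exposed on every platform; function names are mine): time a write with and without fsync, and a read with and without first evicting the file from the page cache:

```python
import os
import tempfile
import time

def write_mb(path, mb, do_fsync):
    """Write mb megabytes; with do_fsync=False the data may land only in
    the page cache, so the timing says little about the disk itself."""
    buf = b"\0" * (1024 * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as fh:
        for _ in range(mb):
            fh.write(buf)
        if do_fsync:
            fh.flush()
            os.fsync(fh.fileno())   # force the dirty pages out to the device
    return time.perf_counter() - t0

def read_mb(path, drop_cache):
    """Read the file back; with drop_cache=True (Linux), evict it from the
    page cache first so the read has to touch the device."""
    with open(path, "rb") as fh:
        if drop_cache:
            os.posix_fadvise(fh.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
        t0 = time.perf_counter()
        while fh.read(1024 * 1024):
            pass
    return time.perf_counter() - t0

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "testfile")
    cold = hasattr(os, "posix_fadvise")
    print("write, cached: %.3fs" % write_mb(path, 64, do_fsync=False))
    print("write, fsync:  %.3fs" % write_mb(path, 64, do_fsync=True))
    print("read,  warm:   %.3fs" % read_mb(path, drop_cache=False))
    print("read,  cold:   %.3fs" % read_mb(path, drop_cache=cold))
```

With a 2GB working set on a 4GB machine, the "cached" numbers are what a defaults-only benchmark ends up measuring.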

Re:I'm surprised the filesystem is tested at all (2, Insightful)

wazoox (1129681) | more than 4 years ago | (#28541271)

Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.

Yup, obviously they're mounting all filesystems with default settings, which can clearly be misleading. Furthermore, testing on a single 250 GB SATA drive isn't that meaningful: what they're benchmarking is desktop performance, which for obviously server-oriented filesystems like XFS, Btrfs and NILFS2 simply doesn't make sense.

NILFS2 is great for write-heavy workloads (1)

Jacques Chester (151652) | more than 4 years ago | (#28538223)

At least according to some rough microbenchmarking I've done myself [luaforge.net]. My workload is to write raw CSV to disk as fast as possible. In testing, NILFS2 was nearly 20% faster than ext3 on a spinning disk.

It was also smoother. Under very heavy load ext3 seemingly batched up writes then flushed them all at once, causing my server process to drop from 99% to 70% utilisation. NILFS seemed to consume a roughly constant percentage of CPU the whole time, which is much more in line with what I want.

NILFS2 is not for everyone or for every purpose. But it suits my purpose. As usual, you should do the engineering thing: consider your needs, test the alternatives.

BTRFS or ZFS or .... (1)

bagsta (1562275) | more than 4 years ago | (#28539767)

As far as I can see from this comparison, Btrfs is a promising filesystem for Linux that is still under development. Some say it will be the ZFS of Linux, or even better; time will tell.
Others say [storagemojo.com] that now that Oracle owns Sun, Oracle could change the license of ZFS from CDDL [sun.com] to GPL2 [gnu.org] and port it to Linux. But porting ZFS to Linux is another story [sun.com]...