
Denial-of-Service Attack Found In Btrfs File-System

timothy posted about 2 years ago | from the at-that-range-a-hammer-works-too dept.

An anonymous reader writes "It's been found that the Btrfs file-system is vulnerable to a Hash-DOS attack, a denial-of-service attack caused by hash collisions within the file-system. Two DOS attack vectors were uncovered by Pascal Junod that he described as causing astonishing and unexpected success. It's hoped that the security vulnerability will be fixed for the next Linux kernel release." The article points out that these exploits require local access.
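A hash-DoS works by feeding a structure many keys that land in the same hash bucket, so operations that are normally O(1) degrade to linear scans. Here is a toy Python sketch of the principle — it truncates zlib's CRC-32 to a 12-bit bucket index for speed, and does not model btrfs's actual crc32c directory keys:

```python
import zlib
from collections import defaultdict

def bucket(name: bytes, bits: int = 12) -> int:
    # Toy bucket index: low bits of CRC-32, standing in for a filename hash.
    return zlib.crc32(name) & ((1 << bits) - 1)

def colliding_names(count: int) -> list:
    # Brute-force filenames whose truncated CRC all land in the victim's bucket.
    target = bucket(b"victim")
    names, i = [], 0
    while len(names) < count:
        cand = b"f%d" % i
        if bucket(cand) == target:
            names.append(cand)
        i += 1
    return names

table = defaultdict(list)           # one chained bucket per hash value
for n in colliding_names(32):
    table[bucket(n)].append(n)      # every insert lands in the same chain

# All 32 entries share one bucket, so operations on it degrade to linear scans.
print(len(table), max(len(chain) for chain in table.values()))
```

With a 12-bit bucket space a collision turns up roughly every 4096 tries, so an attacker with local access can precompute such names offline and then create them all in one directory.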

210 comments

Who ported btrfs to DOS? (4, Funny)

Nimey (114278) | about 2 years ago | (#42297559)

and should we give him a medal or lynch him?

Re:Who ported btrfs to DOS? (5, Funny)

macraig (621737) | about 2 years ago | (#42297721)

Do I have to choose? Can I hang a medal on him, and then hang him? I'll make the medal 20 pounds to speed up the lynching.

Re:Who ported btrfs to DOS? (0)

Anonymous Coward | about 2 years ago | (#42298253)

You mean Fallen Art [youtube.com] ?

Re:Who ported btrfs to DOS? (-1)

gVibe (997166) | about 2 years ago | (#42298553)

Dumb. I didn't even Smile A Little Then Stop. Because there are some who really don't know -- DOS means Denial of Service, usually from a single origin -- which is the closest relative of the DDoS, ala Distributed Denial of Service which stems from many different origins.

Re:Who ported btrfs to DOS? (-1)

Anonymous Coward | about 2 years ago | (#42298581)

whoosh, you dumb fuck

Re:Who ported btrfs to DOS? (-1, Troll)

gVibe (997166) | about 2 years ago | (#42298659)

anonymous COWARD

Re:Who ported btrfs to DOS? (-1)

Anonymous Coward | about 2 years ago | (#42298827)

Could you post your address? I'd love to send you a flaming bag of dog crap. And you can spend the time waiting for its arrival ranting about how many different definitions of crap there are and which one is probably the correct one. So it's win-win.

Re:Who ported btrfs to DOS? (0)

Anonymous Coward | about 2 years ago | (#42299035)

DoS would be the right acronym then.

Re:Who ported btrfs to DOS? (3, Informative)

maxwell demon (590494) | about 2 years ago | (#42299517)

DOS = Disk Operating System
DoS = Denial of Service

Can we get a real Linux filesystem, please? (5, Interesting)

Anonymous Coward | about 2 years ago | (#42297625)

btrfs is a step in the right direction, but even now, Linux does not have production-level deduplication (which even Windows has, for crying out loud), encryption, snapshots, or something even close to supplanting LVM2.

I just got out of a meeting at my job because we are replacing some old large servers... and because Linux has no stable filesystem with enterprise features, looks like things are either going to Windows, or perhaps Solaris x86 (which is expensive.)

This doesn't mean to suck Sun's teat for ZFS access... but at least try to come close to what even NTFS or even ReFS offers...

Re:Can we get a real Linux filesystem, please? (1)

Anonymous Coward | about 2 years ago | (#42297723)

What's Sun?

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42297741)

ZFS on Linux is probably more stable than btrfs is.

Re:Can we get a real Linux filesystem, please? (3, Informative)

grumbel (592662) | about 2 years ago | (#42298087)

I have seen the userlevel ZFS crash multiple times, it's also slow as hell. It's still worth it if you are short on storage and want to reduce the size of your backup, but I wouldn't exactly call it ready for production.

Re:Can we get a real Linux filesystem, please? (3, Informative)

dbIII (701233) | about 2 years ago | (#42298219)

The kernel-level port probably is ready, but not on 32-bit (big hassles there, though probably not a big deal to most), and on 64-bit there are some memory-usage problems, and performance seems to suck when a dozen or so hosts keep connections to files on ZFS open via NFS at the same time. There's still a way to go before ZFS on Linux gets to where it is on FreeBSD, but it's early days, and for many usage patterns it looks ready for production.

Re:Can we get a real Linux filesystem, please? (1)

Anonymous Coward | about 2 years ago | (#42298343)

I have seen the userlevel ZFS crash multiple times, it's also slow as hell. It's still worth it if you are short on storage and want to reduce the size of your backup, but I wouldn't exactly call it ready for production.

I think parent is talking about this, not the userlevel FUSE-based ZFS:
http://zfsonlinux.org/ [zfsonlinux.org]

Re:Can we get a real Linux filesystem, please? (5, Informative)

Anonymous Coward | about 2 years ago | (#42297855)

ZFS on FreeBSD or FreeNAS is great. Easily saturates gigE with a simple mirror of recent 7200rpm disks. It scales up from there, and FreeBSD is pretty rock solid.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42298455)

Yes, it's great. Still can't shrink pools. Still uses 5GB of ram per TB of disk for dedup at 64k blocksize. But it's great.
And just FYI, fucking FAT32 can saturate GbE with a *single* 7200rpm drive.

Re:Can we get a real Linux filesystem, please? (3, Informative)

LordLimecat (1103839) | about 2 years ago | (#42299005)

FAT32 is going to be faster than a LOT of filesystems precisely because it lacks features like dedup, any notion of real ACLs, and, oh, I don't know, data integrity. That's why if you want a really fast RAM disk, you don't use NTFS or ReFS, you use FAT16 or FAT32.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42299319)

RAM is cheap. Typically, shrinking pools is not required; I don't know of ANYONE who had a shrinking need for storage capacity.

Re:Can we get a real Linux filesystem, please? (4, Interesting)

Anonymous Coward | about 2 years ago | (#42297893)

btrfs is a step in the right direction, but even now, Linux does not have production-level deduplication (which even Windows has, for crying out loud), encryption, snapshots, or something even close to supplanting LVM2.

I just got out of a meeting at my job because we are replacing some old large servers... and because Linux has no stable filesystem with enterprise features, looks like things are either going to Windows, or perhaps Solaris x86 (which is expensive.)

This doesn't mean to suck Sun's teat for ZFS access... but at least try to come close to what even NTFS or even ReFS offers...

Hear hear! Backup admin here. Before the unwashed masses of armchair Linux admins show up, I just want to add that one example of an enterprise filesystem feature is the NTFS change journal: it makes the filesystem scan that is part of an incremental backup run in constant time.

It's sad that on other systems with large numbers of files you have to schedule subdirectories for different times of day just to deal with the scanning overhead.

Re:Can we get a real Linux filesystem, please? (5, Informative)

Tough Love (215404) | about 2 years ago | (#42297977)

NTFS doesn't have snapshots. Instead it relies on volume shadow copies, with known severe performance artifacts caused by needing to move snapshotted data out of the way when new writes come in. Btrfs, like ZFS and Netapp's WAFL, uses a far more efficient copy-on-write strategy that avoids the write penalty. The takeaway: I would not go so far as to claim Microsoft has an enterprise-worthy solution either. If you want something with industrial-strength dedup, snapshots and fault tolerance, you won't be getting it from Microsoft.

Re:Can we get a real Linux filesystem, please? (4, Insightful)

jamesh (87723) | about 2 years ago | (#42298631)

NTFS doesn't have snapshots. Instead it relies on volume shadow copies, with known severe performance artifacts caused by needing to move snapshotted data out of the way when new writes come in. Btrfs, like ZFS and Netapp's WAFL, uses a far more efficient copy-on-write strategy that avoids the write penalty. The takeaway: I would not go so far as to claim Microsoft has an enterprise-worthy solution either. If you want something with industrial-strength dedup, snapshots and fault tolerance, you won't be getting it from Microsoft.

What nonsense. VSS is the snapshot solution for NTFS, and of course it uses copy-on-write. Microsoft's VSS backup architecture is years ahead of Linux's. LVM is kind of cool, but if you have a single database spread across multiple LVs then you can't snapshot them all as an atomic operation, so it becomes useless. MS VSS does this, and always has.

I'm normally a Linux fanboi, but when you spout rubbish like this I have no hesitation in correcting you.

Re:Can we get a real Linux filesystem, please? (3, Informative)

Anonymous Coward | about 2 years ago | (#42298839)

Tried to find some more information on this. First discovery: VSS stands for "Volume Shadow copy Service", not "Visual SourceSafe", as was my first association. :)

AFAICT he's saying pretty much what Microsoft is saying [microsoft.com] :

When a change to the original volume occurs, but before it is written to disk, the block about to be modified is read and then written to a "differences area", which preserves a copy of the data block before it is overwritten with the change. Using the blocks in the differences area and unchanged blocks in the original volume, a shadow copy can be logically constructed that represents the shadow copy at the point in time in which it was created.

The disadvantage is that in order to fully restore the data, the original data must still be available. Without the original data, the shadow copy is incomplete and cannot be used. Another disadvantage is that the performance of copy-on-write implementations can affect the performance of the original volume.

Do you have a newer reference?

Re:Can we get a real Linux filesystem, please? (1)

Anonymous Coward | about 2 years ago | (#42299249)

This is a tricky issue. If you keep all old files in their original sectors and write changes in new places, your files get fragmented to hell. Only your original snapshot is contiguous, while your current data is scattered about your disk. This may work fine if you have dozens of spindles making up your volume, or for an SSD, but it's not going to work for a regular HDD.

What you'll end up with is fast write performance and horrible read performance. Since most files are read far more often than they're written, it's generally better to make the current data contiguous and the rarely-used snapshots fragmented.

Of course, it's probably best to write the new data where it's convenient and later on do some defragmentation to put the data where it's fastest to read.

Re:Can we get a real Linux filesystem, please? (2)

Tough Love (215404) | about 2 years ago | (#42299519)

If you keep all old files in their original sectors and write changes in new places, your files get fragmented to hell.

Microsoft's "shadow copy" doesn't work at the file level, it works at the block level, so it doesn't know anything about files. Btrfs and its ilk try to leave some empty space distributed across the volume, so copy-on-write can leave the copies in fairly reasonable places. After the copy is committed, the original space can be freed, so the next update won't mess things up too badly either. Snapshots mess this up because the original space doesn't get freed. But then, snapshots are always messed up, there is no such thing as a perfect snapshot strategy with respect to disk seeking. Incidentally, with flash you don't care about that any more, there is no seek time.

Anyway, yes, with a crappy copy-on-write (like Netapp's) you get horrible read fragmentation. With an intelligent implementation, it isn't so bad. Note that Btrfs is turning in good benchmarks, including read performance in mixed read/write loads.

Re:Can we get a real Linux filesystem, please? (5, Informative)

Tough Love (215404) | about 2 years ago | (#42299481)

VSS is the snapshot solution for NTFS, and of course it uses copy-on-write

Well. Maybe you better sit down in a comfortable chair and think about this a bit. From Microsoft's site: When a change to the original volume occurs, but before it is written to disk, the block about to be modified is read and then written to a “differences area”, which preserves a copy of the data block before it is overwritten with the change. [microsoft.com]

Think about what this means. It is not a "copy-on-write", it is a "copy-before-write". Gross abuse of terminology if anybody tries to call it a "copy-on-write", which has the very specific meaning [wikipedia.org] of "don't modify the destination data". Instead, copy it, then modify the copy. OK, are we clear? VSS does not do copy-on-write, it does copy-before-write.

Now let's think about the implications of that. First, the write needs to be blocked until the copy-before-write completes, otherwise the copied data is not guaranteed to be on stable storage. The copy-before-write needs to read the data from its original position, write it to some save area, then update some metadata to remember which data was saved where. How many disk seeks is that, if it's a spinning disk? If the save area is on the same spinning disk? If it's flash, how much write multiplication is that? When all of that is finally done, the original write can be unblocked and allowed to proceed. In total, how much slower is that than a simple, linear write? If you said "on the order of an order of magnitude" you would be in the ballpark. In fact, it can get way worse than that if you are unlucky. In the best imaginable case, your write performance is going to take a hit by a factor of three. Usually, much, much worse.

OK, did we get this straight? As a final exercise, see if you can figure out who was talking nonsense.
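The I/O accounting above can be put in toy numbers. A Python sketch — the four-I/O breakdown mirrors the steps listed in the comment; real systems batch and coalesce, so treat the ratio as illustrative, not a measurement:

```python
def copy_before_write(nblocks: int) -> int:
    # VSS-style copy-before-write: each overwrite must first read the old
    # block, save it to the differences area, and record where it went,
    # before the application's own write may proceed.
    ios = 0
    for _ in range(nblocks):
        ios += 1   # read the block about to be overwritten
        ios += 1   # write it to the save ("differences") area
        ios += 1   # metadata update: which block was saved where
        ios += 1   # finally, the original write
    return ios

def redirect_on_write(nblocks: int) -> int:
    # COW/redirect-on-write: write new data to fresh space and flip
    # pointers; the old block stays put and *is* the snapshot copy.
    ios = sum(1 for _ in range(nblocks))   # one write per block
    return ios + 1                         # amortized pointer commit

print(copy_before_write(100), redirect_on_write(100))   # 400 101
```

Under this naive accounting, snapshotted writes cost four I/Os apiece instead of roughly one, which is the "factor of three... usually much worse" claim in rough numeric form.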

Re:Can we get a real Linux filesystem, please? (4, Insightful)

jamesh (87723) | about 2 years ago | (#42299661)

VSS is the snapshot solution for NTFS, and of course it uses copy-on-write

Well. Maybe you better sit down in a comfortable chair and think about this a bit. From Microsoft's site: When a change to the original volume occurs, but before it is written to disk, the block about to be modified is read and then written to a “differences area”, which preserves a copy of the data block before it is overwritten with the change. [microsoft.com]

Think about what this means. It is not a "copy-on-write", it is a "copy-before-write". Gross abuse of terminology if anybody tries to call it a "copy-on-write", which has the very specific meaning [wikipedia.org] of "don't modify the destination data". Instead, copy it, then modify the copy. OK, are we clear? VSS does not do copy-on-write, it does copy-before-write.

Now let's think about the implications of that. First, the write needs to be blocked until the copy-before-write completes, otherwise the copied data is not guaranteed to be on stable storage. The copy-before-write needs to read the data from its original position, write it to some save area, then update some metadata to remember which data was saved where. How many disk seeks is that, if it's a spinning disk? If the save area is on the same spinning disk? If it's flash, how much write multiplication is that? When all of that is finally done, the original write can be unblocked and allowed to proceed. In total, how much slower is that than a simple, linear write? If you said "on the order of an order of magnitude" you would be in the ballpark. In fact, it can get way worse than that if you are unlucky. In the best imaginable case, your write performance is going to take a hit by a factor of three. Usually, much, much worse.

OK, did we get this straight? As a final exercise, see if you can figure out who was talking nonsense.

I concede that the terminology used by the MS article is misused. I don't think you're thinking the performance issues through though. You start with a file nicely laid out linearly on disk, and you take a snapshot so you can make a backup. Now you make a modification to the middle of the file and what happens? Suddenly the middle of the file is elsewhere on disk, and in the case of LVM this is invisible to the filesystem so no amount of defragging is going to fix it. This situation persists long after you have taken your backup and thrown the snapshot away. Of course this doesn't matter for flash but we're not all there yet. If BTRFS does snapshots using copy-on-write (correct definition) then this will be a problem too, although if BTRFS is smart enough it should be able to repair the situation once the snapshot is discarded.

VSS's way leaves the original data in-order on the storage medium. The difference area is likely on a completely different disk anyway so the copy-on-write (MS definition) could not be performed any other way.

Re:Can we get a real Linux filesystem, please? (1)

Tough Love (215404) | about 2 years ago | (#42299545)

LVM is kind of cool but if you have a single database spread across multiple LV's then you can't snapshot them all as an atomic operation so it becomes useless.

You're also wrong about that. You can concatenate multiple logical volumes as a single logical volume and snapshot that atomically.

Re:Can we get a real Linux filesystem, please? (1)

jamesh (87723) | about 2 years ago | (#42299681)

LVM is kind of cool but if you have a single database spread across multiple LV's then you can't snapshot them all as an atomic operation so it becomes useless.

You're also wrong about that. You can concatenate multiple logical volumes as a single logical volume and snapshot that atomically.

OK, this is news to me. When I last asked about that it couldn't be done, but that was a few years ago. Google doesn't tell me how I can concatenate (say) my database LV and my logs LV (separate VGs because separate spindles), snapshot them, then un-concatenate them... a link would be appreciated.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42298967)

NTFS doesn't have snapshots. Instead it relies on volume shadow copies, with known severe performance artifacts caused by needing to move snapshotted data out of the way when new writes come in. Btrfs, like ZFS and Netapp's WAFL, uses a far more efficient copy-on-write strategy that avoids the write penalty. The takeaway: I would not go so far as to claim Microsoft has an enterprise-worthy solution either. If you want something with industrial-strength dedup, snapshots and fault tolerance, you won't be getting it from Microsoft.

What are you replying to, what does this have to do with change journals or backups? VSS does use COW, WTF is this...

Re:Can we get a real Linux filesystem, please? (1)

belrick (31159) | about 2 years ago | (#42299387)

Btrfs, like ZFS and Netapp's WAFL, use a far more efficient copy-on-write strategy that avoids the write penalty.

WAFL doesn't do copy-on-write. Copy-on-write means a write to a block in a file requires the original block to be read, written elsewhere for the snapshot, then the new block written in the original location. That's exactly what WAFL doesn't do. WAFL writes all changed blocks for multiple files in big RAID stripes, updating pointers to current copies and leaving snapshot pointers pointing to old copies of the updated files. Very efficient for writes, but changes almost all reads, random or sequential (within a file) into random reads (within the filesystem) because file blocks get scattered according to write order, not location of the block within the file. That's why they want lots of spindles in an aggregate and they love RAM cache and flash cache.

But since you say that copy-on-write avoids the write penalty, I think you know what it does but simply don't know that it isn't copy-on-write.
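The read-fragmentation effect described here can be simulated: relocate randomly updated blocks to the head of a write log and count how many "seeks" a sequential read of the file then needs. This is a pure toy model of write-anywhere placement; the block and update counts are arbitrary:

```python
import random

def seeks(layout):
    # Count address discontinuities when reading blocks in file order;
    # each one would be a head seek on a spinning disk.
    return sum(1 for a, b in zip(layout, layout[1:]) if b != a + 1)

FILE_BLOCKS, UPDATES = 100, 200
inplace = list(range(FILE_BLOCKS))      # update-in-place: file stays contiguous

random.seed(0)
write_anywhere = list(range(FILE_BLOCKS))
log_head = FILE_BLOCKS
for _ in range(UPDATES):
    i = random.randrange(FILE_BLOCKS)   # overwrite a random file block...
    write_anywhere[i] = log_head        # ...but place it at the log head
    log_head += 1                       # blocks now sit in write order

print(seeks(inplace), seeks(write_anywhere))
```

The in-place layout reads with zero seeks, while the write-anywhere layout scatters nearly every block: sequential reads within the file become random reads within the volume, which is why lots of spindles and big caches help.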

Re:Can we get a real Linux filesystem, please? (1)

Tough Love (215404) | about 2 years ago | (#42299573)

Btrfs, like ZFS and Netapp's WAFL, use a far more efficient copy-on-write strategy that avoids the write penalty.

WAFL doesn't do copy-on-write. Copy-on-write means a write to a block in a file requires the original block to be read, written elsewhere for the snapshot, then the new block written in the original location. That's exactly what WAFL doesn't do. WAFL writes all changed blocks for multiple files in big RAID stripes, updating pointers to current copies and leaving snapshot pointers pointing to old copies of the updated files. Very efficient for writes, but changes almost all reads, random or sequential (within a file) into random reads (within the filesystem) because file blocks get scattered according to write order, not location of the block within the file. That's why they want lots of spindles in an aggregate and they love RAM cache and flash cache.

But since you say that copy-on-write avoids the write penalty, I think you know what it does but simply don't know that it isn't copy-on-write.

We both know what we're talking about, we just disagree on terminology. Properly, a "copy-on-write" doesn't modify the original destination. [wikipedia.org] Nobody should ever use the term "copy-on-write" to describe the algorithm that is properly "copy-before-write". The strategy that leaves the original destination untouched and updates pointers to point at the modified copy is correctly called "copy-on-write", but because the terminology has been so commonly abused by the likes of Microsoft and their followers, it is better to be clear and call that "redirect-on-write".

Finally, Netapp gets massive read fragmentation because they suck, not because it can't be avoided.

Re:Can we get a real Linux filesystem, please? (0)

myxiplx (906307) | about 2 years ago | (#42299607)

Unless you go for ReFS, which is Microsoft's new file system available in Server 2012. It's still new, but it looks to have all the best features of NTFS, ZFS and Btrfs rolled into one.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42297933)

... and because Linux has no stable filesystem with enterprise features, looks like things are either going to Windows, or perhaps Solaris x86 (which is expensive.)

What kind of enterprise features would you need from a file system?

Re:Can we get a real Linux filesystem, please? (2)

smash (1351) | about 2 years ago | (#42297993)

Data integrity for one?

Re:Can we get a real Linux filesystem, please? (1)

Anonymous Coward | about 2 years ago | (#42298209)

That's what you should expect from your storage array. And that is what you get with real storage arrays.

A filesystem level approach to the problem can only be a bandaid, at best part of a larger solution.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42298943)

If your file system doesn't preserve data integrity, nothing in the storage layer can rectify that. I think you're actually talking about error resilience, and I'll disagree that handling storage errors is a "bandaid". Even assuming it were feasible and cost-effective to put indefectible storage on important machines, the most common case is always going to be commodity disks.

Re:Can we get a real Linux filesystem, please? (1)

smash (1351) | about 2 years ago | (#42299309)

1. No storage array does it properly. 2. You can BUILD a ZFS storage array with dedup, compression, self-healing, etc. for cheaper than you can buy a NetApp or EMC. A filesystem approach is the only way to ensure end-to-end data integrity, correcting transmission errors between the host and the storage, etc.

Dedupe doesn't belong in a filesystem (-1)

Anonymous Coward | about 2 years ago | (#42298057)

If I store two copies, I want two copies. I don't want the filesystem thinking it knows better than me. "Enterprise features" is just a marketing phrase: dream something up, pretend it's a MUST HAVE to MBAs in corps, sell it to them, and then I have to turn that feature off as unwanted before rolling out the next system.

What I need from a filesystem is NO MORE FRIKKING GIMMICKS.

Re:Can we get a real Linux filesystem, please? (2, Interesting)

Anonymous Coward | about 2 years ago | (#42298097)

Wouldn't it be cheaper and just as effective to use FreeBSD or FreeNAS for your data? If you're considering either Windows or Solaris, then obviously you don't need a specific operating system. I would think FreeBSD (or even ZFS on Linux) would suit your purposes better (and with less expense) than Windows or Solaris.

Re:Can we get a real Linux filesystem, please? (5, Informative)

maz2331 (1104901) | about 2 years ago | (#42298139)

ZFS on Linux does exist as a kernel module that is pretty stable and works well. http://zfsonlinux.org/ [zfsonlinux.org] -- it was put out by Lawrence Livermore National Lab, but can't be included with kernel distros due to GPL / CDDL license compatibility issues.

Re:Can we get a real Linux filesystem, please? (2, Informative)

Anonymous Coward | about 2 years ago | (#42298223)

Linux has production level encryption, snapshots, and LVM2. What are you talking about?

Unless you have very specific uses, deduplication should be done at your storage array really. It's not a high priority to implement in the filesystem. (No, your anecdote does not make it a high priority).

Re:Can we get a real Linux filesystem, please? (-1, Flamebait)

blade8086 (183911) | about 2 years ago | (#42298611)

I think he was talking about how he is a clueless moron.

Re:Can we get a real Linux filesystem, please? (2)

WWJohnBrowningDo (2792397) | about 2 years ago | (#42298357)

Did you guys look at FreeBSD?

Re:Can we get a real Linux filesystem, please? (1)

Marxdot (2699183) | about 2 years ago | (#42298555)

Why should deduplication and snapshots (and even encryption, I suppose) be done by filesystems themselves? Why require a repetition of effort in implementing every filesystem? Also, ZFS is an insane thing written by people who don't seem to understand that keeping a good separation of concerns can lead to a rather slick set of general tools that can be used on almost any fs.

Oh, right, 'enterprise features'. That certainly sets the alarm bells ringing.

Re:Can we get a real Linux filesystem, please? (2)

Agent ME (1411269) | about 2 years ago | (#42298821)

If snapshots are handled by the filesystem, then it becomes possible to snapshot a specific directory or file rather than a whole partition, for example. Filesystem-level snapshots also avoid unnecessarily remembering changes to space that was free when the snapshot was taken.

Re:Can we get a real Linux filesystem, please? (1)

blade8086 (183911) | about 2 years ago | (#42298601)

LVM has snapshots and DM has encryption.

And since when is deduplication a 'critical' enterprise feature?

E.g., who else has it other than ZFS in the Unix world without an expensive add-on product?

(Other than DragonFlyBSD's HAMMER, which corporate weenies are unfortunately too timid to deploy.)

Maybe it's critical for your application, but that doesn't mean Linux is lagging far behind.

Re:Can we get a real Linux filesystem, please? (-1)

Anonymous Coward | about 2 years ago | (#42298851)

Most of these posts are from people who have no idea what they're talking about. They must have some kind of anti-Linux affiliation, I would think.

Because, by some strange coincidence, all the features ZFS has that they believed Linux lacked were "the most important ones any enterprise could possibly need" (even though many of these features did, in fact, already exist in Linux -- e.g., snapshots and volume management). When btrfs gained these features (even though Linux already had them long before btrfs), these idiots would suddenly forget about those and move on to some other thing "which is the most important thing for any enterprise".

Honestly, all they're doing is going to Wikipedia, looking at the feature-set comparison between ZFS or NTFS, and comparing with btrfs.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42299577)

Bullshit, I use all three weekly.

I like ZFS, it's not a religion.
NTFS has tons of features which no one uses :)
Btrfs was fine, except for some bug with KVM.

No (2, Interesting)

ArchieBunker (132337) | about 2 years ago | (#42298769)

Instead of picking a filesystem and moving forward, people will moan and cry and eventually split into a few different groups with beta-level implementations. Sound on Linux is a great example: two completely different sound drivers that both work half-assed. What's the word on XFS these days?

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42299171)

There are dozens of stable Linux filesystems. Pick one. Millions of people use them. Most of what you call the internet uses them. 99.99% of the world's supercomputers use them. "Oh, it doesn't blah." No, sparky, that's a lie. Linux doesn't keep redundant data in the first place. No good deduplication software? Don't put duplicate data on the system in the first place! Clearly there are Windows shills around not wanting the best, instead wanting the Mickeysoft crap (once again). Lie to yourself; don't push your craphead lies onto us.

Re:Can we get a real Linux filesystem, please? (1)

guruevi (827432) | about 2 years ago | (#42299277)

Solaris and its derivatives can be had for free. You don't HAVE to buy it, and derivatives like OpenIndiana are very stable.

Re:Can we get a real Linux filesystem, please? (0)

Anonymous Coward | about 2 years ago | (#42299395)

Oh, because deduplication is now the new black?
So one error on the disk can corrupt more than one file?
Cool...

CRC (2)

RedHackTea (2779623) | about 2 years ago | (#42297627)

My knowledge of file-systems is minimal. But since it's a CRC [wikipedia.org] attack, can you just turn off the ability of Btrfs to check errors (if that's possible)? However, I'm sure data corruption would then ensue.

Anyway, I'm glad I always use ext4/3. I thought about trying ZFS [wikipedia.org] at one point, but decided that using Solaris as a non-server OS is pointless. Does anyone still use Solaris?

Re:CRC (1)

Mike Domanski (1700470) | about 2 years ago | (#42297739)

From the linked blog:

Directories are indexed in two different ways. For filename lookup, there is an index comprised of keys:

Directory Objectid | BTRFS_DIR_ITEM_KEY | 64 bit filename hash

The default directory hash used is crc32c, although other hashes may be added later on. A flags field in the super block will indicate which hash is used for a given FS.

Sounds like btrfs uses a CRC as a hash. I assume it's a performance optimization, but using CRC as a hash is insane.
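Part of what makes CRC a poor directory hash is its exploitable algebraic structure: CRC is affine over XOR, so for equal-length inputs crc(a XOR b XOR c) = crc(a) XOR crc(b) XOR crc(c), which lets an attacker construct colliding names algebraically instead of searching for them. A quick check of the identity using zlib's standard CRC-32 (crc32c, which btrfs uses, shares the property; only the polynomial differs):

```python
import zlib

def xor(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# CRC-32 is affine over XOR: for equal-length inputs, the CRC of the
# XOR of three messages equals the XOR of their three CRCs.
a, b, c = b"filename", b"FILEname", b"abcdwxyz"
lhs = zlib.crc32(xor(xor(a, b), c))
rhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
print(lhs == rhs)   # True
```

Given any two names with a known CRC relationship, this identity lets you manufacture further inputs whose CRCs you control, which is exactly the property a collision-resistant hash must not have.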

Re:CRC (0)

Nimey (114278) | about 2 years ago | (#42297779)

It is insane, yes. md5 shouldn't be /that/ computationally intensive, and even though it's not secure enough for cryptography anymore it should still be good enough for this.

Re:CRC (1)

Anonymous Coward | about 2 years ago | (#42297965)

For short messages like filenames, MD5 takes 70 times as long to compute as CRC... And since the published attacks on MD5 let you create collisions pretty cheaply, you could still do the same attack.

If anything, you'd use a construct like SipHash, but SipHash requires a secret key, and a 64-bit output isn't really collision resistant anyway.
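For what it's worth, the keyed-hash idea looks like this in practice. SipHash isn't in the Python standard library, so this sketch uses blake2b's keyed mode to illustrate the same principle: without the secret key, an attacker can't precompute colliding names offline. The key-at-mkfs-time detail is an assumption about how a filesystem might deploy it, not anything btrfs does:

```python
import hashlib
import os

# Hypothetical per-filesystem secret; a real FS might pick this once
# (e.g. at mkfs time) and store it in the superblock.
SECRET_KEY = os.urandom(16)

def keyed_name_hash(name: str) -> int:
    """64-bit keyed hash of a filename (blake2b standing in for SipHash)."""
    digest = hashlib.blake2b(name.encode(), key=SECRET_KEY,
                             digest_size=8).digest()
    return int.from_bytes(digest, "little")
```

The same name always hashes the same way under one key, but the mapping differs per filesystem, so a colliding set built offline is useless.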

Re:CRC (1)

Tough Love (215404) | about 2 years ago | (#42298107)

a 64-bit output isn't really collision resistant anyway

Plenty good enough for a hashed directory key, which doesn't need to be cryptographically secure, just to have good distribution and random results affected as much as possible by all input bits. The size of the output is not the dominant factor; the quality of the input mixing is.

Re:CRC (2)

maxwell demon (590494) | about 2 years ago | (#42299593)

Or just use an RB tree instead of a linear list for hash collisions; then you get only O(log n) instead of O(n) worst-case search performance.

To quote Wikipedia:

Instead of a list, one can use any other data structure that supports the required operations. For example, by using a self-balancing tree, the theoretical worst-case time of common hash table operations (insertion, deletion, lookup) can be brought down to O(log n) rather than O(n). However, this approach is only worth the trouble and extra memory cost if [...] one must guard against many entries hashed to the same slot (e.g.[...] in the case of web sites or other publicly accessible services, which are vulnerable to malicious key distributions in requests).

While a file system is not generally publicly available (actually it may be, if e.g. used on an FTP server), it is still shared.
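The blow-up the quoted paragraph warns about is easy to see by counting comparisons. This is not btrfs code, just the textbook failure mode: if an attacker lands n keys in one bucket, a single worst-case lookup in a list-chained bucket costs n comparisons, where a balanced-tree bucket would cost about log2(n):

```python
def chain_lookup_cost(chain, probe):
    """Count equality comparisons a linear-chain lookup performs."""
    comparisons = 0
    for item in chain:
        comparisons += 1
        if item == probe:
            return comparisons
    return comparisons

# One pathological bucket, as a collision attack would produce:
chain = ["colliding_%d" % i for i in range(1000)]
assert chain_lookup_cost(chain, chain[-1]) == 1000  # O(n) worst case
```

Inserting n such keys then costs 1 + 2 + ... + n comparisons in total, which is the quadratic behaviour the attack exploits.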

Re:CRC (0)

Anonymous Coward | about 2 years ago | (#42297849)

Does anyone still use Solaris?

That's a brand of cooking oil, right?

Re:CRC (1)

maxwell demon (590494) | about 2 years ago | (#42299599)

Does anyone still use Solaris?

That's a brand of cooking oil, right?

No, it's a novel by Stanisław Lem.

Re:CRC (0)

Anonymous Coward | about 2 years ago | (#42297975)

My knowledge of file-systems is minimial. But since it's a CRC [wikipedia.org] attack, can you just turn off the ability of Btrfs to check errors (if that's possible)? However, I'm sure data corruption would then ensue.

Anyway, I'm glad I always use ext4/3. I thought about trying ZFS [wikipedia.org] at one point, but decided that using Solaris as a non-server OS is pointless. Does anyone still use Solaris?

Using Solaris 11 in a non-server role is not any worse an experience than, say, a Linux desktop from a couple of years ago, or Debian... :P

Operating systems should not be a popularity contest. Solaris isn't a perfect OS, but there is a lot worth learning from it, like "stable interface" doesn't have to mean "principal developer graduated from college and is no longer maintaining it."

Re:CRC (0)

Anonymous Coward | about 2 years ago | (#42298163)

In a number of companies, they are scaling from old school iron (SPARC, POWER) to dense, x86 blades.

Netbackup is a good example of this. If I use its media server deduplication, it requires one filesystem for the disk pool. So, being able to move disks in order to make that one filesystem as useful/big as possible is important. I did this under RedHat, but then found out that they required additional money for an officially supported XFS extension (yes, I could load in CentOS modules, but then I would have an unsupported production cluster.) In this case, Solaris and ZFS came in extremely handy.

Of course, Windows Server 2012 and its disk pools are very useful, but Netbackup has not blessed that yet.

I would also say, that even though I have multiple certs with Linux, it does not have anywhere near the production features of Solaris or AIX.

For example, one LPAR I had was meant to be completely locked down due to handling incoming log traffic. With a few trustchk invocations, any attempt to replace binaries with unsigned ones will result in errors, and with any type of HIDS in place, will be detected as soon as someone attempts to execute one of the binaries. Heck, one can even lock down root so anything running as UID 0 just runs as a plain old user... and any system changes have to be done by shutting the LPAR down, booting from another root volume group, and doing them there.

Requires local access (5, Funny)

Anonymous Coward | about 2 years ago | (#42297733)

no more dangerous than a fork bomb or filling up /tmp or trying to compile OpenOffice.

Re:Requires local access (5, Informative)

cryptizard (2629853) | about 2 years ago | (#42298081)

Sort of, but at least you can recover from those attacks by restarting or booting from an external source to clean up your filesystem. The second attack here leaves you with undeletable files because the file system code responsible for deleting cannot handle the multiple hash collisions. There is no way to recover from that until a patch is pushed out that fixes the problem.

Re:Requires local access (2)

blade8086 (183911) | about 2 years ago | (#42298583)

Which, without the oversensationalized BS that is this story, will probably happen in about a week, tops.

And since Btrfs is not in any 'enterprise' Linux distributions, the fix will pretty much be available immediately, since everyone running it in critical production environments will probably be running pretty bleeding-edge linuxen.

Re:Requires local access (1)

Anonymous Coward | about 2 years ago | (#42299021)

Requires local access

Well, it requires the ability to create named files. That could happen through a Wiki upload page, by extraction of an archive to a temporary folder for processing, etc.
And unlike filling up /tmp, this will not be stopped by setting a quota.

Nice! (3, Interesting)

gweihir (88907) | about 2 years ago | (#42297971)

"Algorithmic Complexity Attacks" like this one have long been known, but rarely been documented publicly. A good example of why hash randomization is a good idea!

Re:Nice! (0)

blade8086 (183911) | about 2 years ago | (#42298627)

Yah dude, you're so totally spot on - no one [nist.gov] at all documents this!

Re:Nice! (3, Funny)

Anonymous Coward | about 2 years ago | (#42298757)

Words, they mean nothing! Take 'rarely', for example: who gives a shit, I'll read it as 'never', same thing.

This just in... (0)

Anonymous Coward | about 2 years ago | (#42298001)

Letting an asshole have write access to your filesystem can lead to a fucked up filesystem.

Re:This just in... (1)

FranTaylor (164577) | about 2 years ago | (#42298749)

Are you saying that google's file systems are corrupt?

Nice this was found before BTRFS goes stable (5, Insightful)

Anonymous Coward | about 2 years ago | (#42298029)

Hopefully more people start fuzzing btrfs so it is that much better when it is declared stable.

Re:Nice this was found before BTRFS goes stable (0)

Anonymous Coward | about 2 years ago | (#42298759)

...It's shipping with OpenSUSE right now...as the default.

Who cares? (1)

UltraZelda64 (2309504) | about 2 years ago | (#42298221)

Unstable software that is still under heavy development is actually unstable. Who would've guessed?
I think that based on this ingenious discovery, we should all switch over to it by next week.

Fix the title (0)

Anonymous Coward | about 2 years ago | (#42298243)

So the FS is vulnerable to an attack. The attack is not in the FS. That's pretty misleading.

Can I install btrfs on windows? (0)

Anonymous Coward | about 2 years ago | (#42298473)

I mean, just to keep up. You know, with that totally great POS you guys think you know how to use. Luser=Linux user

Re:Can I install btrfs on windows? (1)

Anonymous Coward | about 2 years ago | (#42298605)

Yeah, I'll send you the installer. What's your e-mail address?

Re:Can I install btrfs on windows? (0)

FranTaylor (164577) | about 2 years ago | (#42298753)

How's that windows 8 UI experience coming along?

Good god man (2)

tomp (4013) | about 2 years ago | (#42298521)

"Denial-of-Service Attack Found In Btrfs File-System" didn't happen. A vulnerability was found. That's a big deal, no reason to obscure it.

Re:Good god man (1)

blade8086 (183911) | about 2 years ago | (#42298649)

No, actually, this is NEITHER a DOS attack nor a vulnerability. It is a *bug*.

But oh so much better to douchebag-promote yourself by being the super turducken 31337 hax0r sekuritah expert, by mislabeling it and having it get picked up by the tech press.

Attack? (2)

Decameron81 (628548) | about 2 years ago | (#42298681)

An attack was found in the filesystem? What's that supposed to mean?

Re:Attack? (1)

dr2chase (653338) | about 2 years ago | (#42299009)

Carefully chosen file names (a lot of them) can DOS file system performance. Whether this could be escalated to a network vulnerability is hard to say -- if an attacker over the net can figure out a way to induce particular file names on the server, that would be worse.

It's a little sad that people are still forgetting about this failure mode of hash tables and hash functions; either there's got to be a randomizing secret swizzled in, or a better (more nearly cryptographically strong) hash function, or both.

Re:Attack? (0)

Anonymous Coward | about 2 years ago | (#42299077)

...or you could use trees, which don't have these problems at all.

Re:Attack? (1)

dr2chase (653338) | about 2 years ago | (#42299121)

True, but good random numbers (good hashes) have interesting and powerful statistical properties.

Re:Attack? (0)

Anonymous Coward | about 2 years ago | (#42299461)

mmm. Well, it seems to me -- and I'm just your average programmer -- that trees have only one downside, and that is, they cost a little more space. But in these days of ridiculously easy to obtain memory, I just don't think that matters for most problems.

Whereas hashes are a hack where the primary advantage is that they can map a large, but sparsely populated, problem space into a small one, until they can't, and then they begin to suck, and they may generate huge quantities of suck, depending on just how the small end of the map is being insulted.

I don't see (but feel free to enlighten me, I mean it when I say I'm just an average programmer) how hashes having interesting and powerful statistical properties makes amends for the fact that they can suck really bad, when compared against a technique (trees) that... doesn't suck. :)

Re:Attack? (1)

dr2chase (653338) | about 2 years ago | (#42299543)

Read about universal hash functions (the writeup on wikipedia is not that bad). They're not a hack.

You don't necessarily use a small space, either -- a 64-bit hash is not normally regarded as a small space, though it is often smaller than the bit size of what is hashed into it.

Two problems with trees are that you need to define a comparison (you can often concoct one, but they're not always given to you) and though memory is cheap, *probes* into memory are not. If a hash function can get you there in 1 step with high probability, that's interesting.
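The universal-hash construction dr2chase mentions is short enough to show. This sketch is the classic Carter–Wegman family h(x) = ((a·x + b) mod p) mod m with a and b drawn at random per table; any two fixed distinct integer keys below p collide with probability about 1/m over that random draw, so an attacker who doesn't know (a, b) can't pre-bake a colliding set (a filename would first be mapped to an integer, a detail omitted here):

```python
import random

P = (1 << 61) - 1  # Mersenne prime, larger than any key we will hash

def make_universal_hash(nbuckets: int):
    """Draw one member of the Carter-Wegman family ((a*x + b) % P) % m."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % nbuckets

h = make_universal_hash(1024)
assert all(0 <= h(x) < 1024 for x in range(100))
assert h(42) == h(42)  # each drawn function is itself deterministic
```

Rebuilding the table with a fresh (a, b) also gives a cheap escape hatch if one particular function ever does degrade.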

Re:Attack? (0)

Anonymous Coward | about 2 years ago | (#42299825)

Just to be clear, I didn't mean hack in the pejorative sense (I'm old); I meant it in the "holy shit, that's clever" sense WRT "mangle, hit target."

I also meant larger to smaller space. Not small in the sense of, well, small. Sorry.

If a hash function can get you there in 1 step with high probability, that's interesting.

Yes, but "1 step" means several machine ops, typically. Hashing's 1-step also has to be followed by a full test, because otherwise, for all you know you're looking at a collision -- even if the probability is very high, you still must test. Following a tree can also get you there in one step, with the same terminal compare, presuming the leaf is on the trunk. If it's not, locate time grows in very small steps without in any way impacting anything else's locate time. A probe into a tree is a series of indexing operations, no more, until the leaf is located or the twig is bare. That is really efficient. Each probe can be done with just a few machine instructions even with an old-school processor. A modern one might be able to do it faster, I actually don't know what the instruction sets really look like any more, but I can sure tell you that old-school basic indexing ops are pretty much 1:1 with tree traversal requirements. Furthermore, the trees don't tend to be deep; and when they are deep-ish, they don't tend to be wide on the final twig. A per-directory tree design is amazingly solid and won't fall prey to similar-ish filename problems (or really, any other kind of problem I can think of at the moment.)

I've written assemblers that used hash tables to map from a large but known mostly empty space to a small space... just to get the speed. I just screwed around until the hash gave me unique results for every mnemonic in progressively smaller tables, and then optimized and used that hash. So I certainly have uses for them. But in a filesystem namespace, hashes look to me like they spend more time making a mess than they do helping you out, and that this could happen at any time, volume nearly empty or volume full, etc. whereas a tree-based, per-directory filesystem namespace comparison is pretty straightforward, even with wide characters; nothing much to concoct there.

Re:Attack? (1)

maxwell demon (590494) | about 2 years ago | (#42299623)

The two approaches are not mutually exclusive. A hash table is an array of containers. Usually people use linear lists as containers because it's the simplest, and hash collisions are considered rare, so the O(n) characteristics shouldn't matter. But when hash collisions may be intentionally caused, it's obvious that you should use a container more suited to your problem. Just think about what container you'd use if you weren't able to use a hash table, and then use that same container for the hash table array entries.

Or in short, make your hash table an array of balanced trees instead of linked lists. That way you get O(1) typical behaviour (assuming a good hash function) and O(log n) worst case (which includes malicious attacks).
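A minimal sketch of that hybrid follows, with the caveat that Python's standard library has no balanced tree: a sorted list probed with bisect gives the same O(log n) lookup within a bucket (its O(n) insert is where a real red-black tree would do better):

```python
import bisect
import zlib

class TreeishBucketTable:
    """Hash table whose buckets stay sorted for O(log n) in-bucket lookup."""

    def __init__(self, nbuckets: int = 256):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key: str) -> list:
        return self.buckets[zlib.crc32(key.encode()) % len(self.buckets)]

    def insert(self, key: str) -> None:
        b = self._bucket(key)
        i = bisect.bisect_left(b, key)
        if i == len(b) or b[i] != key:  # keep keys unique and sorted
            b.insert(i, key)

    def contains(self, key: str) -> bool:
        b = self._bucket(key)
        i = bisect.bisect_left(b, key)
        return i < len(b) and b[i] == key
```

Even if every key an attacker creates collides into one bucket, membership tests stay logarithmic in the bucket size rather than linear.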

Re:Attack? (0)

Anonymous Coward | about 2 years ago | (#42299835)

Worst case for an unbalanced tree is O(n), not O(log n); only a self-balancing tree guarantees the logarithmic bound. A 6-level tree traversal takes, on average, 6x as long as a 1-level search.

Re:Attack? (0)

Anonymous Coward | about 2 years ago | (#42299119)

So ... a vulnerability was found.
VULNERABILITY. ATTACK. Different words. Different meanings.

Re:Attack? (0)

Anonymous Coward | about 2 years ago | (#42299129)

bash> touch "An attack"

Attack vector (1)

aNonnyMouseCowered (2693969) | about 2 years ago | (#42299259)

Indeed, the title makes you think that BTRFS was trojaned or, worse, is malware.

Re:Attack? (1)

Noughmad (1044096) | about 2 years ago | (#42299687)

An attack was found in the filesystem? What's that supposed to mean?

I'm not sure, but it sure sounds like Mr. Reiser had something to do with it.
