
Linux 2.6.37 Released

samzenpus posted more than 3 years ago | from the new-and-improved dept.

Open Source 135

diegocg writes "Version 2.6.37 of the Linux kernel has been released. This version includes SMP scalability improvements for Ext4 and XFS, the removal of the Big Kernel Lock, support for per-cgroup IO throttling, a networking block device based on top of the Ceph clustered filesystem, several Btrfs improvements, more efficient static probes, perf support to probe modules, LZO compression in the hibernation image, PPP over IPv4 support, several networking microoptimizations and many other small changes, improvements and new drivers for devices like the Brocade BNA 10GB ethernet, Topcliff PCH gigabit, Atheros CARL9170, Atheros AR6003 and RealTek RTL8712U. The fanotify API has also been enabled. See the full changelog for more details."


Anonymous Coward | more than 3 years ago | (#34770060)

I didn't see kitchen sink. Was kitchen sink mentioned? Man I can't wait till they put the kitchen sink in there.

Kernel locking (4, Interesting)

iONiUM (530420) | more than 3 years ago | (#34769550)

Well I'm glad they officially fixed the kernel lock. Out of curiosity, how long until Ubuntu or Debian sees this integrated into their line? A year? Not trolling, I only started using Ubuntu recently, so I'm curious.

Re:Kernel locking (2, Informative)

Anonymous Coward | more than 3 years ago | (#34769610)

Ubuntu will ship with this kernel in their next release, Natty, in April.

Re:Kernel locking (2)

dsavi (1540343) | more than 3 years ago | (#34774218)

Actually I believe they will ship with 2.6.38. [1]

Re:Kernel locking (0)

Anonymous Coward | more than 3 years ago | (#34769626)

Good question. My Debian box was running 2.6.26. "Was" because I updated (via backports) to 2.6.32 to fix some kernel bugs that had been fixed in 2009. So, maybe a couple of years.

Re:Kernel locking (4, Interesting)

Cyberax (705495) | more than 3 years ago | (#34769632)

Ubuntu in about 6 months, 2.6.37 should be in the 11.04 release.

In Debian Stable - in about 2 years (in the next release).

Six Months? (0)

Anonymous Coward | more than 3 years ago | (#34769900)

Doesn't the .04 mean April?

Re:Six Months? (1)

Teun (17872) | more than 3 years ago | (#34769974)

Ssht, leave some for imagination ;)

Re:Kernel locking (0)

Anonymous Coward | more than 3 years ago | (#34770206)

I would say that maybe 2.6.38 will be on Ubuntu 11.04

2.6.38 will be a must have!

Re:Kernel locking (0)

Anonymous Coward | more than 3 years ago | (#34772040)

You could compile it yourself, or is that tricky in Debian and Ubuntu?

Re:Kernel locking (2)

TheRealGrogan (1660825) | more than 3 years ago | (#34772558)

It's a bit tricky in all of those so-called "easy" distributions, but not impossibly difficult. It's probably best to use the distributor's kernel sources and methods, but you don't have to if you make some changes to the system first. If you do that, though, you have to be careful that you're not relying on features they've patched into their kernels.

I have Ubuntu 10.10 (actually Xubuntu with XFCE) on my netbook, and I have been using a regular kernel ( currently) without an initrd or anything. This requires changes to fstab to use normal device nodes rather than the UUID. In Ubuntu I was previously using their sources and the alternate "Debian" method of building them.

On redsplat distros (Fedora, RHEL) it won't boot without an initrd unless you manually create some hard device nodes in /dev first. (This is assuming it's still possible... I haven't tried one in a long time.)

In all of those "easy" distros it sucks out of the box, though: you have to install all the tools you'll need. (In *buntu that's usually just build-essential and libncurses5-dev for the kernel.)

Distros like Slackware, Gentoo, Arch and similar are pretty much meant for the user to roll their own kernels.
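The fstab change described above can be sketched with plain text tools. The UUID and device name here are made up for illustration; on a real system you would get the actual mapping from blkid:

```shell
# A hypothetical /etc/fstab entry that mounts the root filesystem by UUID
line='UUID=1234-abcd / ext4 errors=remount-ro 0 1'

# Suppose blkid showed that this UUID belongs to /dev/sda1; rewrite the
# entry to use the plain device node, which a no-initrd kernel can mount
echo "$line" | sed 's|^UUID=1234-abcd|/dev/sda1|'
```

The same idea applies to swap and /home entries. Device names can change across reboots when disks are added or reordered, which is exactly why distributions default to UUIDs in the first place.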

ANOTHER version? (2, Funny)

Anonymous Coward | more than 3 years ago | (#34772938)

Geez... it's a shame Lunis and the boyz couldn't be bothered to program it correctly the first time around.

Re:Kernel locking (3, Informative)

turbidostato (878842) | more than 3 years ago | (#34769644)

"Well I'm glad they officially fixed the kernel lock. Out of curiosity, how long until Ubuntu or Debian sees this integrated into their line?"

Don't know about Ubuntu but since Debian is already "frozen" towards its next release (codename "squeeze") you can bet it will be about two years from now, more or less.

Of course, you will see it much sooner on their development lines, "Testing", "Unstable" and/or "Experimental".

Re:Kernel locking (2)

lederhosen (612610) | more than 3 years ago | (#34769654)

In debian it will be available soon in unstable, the RC is already available in experimental.

Package linux-image-2.6.37-rc7-686

        * experimental (kernel): Linux 2.6.37-rc7 for modern PCs
            2.6.37~rc7-1~experimental.1: i386

Re:Kernel locking (1)

petermgreen (876956) | more than 3 years ago | (#34771074)

In debian it will be available soon in unstable
Afaict the general policy in Debian is to only upload stuff to unstable if it's targeted at (note that targeted at doesn't necessarily mean it will be in) the next stable release. It seems unlikely that Debian would try to push a new kernel version at this point in the release cycle.

So I wouldn't expect it to hit unstable until after squeeze releases.

Re:Kernel locking (1)

Bjrn (4836) | more than 3 years ago | (#34769658)

And what about Fedora?

Re:Kernel locking (2)

inode_buddha (576844) | more than 3 years ago | (#34769744)

Check the "rawhide" repositories. Fedora tends to track the -rc kernels fairly closely, with near-daily builds, so it's likely that Fedora will have this in rawhide within a day, if not already.

Re:Kernel locking (1)

arth1 (260657) | more than 3 years ago | (#34769802)

Fedora 15 has a planned release date of May 10.
It will most likely have 2.6.37 (they're currently at .36, but several people have made .37-rc versions, and the deadline for version bumps and features isn't up yet).

Re:Kernel locking (0)

Anonymous Coward | more than 3 years ago | (#34770430)

How about Mandriva?

Re:Kernel locking (1)

macemoneta (154740) | more than 3 years ago | (#34771876)

As mentioned, the 2.6.37 kernel is in Fedora rawhide now, and it works fine with the current (Fedora 14) release if you want (or need) to run it.

The official Nvidia driver installer compiles and runs cleanly against it, making early use easier.

Re:Kernel locking (2, Informative)

Anonymous Coward | more than 3 years ago | (#34769674)

It's not a complete removal of the BKL. There are still some drivers (and a couple of filesystems, I think) that have not been converted. Selecting any of those in a config, whether built in or as a module, will re-enable the BKL. IIRC, the plan is to have them converted or obsoleted by .38.

Re:Kernel locking (4, Funny)

korgitser (1809018) | more than 3 years ago | (#34769798)

And that would be the .38 special

Re:Kernel locking (1)

El_Oscuro (1022477) | more than 3 years ago | (#34772610)

Please, if you need some serious firepower, you need the totally compatible .357 magnum

Still some BKL in kernel (1)

Dennis Sheil (1706056) | more than 3 years ago | (#34769998)

This is my understanding as well. The Big Kernel Lock has not been completely removed from the kernel; however, it is now possible to choose a configuration option that will compile a kernel without the BKL. This means a BKL-disabled kernel cannot use the modules that still depend on the BKL. However, none of the core modules depend on the BKL any more, and kernel developers are still working on removing it from the handful of less important modules that do.

Re:Kernel locking (1)

DissociativeBehavior (1397503) | more than 3 years ago | (#34769752)

According to the article, none of the critical code paths use this lock, so you shouldn't see any performance improvements.

Re:Kernel locking (3)

kbielefe (606566) | more than 3 years ago | (#34769808)

It already is, for very liberal definitions of "integrated." :-)

Re:Kernel locking (1)

Kjella (173770) | more than 3 years ago | (#34774066)

If you're just doing it for the newer version and don't need to change the code or config, it's easier to grab the debs from the Ubuntu Kernel PPA and install them.

Re:Kernel locking (1)

micheas (231635) | more than 3 years ago | (#34769904)

Debian experimental should have it within a week or two.

Debian is "Frozen" for the release of squeeze, so 2.6.37 will probably never make it into Debian unstable as it is likely that 2.6.38 will make it out the door before squeeze is released. (just a gut guess with no supporting evidence for the dates.) If my guess is right 2.6.38 will more likely make it into Debian than 2.6.37.

Ubuntu will push a new kernel out in about three months; I don't know if it will be 2.6.37.

Re:Kernel locking (1)

Beardo the Bearded (321478) | more than 3 years ago | (#34770058)

I've got the RC for Ubuntu and have used it for a month or two. It fixed the networking, so that after sleep it would actually connect to the network. For some reason, the driver was programmed to fill with random values so it wouldn't sync unless you rmmod / modprobe ath9k.

On the downside, the RC removes control of the backlight. What I'd like to see next is fixing the kernel so that the new generation of touchpads is recognized, or maybe something like ndiswrapper for other drivers.

Re:Kernel locking (2)

somenickname (1270442) | more than 3 years ago | (#34770134)

This kernel should trickle down to 10.04 LTS as well. One of the big complaints about the 8.04 LTS version was that hardware gets released so rapidly that a 3-5 year old kernel isn't going to support a lot of it. Even right now the Ubuntu 10.10 kernel (2.6.35) is in the 10.04 repos that are enabled by default.

Re:Kernel locking (2)

diegocg (1680514) | more than 3 years ago | (#34770170)

Note that the performance advantages of this change are zero. It's an aesthetic thing, so distros are not eager to ship it.

Re:Kernel locking (4, Interesting)

Bootsy Collins (549938) | more than 3 years ago | (#34770210)

Would someone mind explaining (for those of us who have some C experience, but aren't kernel hackers) what the Big Kernel Lock is? In particular, is this something that will impact the desktop user?

Re:Kernel locking (1)

Anonymous Coward | more than 3 years ago | (#34770384)

It was a hacky way of providing SMP support that is less efficient than a non-blocking method. As things stand it will have no effect for almost anyone, since the core modules haven't depended on the BKL in a while.

Re:Kernel locking (5, Informative)

Pr0xY (526811) | more than 3 years ago | (#34770406)

It's a fairly simple idea. In any place where two threads of execution (be they real threads, interrupts, or whatever) could access the same resource at the same time, locking must be used to ensure data integrity and proper operation of the code. The "Big Kernel Lock" is a system-wide "stop the world" lock. This is a very easy way to make the code "correct" in the sense that it will work and not corrupt data. But the downside is that while this lock is held, everything else must wait. So you had better not hold it for very long: while it is easy to get correct, it has pretty bad performance implications.

A better solution is a fine-grained lock just for that resource, so the only threads of execution which need to wait are the ones actually contending for it. The downside here is that it is much more complicated to get correct, so when implementing it you have to be *very* careful that you got it right.

The BKL has been in the process of being removed, and has been phased out of the vast majority of the kernel for a while. This change simply enables a build in which it doesn't even exist, provided you don't build any of the older drivers that still lack fine-grained locking.
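The coarse-vs-fine tradeoff can be sketched in shell with flock(1); the file names here are made up for illustration. One shared lock file plays the role of a "big lock" that serializes every critical section:

```shell
#!/bin/sh
# Fifty concurrent workers increment a shared counter under one exclusive
# lock. Only one worker at a time may run the read/modify/write critical
# section, so no updates are lost.
echo 0 > counter.txt
for i in $(seq 1 50); do
  (
    flock -x 9                # block until we hold the lock on fd 9
    n=$(cat counter.txt)
    echo $((n + 1)) > counter.txt
  ) 9> counter.lock &
done
wait
cat counter.txt               # prints 50: no lost updates
```

With one lock file for everything, workers touching completely unrelated resources still wait on each other, which is exactly the BKL problem; giving each independent resource its own lock file (counter-a.lock, counter-b.lock, ...) is the fine-grained version, at the cost of having to reason carefully about which lock protects what.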

Re:Kernel locking (4, Interesting)

Kjella (173770) | more than 3 years ago | (#34774104)

And this is one example of why the kernel doesn't have a stable ABI. You can bet tons of unmaintained third party drivers would use the BKL, so you could never get rid of it. From what I've understood purging it from every driver has been a pretty big job and only possible because all the drivers are in the kernel tree.

Re:Kernel locking (5, Informative)

phantomcircuit (938963) | more than 3 years ago | (#34770452)

The Big Kernel Lock is a Symmetric Multi-Processing (SMP) construct. To make kernel operations atomic, you lock the entire kernel. This works well as an initial locking mechanism because it's relatively easy to implement and it avoids issues like deadlock.

The problem with the BKL is that it locks the entire kernel: even if processes are calling functions totally isolated from each other, only one can be in the kernel at a time.

In practice the BKL hasn't been a big deal for a while now, since the important (performance-wise) parts of the kernel have had finer-grained locking.

So it's pretty unlikely to have much effect, if any, on desktop users.

Re:Kernel locking (1)

Anonymous Coward | more than 3 years ago | (#34770462)

SMP support was added in 2.2, and the hard lock was a way to transition toward fine-grained locking later. It only allows one thread to be in kernelspace at a time and, as a result, causes problems when you try to scale the system. In other words, SMP gave the kernel support for parallel processing, which is incredibly useful for running multiple tasks at the same time.

Re:Kernel locking (0)

Anonymous Coward | more than 3 years ago | (#34770508)

It's pretty much what it says it is:
One big lock (as in concurrent-programming locks, mutexes, etc.) that is used by just about any part of the kernel.
It makes it easy to make the OS concurrency-"safe", but it scales poorly with more than 2 processors: each time any driver acquires the lock, ALL the other parts being executed on other cores have to be put on hold until the lock is released.
It's being replaced by fine-grained locks, which are, well... small locks that only protect independent parts of the code.

You have pretty much the same problem in interpreted languages like python or ruby, see Global Interpreter Lock

Re:Kernel locking (4, Informative)

Yossarian45793 (617611) | more than 3 years ago | (#34770514)

The BKL was a hack added in Linux 2.0 to support multiprocessor machines. It was ugly but expedient (like most engineering solutions). Over time, multiprocessor support in the kernel has gotten much better and the BKL has become less important, to the point that it can now be removed entirely.

Nobody, especially not desktop users, will notice any change from its removal.

Re:Kernel locking (2)

AntiGenX (589768) | more than 3 years ago | (#34770634)

Put simply, when you have many independent bits of code competing for finite shared resources/time within the kernel (this is different from code just running in user space), you have to put locks on them so that only one thread can access them at a time. Once a lock is released, another thread gets a turn. With a big lock, a single lock covers every resource: although a thread may only need access to a single resource, all of the resources get locked.

The alternative is to implement more fine-grained locks on each resource or set of resources. This allows two threads that are using different kernel resources to potentially execute in parallel. The danger is that it's more complex and requires careful coding to avoid deadlocks or race conditions. That help?

Re:Kernel locking (1)

mvar (1386987) | more than 3 years ago | (#34770278)

Somehow this question gives me a strange sense of déjà vu.

Re:Kernel locking (2)

bzipitidoo (647217) | more than 3 years ago | (#34770366)

That's one reason I switched to Arch Linux. Updates make it in a lot faster. That's particularly nice for the browser.

Hoped I wouldn't experience downsides from that, but I have. I'd say Linux still isn't ready for the Year of Linux on the Desktop. These sorts of problems show the wisdom of holding off on kernel updates. I want Firefox updates right away. But kernel updates? Maybe not.

Kernel 2.6.34 and 35 had a problem in the e1000 driver, which seemed to affect only very specific motherboards, like in one of my computers. No networking was an especially inconvenient problem. I haven't stumbled over a neat way to downgrade in Arch, apart from a new installation from an old snapshot, so I simply used other computers until 2.6.36, and that fixed the issue. Going back to 2.6.33 or earlier wasn't a good option either-- those had other problems. Too many dependencies. Such as forcing an undo of a partial conversion from HAL to udev. I read that HAL is a mess, and will be abandoned for udev. Possibly the biggest change for version 1.9 of Xorg was a switch from HAL to udev.

Most of the nagging little problems are with the desktop. Don't know if reading from full sized CD-Rs has been fixed. Been fooling with lighter weight GUIs, specifically LXDE, and the automounting never has worked well. Why don't I just use KDE or Gnome? Don't like the sluggish feel of their interfaces. Shutdown on another of my computers has been random since around 2.6.30-- sometimes it works, and sometimes not. Used to always work. I have some ancient trackballs that connect to a 9 pin serial port. No luck getting Xorg to use such devices. Still can't ditch the proprietary NVidia driver for Nouveau. Am wondering when distros will offer btrfs as one of the options. Reiserfs is pretty old now, and dying, and ext4 hasn't thrilled me. But that age is nothing compared to the fact that FAT is still the easiest way to share files with Windows.

Re:Kernel locking (2)

Korin43 (881732) | more than 3 years ago | (#34770806)

Arch does have an LTS kernel (although "long-term" in Arch is like 6 months), which you can use if the current version is broken. I used it for a while when wine plus some kernel version caused World of Warcraft to not work. Hope this helps if you have problems in the future.

pacman -S kernel26-lts

Re:Kernel locking (1)

UnknownSoldier (67820) | more than 3 years ago | (#34771408)

Agreed. I've been using Linux off and on (started with Slackware ~1.0, kernel 0.99, yeah _that_ long ago). The running joke is that it is always $(Year)+X for the year of the Linux desktop; I seriously doubt it will ever happen for the masses. Windows and Mac OS X are just too entrenched and "good enough." As a programmer/geek, if I don't have time to dick around getting Linux to work, the computer illiterate don't have a hope. It is easier to just recommend OS X to non-computer people.

Always cool to see "the little OS that could" continue to make progress though. :-) Drivers are Linux's strength (older hardware) and weakness (newer hardware.)

Re:Kernel locking (2)

deek (22697) | more than 3 years ago | (#34772516)

Reiserfs is pretty old now, and dying, and ext4 hasn't thrilled me.

What features have you enabled in ext4? I'm running it on one of our servers, and I like the performance very much. It's even a Debian stable machine, although I had to use a kernel from "testing" to properly enable ext4.

I've got extents, uninit_bg, and dir_index enabled, amongst other features. If you converted from ext3, then you probably don't have these options enabled. Even if you created the ext4 filesystem from scratch, some older versions of mke2fs won't enable these ext4 features by default.

Try giving ext4 another go. Maybe you'll like it more the second time around.

Re:Kernel locking (1)

bzipitidoo (647217) | more than 3 years ago | (#34773782)

Maybe I will. I didn't do anything special with ext4, which means I got the default settings, whatever they are. Don't think I had noatime. Anyway, seemed like ext4 wasn't as fast as a fresh Reiserfs partition. Ext4 rattles the hard drive more, judging from the noise. No, I don't have any hard data from carefully controlled tests to know whether ext4 really is less efficient. But I do know ext2 with journaling (aka ext3) isn't that great. Rather use xfs than ext3. Like its predecessors, ext4 is still fundamentally block based.

And I'm perpetually short of hard drive space, which may sound crazy in these days of 1TB drives, but split that across several OSes, install massive games, apps, video files, and so forth and space goes quick. So I like tail packing in the file system, data compression, and anything else that saves space. I know, I know, that only gets a little more room, but I like to have it.

I've been holding out for btrfs. Not sure it's mature enough that I want to make the jump just yet, but I'm hoping it will be soon.

Re:Kernel locking (1)

afabbro (33948) | more than 3 years ago | (#34773124)

Kernel 2.6.34 and 35 had a problem in the e1000 driver, which seemed to affect only very specific motherboards, like in one of my computers. No networking was an especially inconvenient problem.

And kind of an unnecessary one, considering that the e1000 module code is available for free download on Intel's web site, is GPL, and is very easy to build. My box with an e1000e may be running 2.6.18, but the e1000e module is Intel's latest 1.2.20.

Re:Kernel locking (4, Insightful)

Yossarian45793 (617611) | more than 3 years ago | (#34770454)

For those that aren't aware, the BKL (big kernel lock) hasn't caused any issues except purist angst for a very long time now. All of the performance critical kernel code was fixed to use fine grained locking years ago. This change is just to satisfy the people who are offended by the architectural ugliness of the BKL. In terms of performance and everything else that matters, the removal of the BKL has absolutely no impact.

Re:Kernel locking (1)

TheLink (130905) | more than 3 years ago | (#34774160)

In terms of performance and everything else that matters, the removal of the BKL has absolutely no impact.

How about reliability and stability? So they've reached the stage where stuff _definitely_ doesn't need the BKL?

Re:Kernel locking (1)

pinkeen (1804300) | more than 3 years ago | (#34771336)

Distros with rolling releases: probably not more than a week or two. I've read somewhere that Ubuntu is considering a change to a rolling release model too.

Re:Kernel locking (1)

dmuir (964412) | more than 3 years ago | (#34773220)

No, Ubuntu will not be changing to a rolling release, has never planned to, and probably never will.

Re:Kernel locking (1)

davester666 (731373) | more than 3 years ago | (#34773494)

Fixed? How about total frickin' anarchy running loose in your kernel. With no giant lock wielding a banhammer, kernel modules of all kinds will run rampant over anything and everything.

It will be complete chaos.

Re:Kernel locking (0)

Anonymous Coward | more than 3 years ago | (#34774220)

Already available if you want to install it. If you don't know how, you probably shouldn't.

Btrfs (1)

Anonymous Coward | more than 3 years ago | (#34769946)

It's nice to see kernel improvements for Btrfs, but how is Btrfs progressing? It seems like it is constantly under heavy development (completely understandable), but have the guys behind Btrfs released some sort of working version, even if it doesn't do much?

Re:Btrfs (4, Informative)

tibman (623933) | more than 3 years ago | (#34770056)

It's running on my server at home, so I hope so ;)

Re:Btrfs (1)

tehpola (1645995) | more than 3 years ago | (#34770310)

Likewise, I just built a new computer, put two 1TB drives in there, and mounted them together as my /home using Btrfs. I haven't played around with it too much yet, but so far it's been running buttery smooth. This is under Ubuntu 10.10 with a stock kernel, so I'm not exactly at the cutting edge of kernel releases.
Something I haven't worked out yet is getting file cloning working (copy-on-write) in places where I want distinct files (no linking) but don't want to waste storage if I don't wind up changing one.

Re:Btrfs (3, Interesting)

0100010001010011 (652467) | more than 3 years ago | (#34770744)

Before I committed ANY data to ZFS I sure as heck "played around with it" in virtual machines until I was comfortable doing about anything with it.

"Pull" one of the drives. What happens?
dd if=/dev/random of= to your disk in random places (skip/seek); what happens to your data?
Pull all of the drives and replace each with a larger one.

How are the user tools for btrfs? zpool & zfs are fairly well documented and have very simple short commands.

Does it automatically share over nfs/samba like you can with ZFS on Solaris?

Re:Btrfs (1)

tehpola (1645995) | more than 3 years ago | (#34771252)

I see what you mean, but this is just a personal computer. I wouldn't be using this in any sort of production system (nor am I an IT guy).

What appealed to me about using Btrfs was that I could dynamically add more disks as I was inclined to upgrade, I could get data striping, snapshots, and data deduplication (not implemented yet, though), and since I was planning on running Linux, there was no practical solution for using ZFS. The fact that I couldn't pull the drive or that corruption might take things down doesn't concern me too much because if I wasn't using Btrfs I'd be using Ext4 and I wouldn't imagine it would be any better.

The documentation could probably use a little fleshing out, but in the little I've done (very little) setting things up, it was pretty straightforward. As I said, I have two drives in my pool. When I installed Ubuntu, I only selected one for /home as Btrfs, and then later added the second and rebalanced. Both adding and balancing were simple commands (that I can't remember off the top of my head, but were something like 'btrfs add' and 'btrfs balance').

I don't know about any automatic sharing, though, sorry. I'm sure it'd be pretty simple to set it up, but it might not be optimized for those cases.
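The two commands half-remembered above look roughly like this in current btrfs-progs. The device and mount point names are invented for illustration; older btrfs-progs spelled the second step `btrfs filesystem balance`, and both commands need root on a real btrfs mount, so treat this as a sketch rather than a tested recipe:

```shell
# Add a second disk to the btrfs filesystem mounted at /home
btrfs device add /dev/sdb /home

# Re-stripe existing data and metadata across all member disks
btrfs balance start /home
```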

Re:Btrfs (1)

glwtta (532858) | more than 3 years ago | (#34771580)

The fact that I couldn't pull the drive or that corruption might take things down doesn't concern me too much because if I wasn't using Btrfs I'd be using Ext4 and I wouldn't imagine it would be any better.

Corruption-wise, Ext4 had better be better than Btrfs, since it was released as stable over two years ago.

Re:Btrfs (2)

profplump (309017) | more than 3 years ago | (#34773542)

You could just turn on LVM, which has been stable (and even the stock config in some distros) for years now and gives you dynamic volume allocation, data striping and snapshots with any filesystem. There are reasons to like ZFS/btrfs/etc., but the things you're asking for are easily available with much older, better-supported, better-documented solutions.

Re:Btrfs (1)

vandy1 (568419) | more than 3 years ago | (#34770814)

The answer: cp --reflink=always ... You will need a non-ancient version of coreutils for this to work. Cheers, Michael

Re:Btrfs (1)

KiloByte (825081) | more than 3 years ago | (#34772608)

You want cp --reflink=auto instead, so it works even if you issue the command across filesystems or on non-btrfs. There's no reason not to plop alias cp='cp --reflink=auto' into your .bashrc.
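A minimal sketch of that fallback behaviour (file names made up): with --reflink=auto, cp attempts a copy-on-write clone and quietly falls back to an ordinary copy on filesystems that can't reflink, so either way you get a byte-identical, independent file.

```shell
printf 'some data\n' > original.txt

# CoW clone on btrfs (instant, shares blocks until modified);
# a plain copy anywhere else. Either way the files are independent.
cp --reflink=auto original.txt clone.txt

# Modifying the clone never touches the original
printf 'more data\n' >> clone.txt
cat original.txt    # prints: some data
```

With the alias from the parent comment, plain `cp` gets this behaviour everywhere without ever failing on a non-btrfs filesystem.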

Re:Btrfs (2, Interesting)

Anonymous Coward | more than 3 years ago | (#34770066)

I've been using it for nearly 2 years without any issues whatsoever (I haven't even had to fiddle with it to keep it going). It's been in the kernel for over 1 year.

They're well beyond "some sort of working version, even if it doesn't do much". Give it a try.

Re:Btrfs (3, Informative)

Korin43 (881732) | more than 3 years ago | (#34770820)

The problem with Btrfs isn't that it doesn't work (it works fine and has for years). The problem is that it's not very fast right now (most benchmarks I've seen show it slightly behind other major file systems in most tasks), and most things don't make use of the cool things it does.

Re:Btrfs (1)

Rich0 (548339) | more than 3 years ago | (#34772394)

The problem with btrfs is that it is unstable and feature-incomplete, and doesn't even have an fsck yet.

I've even gotten it to panic with loopback devices, ext3 conversions, and mirroring.

BTW, I define unstable as lots of patches in every kernel release. It's just too new for any real production work.

It is probably good enough for casual use on straight partitions without its more exotic features. I would keep good backups, though.

Re:Btrfs (3, Informative)

loxosceles (580563) | more than 3 years ago | (#34772440)

Compared to ZFS, which is the only other quasi-mainstream filesystem that has copy-on-write (which gives snapshots for free) and data checksums, btrfs is almost always faster. Most of btrfs's slowness is due to those two features, so comparing ext4 speed to btrfs speed is not fair unless you disable both with -o nodatacow.
See page 4 for btrfs random write performance, which blows away both zfs and ext4 on hdds.

Without COW and checksumming, btrfs is closer to ext4. Even so, except for applications that are reading or writing massive amounts of data, I'd rather have data integrity and SSD wear leveling and free read-only snapshots instead of maximum speed.

Re:Btrfs (1)

TheLink (130905) | more than 3 years ago | (#34774182)

see page 4 for btrfs random write performance, which blows away both zfs and ext4 on hdds.

I'd rather have data integrity and SSD wear leveling and free read-only snapshots instead of maximum speed.

1) There are SSDs with decent random write performance now.
2) Shouldn't the SSD wear leveling be done by the SSDs themselves?

Re:Btrfs (1)

Adam Jorgensen (1302989) | more than 3 years ago | (#34774072)

Another problem is that it has very poor crash recovery options. I lost a whole /home partition thanks to Btrfs getting corrupted at some level or other due to a power outage during a read operation, of all things. Not cool.

Ceph is really cool (4, Informative)

Lemming Mark (849014) | more than 3 years ago | (#34770064)

Ceph is a really cool bit of technology. It distributes storage redundantly across multiple machines, so you can store lots and lots of data and not lose any if one of the hard drives explodes. It should distribute the load of serving that data too. You can have a network filesystem based on this already, now they've added support for virtual block devices (i.e. remote disks) over it.

If you combine that with virtualisation (the Kernel Newbies changelog mentions that there's a patch for Qemu to directly use a Ceph-based block device) then you can do magic stuff. e.g. run all your services in virtual machines with their storage hosted by Ceph. Provide a cluster of virtualisation hosts to run those VMs. If a physical box needs maintenance, live-migrate your VMs off it without stopping them, then just yoink it from the cluster - the storage will failover magically. If a physical box explodes, just boot the VMs it was running on other nodes (or, combined with some of the hot-standby features that Xen, VMware, etc have started to offer, the VMs are already running seamlessly elsewhere as soon as the primary dies). If you need more storage or more processing, add more machines to the cluster, get Ceph to balance the storage and move some VMs around.

Not everyone is going to want to run Ceph on their home network but if you have a need for any of this sort of functionality (or even just an enthusiasm for it) then it's super cool. Oh yes and Ceph can do snapshotting as well, I believe. Ace.
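The "failover magically" part comes down to deterministic placement: any client can compute where an object's replicas live, with no central lookup table. A toy sketch of the idea, using rendezvous hashing as a stand-in for Ceph's actual CRUSH algorithm (node names made up):

```python
import hashlib

def place(obj, nodes, replicas=2):
    """Pick `replicas` nodes for an object via rendezvous (HRW) hashing.

    Toy stand-in for Ceph's CRUSH placement: deterministic, needs no
    central table, and when a node leaves only the objects it held get
    re-homed -- the remaining nodes keep their relative ranking.
    """
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256((n + obj).encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:replicas]

nodes = ["osd0", "osd1", "osd2", "osd3"]
before = place("vm-disk-17", nodes)
# "Yoink" the primary from the cluster ...
after = place("vm-disk-17", [n for n in nodes if n != before[0]])
# ... and the old secondary is still among the new placement, so one
# replica never has to be copied over the network.
print(before, after)
```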

Re:Ceph is really cool (1)

Anonymous Coward | more than 3 years ago | (#34770294)

that sounds like a lot of work to look after for a home network....
i wonder if it ever goes wrong ...

Re:Ceph is really cool (1)

loxosceles (580563) | more than 3 years ago | (#34772860)

Not as much work as dealing with traditional RAID arrays. I end up having to RMA and/or replace a few disks a year, and while software raid keeps me from losing data in almost every case, it's a PITA. I'd much rather have that stuff managed automatically by a distributed filesystem. The next external disk enclosure I build is going to be part of a ceph filesystem or something similar. Then I'll reformat my recently built external 6TB raid-6 array and add those disks to the unified (e.g. Ceph) filesystem as well.

In the last year and a half (all of these are SATA, none more than 3 years old):

- An older seagate in my windows box had its power connector catch fire.
- A 1TB WD green drive started failing SMART self-tests soon after installation.
- The other 1TB WD green drive (twin of the one that failed) has a few offline uncorrectables, but I'm not in the mood to replace it yet.
- Both of a pair of 1.5TB seagates (7200.10) are on the fritz; the controller returns errors and resets one or the other every few hours - neither drive gets kicked out of the raid array, but it freezes disk access to the array for 10-20 seconds which has been extremely aggravating. I'm in the process (slowly) of moving data from them onto a newly built external 6TB raid-6 array.
- One of a pair of new 1TB WD blacks in my (several month old) main workstation last week started making horrific scraping noises and failing SMART checks. (I'm replicating that raid-1 array to a third replacement disk as I type this, and then the broken one is getting RMA'd).

It's too much management overhead to deal with these failures piecemeal. The future for large low-maintenance storage arrays is to use something like ceph or Isilon's (commercial) OneFS to automatically manage redundancy to avoid data loss, while allowing full use of disks of any size.

Re:Ceph is really cool (-1)

Anonymous Coward | more than 3 years ago | (#34771152)

LOL. Who gives a shit? All that stuff is useless if the underlying OS is unstable, insecure, and unusable.

I don't use Linux anymore and not one of the technical people I know do either. ITS OS X ALL THE WAY BABY!

What's new (5, Informative)

Troll-Under-D'Bridge (1782952) | more than 3 years ago | (#34770262)

The link in the story just points to the mailing list post announcing the new major version of the Linux kernel. Note that the changes listed in that post are changes since the last release candidate (-rc8), not since the last major kernel release (2.6.36). For an overview, it's better to head over to Kernel Newbies [] . It even has a section which summarizes the "cool stuff": the major features that the new kernel brings.

Interestingly, the overview appears to overlook what I believe is a major feature introduced in 2.6.37: power management for USB 3. I may have to do some more digging through the actual kernel changelogs. Maybe the change was reverted during the last few candidate releases, but I remember reading about it in H-Online [] , particularly this part:

The XHCI driver for USB 3.0 controllers now offers power management support (1, 2, 3, 4); this makes it possible to suspend and resume without temporarily having to unload the driver.

(In the original, the parenthetical numbers are links to the kernel commits.)

Power management for USB 3 would have been the most important new feature for me. Without it, you have to resort to a number of ugly hacks to hibernate or suspend a laptop or a motherboard with USB 3 enabled. (Turning off USB 3 in the BIOS is a hardware hack that allows you to bypass the software hacks.)

Ooops (2)

Troll-Under-D'Bridge (1782952) | more than 3 years ago | (#34770376)

I missed the second link in the story, which does point to Kernel Newbies. (Blame it on my browser which doesn't color links in red or some other obscene color.) However, my comment about USB 3 still stands. I'm still trying to find a "news" source that highlights the new XHCI power management feature. Failure to hibernate/suspend because of non-working USB 3 power management is an issue that's been discussed in a number of forums.



Includes the "magic 200-line" task grouping patch? (2)

sagaciousb (1379425) | more than 3 years ago | (#34770568)

Looking through the changelog I couldn't find anything immediately evident about whether or not the "200-line kernel patch which does wonders" was included in this release or not. Here is the related original post [] . Anybody know if it will be in there for certain? I may have to remove my Ubuntu alternative workaround outlined in the subsequent article [] before doing an upgrade.

Re:Includes the "magic 200-line" task grouping pat (4, Informative)

sciencewatcher (1699186) | more than 3 years ago | (#34770778)

That patch is scheduled for 2.6.38; this article is about 2.6.37. In other words, this article covers the end of the kernel development pipeline, while the article you linked to covers its beginning.

Gurus: will this affect high-SMP workstations? (3, Informative)

isolationism (782170) | more than 3 years ago | (#34771512)

I have a dual-processor Xeon box with six cores per socket, meaning there are effectively 24 threads (2 sockets * 6 cores * 2 hyperthreads), and the system will lock up for SECONDS at a time during large IO operations. The file system is XFS over an 8-disk hardware RAID10 on 15K RPM drives. It seems to be most noticeable when copying to/from the network, although I'm not convinced the network is the problem here. For such a high-end machine these stalls are unbearable; I had (a lot) less difficulty with only 4 cores and fewer, slower drives in a hardware RAID 0.
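Not a diagnosis, but one common culprit for multi-second stalls during big copies is dirty-page writeback rather than the filesystem itself. Worth checking how much dirty data the kernel is allowed to accumulate before writers get blocked on synchronous flushes:

```shell
# Percent of RAM that may be dirty before background writeback starts,
# and before writers are forced to flush synchronously. On a big-RAM
# box the defaults can amount to gigabytes of dirty pages queued up
# behind a single large copy.
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio 2>/dev/null \
  || echo "/proc not available"
```

Lowering those (or switching to the byte-based dirty_background_bytes/dirty_bytes knobs) is a common experiment for exactly this symptom.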

Re:Gurus: will this affect high-SMP workstations? (0)

Anonymous Coward | more than 3 years ago | (#34771860)

Are you starting your IO operations from the console? If yes, there's one major fix in 2.6.37 that could help in your scenario:

IO issues (1)

Sami Lehtinen (1864458) | more than 3 years ago | (#34772708)

FUSE NTFS-3G makes my system really crawl whenever there is USB-disk IO. The UI and everything else stalls completely for long periods. I'm not a Linux guru, but I can confirm that I'm also experiencing IO issues even when system load should be pretty low. "Linux Bender 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux". I guess some tweaking might be required to fix this issue.

Re:Gurus: will this affect high-SMP workstations? (0)

Anonymous Coward | more than 3 years ago | (#34772754)

I will assume you're using a software RAID? Try a hardware RAID controller. The buffering alone will solve your problem, not to mention the dedicated processor doing the disk I/O.

Re:Gurus: will this affect high-SMP workstations? (2)

drinkypoo (153816) | more than 3 years ago | (#34773152)

Unless it's a very expensive controller, performance will go down. OTOH, XFS has shown itself to be a bit of a bear... and problem-ridden. I just had it eat my superblock. Took more than an hour for xfs_repair to find the backup. Onward to ext4!

Re:Gurus: will this affect high-SMP workstations? (1)

Billly Gates (198444) | more than 3 years ago | (#34773928)

Admittedly, I have never admined SGI machines with XFS, but from what I was told XFS can never be stable on the x86 architecture.

During a power loss a normal machine will have corrupted RAM contents before turning completely off. There are protections built into filesystems, like ext3 or UFS journaling, that undo partial transactions. XFS is much faster because it writes in real time and does less integrity work. SGI machines have power capacitors to correctly finish a write during a power loss. They are very nice machines, and XFS was built around nice controllers and expensive hardware that can do the things I mentioned. Unless x86 hardware does the same sorts of things, it will always be less reliable when running XFS.
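For what it's worth, the "undo partial transactions" part is the easy bit to sketch. This is the generic write-ahead-journal idea (a transaction only counts if its commit record made it to stable storage), not XFS's or ext3's actual on-disk format:

```python
def replay(journal):
    """Replay a write-ahead journal after a crash.

    Each entry is (txid, op). A transaction only 'happened' if its
    'commit' record reached stable storage before the power died;
    anything without a commit is a partial transaction and is
    discarded on replay.
    """
    state, pending = {}, {}
    for txid, op in journal:
        if op == "commit":
            state.update(pending.pop(txid, {}))
        else:
            key, value = op
            pending.setdefault(txid, {})[key] = value
    return state  # uncommitted (pending) writes are dropped

crashed_log = [
    (1, ("inode7", "len=4096")),
    (1, "commit"),
    (2, ("inode9", "len=512")),   # power failed before tx 2 committed
]
print(replay(crashed_log))        # {'inode7': 'len=4096'}
```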

Re:Gurus: will this affect high-SMP workstations? (1)

drinkypoo (153816) | more than 3 years ago | (#34773948)

That defeats the whole fucking purpose of a journaling filesystem, which should do the right thing in the case of hardware failure (barring SOME TYPES OF disk failure.) That basically means XFS is shit.

Re:Gurus: will this affect high-SMP workstations? (0)

Anonymous Coward | more than 3 years ago | (#34773168)

I remember seeing a bug report about poor performance with large amounts of RAM, i.e. in the 32 GB range. Having a lot of IO in one thread would block IO in other threads, even if the IO was on a separate ... channel or whatever it's called. Sorry, I'm not a guru :)

So bro, how much ram you got?

The Big Nasty IO Bug (1)

zx2c4 (716139) | more than 3 years ago | (#34773982)

You're probably running into this long-standing IO bug [] , which, despite complaints for many years, has still not been properly diagnosed. A big mystery, evidently.

Not enough information (1)

Sits (117492) | more than 3 years ago | (#34774358)

You don't say which kernel you are using, so it could be that the problem you are seeing has already been fixed in a later kernel. However, it is unlikely that the removal of the BKL will make a difference for you if you're on 2.6.36, since most subsystems were already using fine-grained locks of their own before this. There might be another change in 2.6.37 that helps, but I'd say it's unlikely...

PPP in ipv4? (1)

B5_geek (638928) | more than 3 years ago | (#34771668)

So, does this mean I don't need to play around with rp-pppoe or pppoe-conf (or equivalent) for DSL setup/configuration? And if yes, how then?

Re:PPP in ipv4? (1)

ESD (62) | more than 3 years ago | (#34774142)

Unfortunately not. PPPoE runs the PPP protocol over Ethernet, not over an IPv4 connection (which in turn usually runs over Ethernet).
This will probably be more useful for creating tunnels between different IPv4-connected hosts, such as for VPNs.
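To make the layering concrete: the PPP frame is the same either way; what differs is the wrapper around it. A sketch of the PPPoE session encapsulation, with header layout and constants per RFC 2516 (the packing itself is just an illustration):

```python
import struct

PPP_IPV4 = 0x0021           # PPP protocol number for IPv4 payloads
ETH_PPPOE_SESSION = 0x8864  # EtherType for PPPoE session-stage traffic

def pppoe_frame(session_id, payload):
    """PPP carried directly inside an Ethernet frame (what rp-pppoe speaks).

    PPPoE header per RFC 2516: ver/type byte 0x11, code 0x00 for
    session data, 16-bit session id, 16-bit length of what follows.
    The new kernel feature instead wraps the same PPP frame in an
    IPv4-based tunnel rather than a raw Ethernet frame.
    """
    ppp = struct.pack("!H", PPP_IPV4) + payload
    return struct.pack("!BBHH", 0x11, 0x00, session_id, len(ppp)) + ppp

frame = pppoe_frame(0x0001, b"<ip packet bytes>")
# The PPP protocol field sits right after the 6-byte PPPoE header:
print(hex(struct.unpack("!H", frame[6:8])[0]))  # 0x21
```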

task groups patch (1)

Anonymous Coward | more than 3 years ago | (#34773154)

Did Mike Galbraith's per-TTY task groups patch make it? I can't find any reference to it in the release notes.

Versioning? (1, Interesting)

fuzza (137953) | more than 3 years ago | (#34773902)

So, what's the deal here - have they pretty much abandoned the old "odd minor releases for development, no new features in stable versions" plan, or what?

Re:Versioning? (2)

shish (588640) | more than 3 years ago | (#34774438)

Yeah, they abandoned that a few years ago. The current plan is something along the lines of "six weeks development, two weeks bugfixes-only, release, repeat", incrementing the third part of the version number each time (i.e. there are no plans for the "2.6" part to ever change, AFAIK).