Linux Kernel 2.6.31 Released 374
diegocgteleline.es writes "The Linux kernel v2.6.31 has been released. Besides the desktop improvements and USB 3.0 support mentioned some days ago, there is an equivalent of FUSE for character devices that can be used for proxying OSS sound through ALSA, new tools for using hardware performance counters, readahead improvements, ATI Radeon KMS, Intel's Wireless Multicomm 3200 support, gcov support, a memory checker and a memory leak detector, a reimplementation of inotify and dnotify on top of a new filesystem notification infrastructure, btrfs improvements, support for IEEE 802.15.4, IPv4 over Firewire, new drivers and small improvements. The full list of changes can be found here."
Linux audio (Score:3, Insightful)
there is an equivalent of FUSE for character devices that can be used for proxying OSS sound through ALSA
That quote shows how much of a train wreck Linux audio is.
Comment removed (Score:5, Insightful)
Re: (Score:3, Insightful)
Yes, because userspace sound daemons were invented by ALSA. We didn't have these with OSS, not at all....
Re:Linux audio (Score:4, Informative)
Re:Linux audio (Score:4, Insightful)
You don't need them with OSS on FreeBSD and Solaris (for example), or on Linux with the out-of-tree OSS 4 implementation
You don't need them in ALSA either, because dmix is implemented in the ALSA library, not as a userspace daemon.
It's amazing the incredible amount of FUD that has been spread about these topics...
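(For reference, when a distro doesn't enable it, dmix can also be set up by hand in ~/.asoundrc. A minimal sketch; the card name, rate and ipc_key are assumptions to adjust for your hardware:)

```
# ~/.asoundrc - route the default PCM through dmix (values are examples)
pcm.!default {
    type plug
    slave.pcm "dmixer"
}
pcm.dmixer {
    type dmix
    ipc_key 1024          # any unique integer
    slave {
        pcm "hw:0,0"      # first card, first device
        rate 48000
    }
}
```

Every ALSA app then shares the card through dmix without knowing it.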
Re: (Score:3, Interesting)
They were only needed with Linux OSS because Linus refused to do audio mixing in the kernel. This meant the resource sharing and hardware abstraction the kernel _should_ be doing was delegated to user space.
Re: (Score:3, Interesting)
The real fix would be to make PulseAudio optionally use OpenAL, so that cards with accelerated mixing can be made to use it. I don't see the point, though: not only are modern CPUs more than powerful enough to do it in userspace, they also can't have per-card defects while doing it.
Now that we do have PulseAudio it's best to trim as much fat and necrotic code from the kernel as possible. If the remaining realtime issues can be resolved, for which there is much experimental literature, it'll be perf
Re: (Score:3, Interesting)
I like the features of pulse audio, especially per-application volume control, but it is not worth 200 ms latency to get it.
Cheers
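For scale, buffer latency is just frames divided by sample rate. A quick sketch of the arithmetic behind the 200 ms figure (the 44.1 kHz rate and the 256-frame JACK-style buffer are assumed examples):

```python
# latency (ms) = buffer_frames / sample_rate * 1000

def buffer_latency_ms(frames: int, rate_hz: int) -> float:
    """Latency contributed by a buffer of `frames` samples at `rate_hz`."""
    return frames / rate_hz * 1000.0

def frames_for_latency(ms: float, rate_hz: int) -> int:
    """Buffer size needed to hold `ms` milliseconds of audio."""
    return round(ms * rate_hz / 1000.0)

print(frames_for_latency(200, 44100))            # 8820 frames behind real time
print(round(buffer_latency_ms(256, 48000), 1))   # 5.3 ms for a small JACK buffer
```

The gap between those two numbers is essentially the whole argument in this subthread.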
Re:Linux audio (Score:5, Informative)
So just use ALSA!
The situation on Linux is that there used to be OSS, and now there is ALSA. ALSA works fine for pretty much everybody. There are a few legacy apps which use OSS because no one is updating them, and obviously it would be nice if they would play nice. PulseAudio is a bit strange, but nothing requires its use, and IMHO there is no real reason for it to be used unless you want to do somewhat unusual things (that you generally can't do on any other type of OS). Don't use PulseAudio if you don't want to; if your distro forces it on you, use a sane one.
This scary graph [adobe.com] and related ideas tend to get mentioned in connection with this, but it conflates libraries, sound servers, and drivers to some extent. One could draw a similar graph for Windows, featuring programs using the QuickTime library, the WMP library, MME, DirectSound, WASAPI and various other APIs and libraries (and I haven't even gone into the changes to the audio driver model). WMP would have plenty of in-arrows from applications using its libraries, and plenty of out-arrows because it supports more than one API. And don't forget that there are still legacy applications which need to be the only app playing audio, just like on Linux.
Here is why I can't be bothered to learn enough about the driver layer to give examples: "UAA is intended to be a complete replacement for developing WDM Audio Drivers; however, in some cases it may be necessary for an otherwise UAA-compliant audio device to expose capabilities that cannot be done through UAA. Windows will continue to fully support audio drivers that use the PortCls and AVStream drivers." [wikipedia.org]
Audio technology has evolved, lots. Having backward compatibility requires that things get slightly complex. Everybody is doing this. I think Linux is doing it rather well, although certain distros make some odd choices.
OSS made it impossible to play more than one stream at once on a lot of hardware.
Re:Linux audio (Score:5, Interesting)
OSS made it impossible to play more than one stream at once on a lot of hardware.
With a standard configuration, ALSA does too; you have to load the dmix module in your config to act as a software mixer on cards that don't do hardware mixing (most onboard chips).
This is where the userspace daemons enter it all: most of them just started out as another layer that does software mixing, but every man and his dog came up with his own invention.
As for just using ALSA, that's great if you don't mind missing certain functionality; some of the sound daemons do add some nice features (JACK is the only one I've found worth using, though). It could be argued that the driver layer shouldn't have to deal with some of that advanced functionality, which is another reason why these daemons were made.
Re: (Score:2)
With a standard configuration, alsa does also
All the relevant desktop distros enable dmix by default...
Re: (Score:3, Interesting)
OSS made it impossible to play more than one stream at once on a lot of hardware.
That was OSS 3. OSS 4 apparently allows you to do this on all hardware and is apparently much nicer than ALSA. It's also open source again. I read a good article about this situation a while ago but can't find it now.
Re:Linux audio (Score:5, Insightful)
Re: (Score:3, Interesting)
Yet they are curiously silent when commercial entities release derivative works under their own license.
Re: (Score:3, Insightful)
Insane Coding's Insane Plan for Networked Audio (Score:3, Insightful)
The article you're probably thinking of is State of Sound in Linux Not So Sorry After All [blogspot.com].
Good article, but I've got to point out one really bad piece of misinformation this damn fool made...
"If users need remote sound (and few do), one should just be easily able to map /dev/dsp over NFS, and output everything to OSS that way, achieving network transparency on the file level as UNIX was designed for (everything is a file), instead of all these non UNIX hacks in place today in regards to sound."
<sigh> OK, let's break this down. First, you can't export a character device over the network v
Re: (Score:3, Interesting)
Recently upgraded my motherboard to a new Gigabyte model which had on board HDMI and other audio for HDCP viewing. Needless to say, the standard ALSA packages for Ubuntu failed rather miserably to work. After several days of fighting with connectors, config files, reboots, re-installations and silent refusal to work, I only managed to get sound working by compiling ALSA from source. Of course, this now means that ALSA must be recompiled every time I upgrade the ke
Re: (Score:3, Informative)
OSS was okay
It really wasn't, depending on your needs of course. If OSS were good enough, ALSA would not have been invented.
Pretty much all cards are handled by ALSA in the kernel back end of things; that part is pretty standardised. The whole problem is the sound server or userspace daemon that handles mixing and other bits. PulseAudio was a band-aid with horrible latency, and only professional apps tend to support JACK. aRts and esd at least seem to have died out when the most popular KDE and GNOME distros both went to pulseau
Re: (Score:2)
Please define "works". After that, imagine that some people have other needs, and what the definition of "works" is in their case. Think about multiseat setups: have one user's X session play audio on the front speakers and another user's X session play on the rear speakers, with separate master volume controls. Think about using one microphone to record to several different applications at the same time. Think about logging in to a remote computer and the audio of the applications you start play on y
Re: (Score:2)
It still is. I use it on archlinux after alsa farked up unexpectedly and it works fine.
Well, actually, not quite, the mixer is broken. But I like to pretend it isn't.
OSS is the unix way (Score:3, Informative)
If ALSA is so great, why did it never get copied out side of Linux?
Anyone else prefer having proper file interfaces for things, like Unix should do?
If I want to write sound I write to
If I want to write sound out to a second sound card, I write to
Now I use ALSA, because the Linux support is all geared that
Re: (Score:2)
not every app out there is written to use the dmix plugin.
The dmix plugin is used transparently in the ALSA libs, apps don't need to be rewritten to use it...
Re: (Score:3, Insightful)
I agree but the situation is getting better and better. Pretty much every distribution has standardized on Pulseaudio and while it caused lots of problems in the beginning, and it still causes some problems on certain setups (especially with legacy, badly coded applications/games/emulators), it is a good API and it IS the future of Linux desktop audio, whether you like it or not. When this transitional period we are currently in is over, everyone will be much better off.
Re: (Score:2)
As someone who does pro audio production (and has to reboot into OS X to do most of it properly), that sounds like a threat to me. We've waited long enough; can we please just get back to OSS? There is no good reason not to at this point.
Re:Linux audio (Score:5, Insightful)
PA is for desktop audio. For pro audio production you'll run JACK and have PA output its audio to JACK instead of directly to ALSA. That way your pro audio apps will get their super low latency and all of the apps that can get away with 50ms latency will play through PA to JACK. You get the best of both worlds.
Re: (Score:3, Informative)
How about keeping PA and JACK but skipping the ALSA layer, then
But how should PA/JACK talk to the sound card? ALSA (not counting the userland plugin system) is pretty much an API and a collection of drivers.
(especially since non-Linux systems apparently manage without it)?
Because they use something else instead of ALSA, like OSS, Core Audio or Direct Sound.
Re:Linux audio (Score:4, Insightful)
For some time Alsa was the "new tech". Now PulseAudio. By the time it stabilizes, there will be something else.
Re: (Score:3, Interesting)
it is a good API and it IS the future of Linux desktop audio,
The future of Linux audio it may be... but "good" is questionable. No one using their Linux machine for synths or MIDI would touch PulseAudio with a ten-foot pole; JACK is far superior with a lot less latency, but only applications designed for pro audio use tend to utilize it.
When this transitional period we are currently in is over, everyone will be much better off.
The latency incurred by PulseAudio is horrendous: for YouTube or movies that's fine, for gaming it's questionable, for music production it's nasty. These days completely removing PulseAudio and getting it all going again is quite an
Re: (Score:3, Insightful)
As I wrote above: For pro audio production you'll run JACK and have PA output its audio to JACK instead of directly to ALSA. That way your pro audio apps will get their super low latency and all of the apps that can get away with 30-50ms latency will play through PA to JACK, at the same time even.
With the very latest versions of Pulseaudio combined with a realtime kernel (soon to be merged into the mainline kernel), Pulseaudio won't give you much latency at all. It also uses MUCH less CPU than JACK so it's
Re:Linux audio (Score:4, Interesting)
What IS the PA latency, and the Jack latency? Jack seems idiotic; just use the sound card directly. Seriously, consider this: JACK -> ALSA, you can go directly to ALSA anyway (I haven't had a sound server for years). Do the mixing in your application and output it to ALSA. If real-time performance is an issue, don't run multiple apps at once. Record separate tracks versus a (monitored) metronome(!) when doing music, and then merge them with Audacity or GLAME.
"Professional" audio amateurs seem to all be n00bs, using their recording device (computer) for playback whereas real "professionals" use monitors, metronomes, visual cues, and master tracks. You monitor your metronome, monitor yourself, monitor the playback track, whatever; and record a separate track. Then later you digitally merge those together. QED. Whatever stupidity relies entirely on your computer being able to low-latency its way out of a paper bag for you to get any work done is a huge engineering error.
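For what it's worth, "do the mixing in your application and output it to ALSA" boils down to summing the streams and clipping. A toy sketch (real code would work on the card's native interleaved sample format; the lists of plain ints here are a simplification):

```python
# Mix several streams of signed 16-bit samples into one, clipping on overflow.

def mix16(*streams):
    """Mix equal-length lists of signed 16-bit samples sample-by-sample."""
    mixed = []
    for samples in zip(*streams):
        s = sum(samples)
        mixed.append(max(-32768, min(32767, s)))  # clamp to int16 range
    return mixed

print(mix16([1000, -2000, 30000], [500, -500, 10000]))
# [1500, -2500, 32767]  (last sample clipped)
```

This is essentially what dmix, PulseAudio and JACK all do internally; the argument is only about *where* it should happen.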
Re:Linux audio (Score:5, Funny)
it is a good API and it IS the future of Linux desktop audio,
It may be a good API, but it's not a good implementation. But yeah, I can agree that glitchy, high latency audio is the future of Linux desktop audio.
Re: (Score:3, Funny)
You know what? Maybe in the future Jack will implement the Pulseaudio API and be able to function as a drop-in replacement to Pulseaudio. It's not THAT unfeasible. Also, the PA implementation is getting better and the latest versions don't have that high latency if run on a -rt kernel with realtime privileges. A bit buggy under certain conditions, yes, but that will be fixed in the future.
Re:Linux audio (Score:4, Insightful)
A bit buggy under certain conditions, yes, but that will be fixed in the future.
Except this is the exact excuse we get countless times when audio, video, etc don't work in Linux. Just give us more time! We swear it'll work in the future! Then you wait 6 months and all that previous work is scrapped and something new is built. Then we're told again: Just give us more time! We swear it'll work in the future! Lather, rinse, repeat.
Re:Linux audio (Score:5, Insightful)
"Pretty much every distribution has standardized on Pulseaudio" is the very definition of regress. What was that you said about getting better and better? I installed Debian unstable on my laptop, with the KDE desktop, and it also installed and enabled this trainwreck called "PulseAudio", whose only purpose is to disable the audio of an already-working system. Sound has worked for me in Linux since forever; I never had any problems with it until PulseAudio came around.
During the early days I had been using a sound card with hardware mixing. Back then even Windows wasn't coping well with several streams on a card supporting only one, so what OSS offered was good enough for me, and on par with other operating systems. Then came ALSA, which offered dmix and dsnoop to do it in software. Now, dsnoop has never worked for me, but I don't know any other operating system that supports such a feature, so I guess I don't have much ground to complain.
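(On the dsnoop point: it is configured much like dmix, so for anyone who wants to try shared capture anyway, a minimal ~/.asoundrc sketch; the device name and ipc_key are assumptions:)

```
# Shared microphone capture via dsnoop (values are examples)
pcm.mic_shared {
    type dsnoop
    ipc_key 2048
    slave.pcm "hw:0,0"
}
```

Each recording application then opens "mic_shared" instead of the raw hw device.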
Then PulseAudio came around, and that is the first time when I had any problem with sound on Linux. Sound started to be skippy, jumpy, choppy, and not working in some applications. Why would anyone think that PulseAudio would be a good idea? Now, don't get me wrong, I like PulseAudio, I even use it for some tasks. Namely playing music from my laptop on the soundcard of my desktop. But thanks to the brilliant idea that PulseAudio should be used everywhere I couldn't really do that anymore, because I had to eradicate PulseAudio to have sound again, so I couldn't use it for *my* needs. Fuck me.
Disclaimer: I'm not sure I'm chronologically correct above, the sound might have been in a better state in Windows than in Linux during OSS times, I just mean that I was already used to being able to play only *one* sound at a time when I first came to use Linux, so it seemed pretty normal thing to me.
Re: (Score:2)
Yes, this transitional period is pretty harsh on those of us who want to run some older software that uses bad coding practices. :( Thankfully, it will only get better as applications are fixed and backwards compatibility interfaces, like the CUSE+ossp mentioned in the article summary, improve.
Yo dawg (Score:3, Funny)
We heard you like your audio to work, so we put a sound API in your sound API, so you can have silence whilst you listen!
Re: (Score:3, Funny)
It's no worse than video is, really. The four-second PulseAudio lag* matches nicely with the lack-of-vsync-based tearing in X.**
Actually, I take that back: video is worse. At least with PulseAudio I can see how it's eventually supposed to work if it didn't crash periodically. The clusterf_ck that is video playback doesn't look like it'll get fixed anytime soon, what with the six-party fight between all the various components.
You can really tell that the bills for Linux's development are being paid by ser
Re:Linux audio (Score:4, Insightful)
What bothers me here is that I read "Oh, change this, do that turn this knob and sound will work for you." Then it works until there's a new kernel update (I use Ubuntu) and it breaks again. Or it just stops working after too many applications use it.
Then you read how fabulous and wonderful PulseAudio is, but it just plain does not work. By working, I mean it should work every time, all the time, without knob turning. It's embarrassing that in this area, Windows 95 is superior to Linux in almost every respect.
All this effort is put into chrome polishing the kernel for faster SMP with 64 CPU systems and the dang box can't even play music without having some sort of brain failure.
Re: (Score:2)
Why mod this insightful? Just because ALSA proxies OSS, does not mean it HAS to. It is your choice, and choice is part of Linux philosophy. ALSA works fine with its own hardware drivers, without OSS involved at all. Which it usually does too. You are complaining that somebody gave you an option to use a soundcard with OSS-only mixer with ALSA applications. Where is the logic in that? It is like complaining that PulseAudio should be removed and buried because you don't use it, even though many find it conven
Re: (Score:2)
Please see the OSS implementation in FreeBSD for a lesson in how sound should be done.
Yeah, FreeBSD. And instead, why not take a look at how OS X and Windows (Vista and ahead) implement their sound systems? Hint: Both mix audio in userspace, and Pulseaudio is the closest thing to them in Linux land.
But hey, what do OS X and Windows know about desktops and professional sound systems? Nothing. That's why we all should follow the lead and use cutting-edge technology like OSS and in-kernel sound mixing.
IPv4 over Firewire? (Score:4, Interesting)
I guess I really wasn't into linux until the last 3-4 years, but hasn't OS X done this since the start? And I think my XP machine at work tries to use Firewire as a network adapter.
What took so long? Honest question.
Re: (Score:3, Insightful)
Re:IPv4 over Firewire? (Score:4, Interesting)
Networking over USB would be awesome. Link 2 PC's with USB cable and voila! Hell, even being able to mount an internal drive that way on the other machine would be cool. Anything like that in the works (haven't checked)?
Re:IPv4 over Firewire? (Score:4, Interesting)
How was the parent modded troll, it's completely valid!
It's a good idea. There have been networking-over-USB devices (by which I mean plugging both machines' USB ports into the device, not "merely" a USB ethernet adaptor). The problem with doing this with USB, rather than Firewire, is that USB has a really strong concept of "host" and "device". The cables are made to only plug into certain combinations of endpoints because, sadly, only certain combinations of endpoints can possibly work. You can't plug the host controller of one PC into another, since they're only expecting to talk to devices, not another controller. This is in contrast to Firewire, which is peer-to-peer and (in principle) anything could talk to anything over it.
The unfortunate consequence is that you don't just get to do networking over a nice, cheap cable as you do with Firewire. You actually need a little device box in between so that both hosts can believe they're talking to a peripheral, not another host. This approach, on its own, wouldn't let you plug in "remote" devices either so you'd have to set some other protocol up (plenty of existing options here) to talk to devices at the other end. You have to be a bit careful because most devices would barf horribly if there are multiple users - uncontrolled shared access to a disk device is a good way to lose all your data, for instance.
Although it's fun to do IP over Firewire, I'm not familiar with exactly how it's implemented. What intrigues me is the prospect of running increasingly sophisticated high-performance protocols over Firewire. As I understand it you can basically get remote DMA access to the "other end's" memory. This obviously has severe security implications but it could be quite nice in a mutually-trusting cluster. There are various protocols (e.g. used by Infiniband) for having communications over remote DMA. I wonder if anyone could put together an "infiniband lite" that just ran over Firewire. It'd be cool, though I don't know if it would be particularly useful ;-) (plus it would lack the user accessible networking Infiniband has)
Re:IPv4 over Firewire? (Score:4, Informative)
[...] only certain combinations of endpoints can possibly work. You can't plug the host controller of one PC into another, since they're only expecting to talk to devices, not another controller.
Not so with Linux. You can enable the "USB gadget" driver in the kernel. Now if you have a device connector in your system, it can act like any other device. That is how Linux on small devices connects to their hosts via USB. And actually, the way they communicate is plain and simple TCP/IP. :)
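As a sketch of what that looks like, assuming an Ethernet-style gadget (CONFIG_USB_ETH builds the g_ether module; interface names and addresses below are examples):

```
# Gadget-side kernel config (the device needs a peripheral controller):
CONFIG_USB_GADGET=y
CONFIG_USB_ETH=m

# After `modprobe g_ether` on the gadget and plugging it into the host,
# both ends see an ordinary usb0 network interface:
#   gadget: ifconfig usb0 192.168.129.1 up
#   host:   ifconfig usb0 192.168.129.2 up
```

From there it's plain TCP/IP, exactly as the parent says.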
Re: (Score:3, Informative)
Yes, that's true, I should have mentioned that. But you still can't plug a normal host controller into another, regardless of the software stack you're running. You need special hardware that most PCs won't have that implements the device end of the channel. I think that it's something of a shame that PCs don't include this hardware but I imagine it wouldn't be that useful, given they all include ethernet ports these days.
Re: (Score:3, Informative)
A basic firewire controller with pass-through is cheaper than a 2-port 1000baseTX switch
In this day and age, what do you use a 2-port switch for? You don't know that plugging the cable directly between cards just works?
Re:IPv4 over Firewire? (Score:4, Informative)
I haven't looked at the official changelog or the code yet, but I'm just as confused as you about that item. Moreso perhaps, as I have used IPv4 over firewire with two linux machines before. That was probably 5 years ago or so.
Re: (Score:2, Informative)
Re:IPv4 over Firewire? (Score:5, Informative)
From the changelog
*The new firewire driver stack is no longer considered experimental, and distributors are encouraged to migrate from the ieee1394 stack to the firewire stack
*Added IP networking to the new FireWire driver stack
It does add up: IP networking has just been added to the new stack; the old stack had it already.
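Assuming the new stack's firewire-net driver and its default interface naming, bringing the link up is then just ordinary IP configuration (the addresses are examples):

```
# Machine A:
ip addr add 10.0.0.1/30 dev firewire0
ip link set firewire0 up
# Machine B:
ip addr add 10.0.0.2/30 dev firewire0
ip link set firewire0 up
# From A, over the FireWire cable:
ping 10.0.0.2
```

No crossover-vs-straight cable worries either, since FireWire is peer-to-peer.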
Re: (Score:3, Informative)
You've never seen a MiniDV camera, then?
Re: (Score:2)
How the heck can you reliably pump video from a video camera to a PC? While USB3 has similar bandwidth, the overhead of the protocol, especially when there are other USB items in the chain, slow down the real throughput.
I know, . Anyone pull up a good comparison on Google?
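A back-of-the-envelope answer to the comparison question, using nominal figures (the USB 2.0 effective-throughput number is an assumption; FireWire's real advantage for DV capture is its isochronous bandwidth guarantees, not raw headroom):

```python
# Rough headroom over a DV stream for each bus (all figures in Mbit/s).

DV_MBIT = 28.8          # DV video plus audio/overhead, roughly
FIREWIRE_400 = 400.0    # nominal, with isochronous scheduling
USB2_EFFECTIVE = 280.0  # assumed effective bulk throughput, not the 480 on the box

print(round(FIREWIRE_400 / DV_MBIT, 1))    # ~13.9x headroom
print(round(USB2_EFFECTIVE / DV_MBIT, 1))  # ~9.7x headroom
```

Both buses have headroom on paper; the difference is that FireWire can *reserve* the DV stream's slice of it, while USB bulk transfers can be starved by other devices on the chain.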
Re:IPv4 over Firewire? (Score:5, Interesting)
However, the trend in camera tech, at least at the consumer level, is making that increasingly irrelevant. Flash and HDD based camcorders are gradually devouring DV camcorders in the lower end market. Pretty much all the HDD or flash based cameras (at least the ones that cost less than the computer they are connected to) just show up as USB mass storage devices, with one or more video files on them. Drag and drop and go. Unlike DV, where the transfer requires that X megabits per second make it from point a to point b, on time, or you'll get glitches, mass storage just requires that all the bits get from point a to point b before the user gets bored. USB still isn't quite as good as firewire at doing that; but the difference in performance is small, and the difference in price/convenience is large.
Once you get away from the real time streaming requirements of DV, to which firewire is well suited, transferring video is just a special case of connecting an external hard drive. Firewire is better there; but only modestly, which isn't really good enough to survive on the price sensitive end of things.
70% drivers! (Score:3, Interesting)
Lots and lots of driver work. Over 70% of all of the 2.6.30 to 2.6.31 patch is under drivers/, and there's another 6%+ in firmware/ and sound/. That's not entirely unusual, but it does seem to be growing. My rough rule of thumb used to be "50% drivers, 50% everything else", but that's clearly not true any more (and hasn't been for a while - we've been 60%+ since after 2.6.27).
I personally think this is a real pity. So much time is being spent on getting drivers implemented that new features and other kinds of enhancements are being pushed back.
Re:70% drivers! (Score:5, Insightful)
Re: (Score:3, Insightful)
I'd argue that drivers should be modular and have no business being directly in the kernel in the first place - but that's just me.
Re:70% drivers! (Score:4, Informative)
I'd argue that drivers should be modular and have no business being directly in the kernel in the first place - but that's just me.
$ find /lib/modules/2.6.27.7-smp/kernel/drivers/ -type f|wc -l
1499
Looks like you're in luck!
Re:70% drivers! (Score:4, Insightful)
Great - now if I compile these lovely drivers will they work on my buddy's (or more importantly, a user's) system running kernel 2.6.1? 2.6.22? 2.6.31? 2.4.5?
Dividing the source and binary out into separate files doesn't make it modular. The infrastructure to move the binaries around needs to be in place so that a driver can be loaded with little regard as to kernel version.
Re: (Score:2)
Re:70% drivers! (Score:5, Insightful)
Re: (Score:3, Informative)
Ah so now you're arguing that they should freeze the interface, and prevent any more improvements.
Read http://www.mjmwired.net/kernel/Documentation/stable_api_nonsense.txt [mjmwired.net]
Re: (Score:3, Insightful)
Because he is. Everyone can't be expected to be running the same version of the kernel. I'm not running 2.6.1, but I do believe that my home system is running 2.6.28. Tying driver development to a specific kernel release is insane. Look at the hoops Nvidia has to jump through to get drivers out. On Windows I download a driver - it works. For the last 14 years we've basically had three groups of drivers - Windows 9x, Windows 2k/XP, and Windows Vista/7. Outside of those broad groupings a manufacturer co
Re: (Score:3, Insightful)
Everyone can't be expected to be running the same version of the kernel
Why not? Upgrades are free.
Look at the hoops Nvidia has to jump through to get drivers out.
The only hoop they really need to jump through is opening their source.
Re:70% drivers! (Score:5, Insightful)
I'd argue that drivers should be modular and have no business being directly in the kernel in the first place - but that's just me.
I don't think anyone ever argued that drivers should not be modular; in fact, that's why there are kernel modules. I'm guessing you're talking about one of two general flamewars:
1) Monolithic kernel or microkernel
2) Stable ABI for drivers
The first is about making the kernel into a big message-passing daemon, which it turns out has a performance penalty and ultimately doesn't have big enough benefits: a kernel panic and a major subsystem hang/crash are both ugly, and if the hardware is left in a borked state it might not really help.
The other is a stable ABI, which has been suggested about 234,533,458 times to date. My only real comment to that is that seeing how crappy many Windows drivers are, do you honestly want them making blobs for a 1% operating system which will get about as much priority, support and bugfixes? Drivers based on specs or donated source almost always suck less.
Re: (Score:3, Interesting)
The first is about making the kernel into a big message-passing daemon, which it turns out has a performance penalty
More accurately, it has a performance penalty on uniprocessor systems. On SMP systems, using a system such as the one used by Xen with lockless ring buffers in shared memory, it provides a performance gain.
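A toy sketch of the single-producer/single-consumer ring-buffer idea the parent alludes to: head and tail are each written by only one side, so no lock is needed (Python for brevity; a real shared-memory version additionally needs memory barriers, which Python hides):

```python
# Lockless SPSC ring buffer: producer writes tail, consumer writes head.
# One slot is kept empty to distinguish "full" from "empty".

class SpscRing:
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.head = 0   # written only by the consumer
        self.tail = 0   # written only by the producer

    def push(self, item) -> bool:
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:        # full
            return False
        self.buf[self.tail] = item
        self.tail = nxt             # publish only after the write
        return True

    def pop(self):
        if self.head == self.tail:  # empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

r = SpscRing(4)
for m in ("a", "b", "c"):
    r.push(m)
print(r.push("d"))                # False: only capacity-1 slots usable
print(r.pop(), r.pop(), r.pop())  # a b c
```

Run one end in each domain over shared memory and you get message passing with no syscall or lock on the fast path, which is the gain being claimed here.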
The other is a stable ABI, which has been suggested about 234,533,458 times to date. My only real comment to that is that seeing how crappy many Windows drivers are, do you honestly want them making blobs for a 1% operating system which will get about as much priority, support and bugfixes?
A stable ABI is less important than a stable API (which Linux doesn't have either), which reduces the overhead of maintaining device drivers a lot and makes regressions much harder to introduce accidentally.
Re:70% drivers! (Score:4, Insightful)
Far more likely is that companies will behave exactly like they do in Windows - not bother updating their drivers for new versions. There is a lot of old hardware that simply doesn't work in Vista etc because there's no incentive for the company to fix the drivers.
Whereas if it's in the kernel, all the drivers can be fixed at the same time.
Re: (Score:3, Insightful)
It would also mean that drivers would be stuck on 32-bit x86. Great.
Re:70% drivers! (Score:4, Insightful)
That would make it easier to manage as the number of them increases.
One of the great advantages of Linux is that improvements can be implemented for lots of devices at once.
I guess a lot of manufacturers will release binary-only drivers, but even if they are buggy, that doesn't leave us in any worse state than we are already in - those manufacturers aren't releasing drivers for Linux in the first place.
It means people will look at the packaging, notice that it says Linux support, and then find out that it doesn't actually work, and will then blame Linux. Manufacturer-written drivers are fairly universally crap.
Re: (Score:2)
I totally agree, but it seems the linux devs don't want to have to maintain a stable driver ABI. I think that's a pretty silly position to take; hopefully they'll change their minds at some point.
Re: (Score:2)
Re:70% drivers! (Score:4, Insightful)
I personally think this is a real pity. So much time is being spent on getting drivers implemented that new features and other kinds of enhancements are being pushed back.
I would assume that the people writing drivers and the people doing core stuff are not the same people, so there's no "pushed back". Ideally you'd have driver writers employed by all the various hardware manufacturers, while core stuff likely only interests a much smaller group of companies that live higher in the stack (probably just system and support vendors).
Re: (Score:3, Insightful)
Well, driver problems are the real problem with Linux. They always have been. When push comes to shove, comparing Linux with other OSes, even the Linux zealots admit that it is a driver problem. Most kernel features will not directly affect us the way driver issues do. Once Linux fixes its driver problems it should focus on getting more features in... However, in the meantime the kernel should be improved on what a kernel is supposed to do: manage hardware and interface with software. And drivers help with the
Re:70% drivers! (Score:5, Insightful)
Insightful?! You couldn't be more clueless.
How do you propose to fix the driver problem? The only way that gets fixed is when every hardware manufacturer writes their own drivers. That would only happen if Linux attained something like 10% market share.
In recent history (the last year or two) the majority (50.1-60%) of all commits to the kernel are drivers/driver updates.
Also, you forget that there isn't some company that dictates what work gets done on the kernel. There are many developers who work on the areas they want to work in. Are you telling me that Linux should reject the btrfs filesystem because some no-name piece of hardware isn't working yet?
Most kernel features will not directly affect us like driver issues.
Wrong again. My new hardware, which I bought off Newegg last week, works fine in Linux (yes, I do a quick Google search to make sure nobody is bitching about something major not working, but anyone who uses Linux knows to do that). Because it works, any feature such as a file system, scheduler improvement, or desktop memory management in low-memory situations will improve my experience much more than adding a driver that I won't ever need or use.
Re:70% drivers! (Score:5, Informative)
There may have been a driver problem years ago, but today the problem is pretty much limited to graphics. And DVB, but the competition is doing just as badly there. Overall, Linux driver support is more complete than at least that of Windows Vista.
Re:70% drivers! (Score:4, Informative)
Not really. The driver people aren't really the same as those who would be researching new and exciting ways to do what we already do. For quite a long time now, driver development has made up the majority of Linux kernel development.
Of course, every now and then they make something new like mac80211, but all that really achieves is more efficient code re-use and testing, which is always good but is still just driver development.
All the simple things an operating system kernel has to do haven't changed over the last ten or so years; just the hardware has. Operating system theory was pretty much perfected back in the '60s.
Re:70% drivers! (Score:5, Insightful)
What evidence have you got that suggests driver development means other development is pushed back? Do you think the EXT4 developers take time off to write device drivers?
Lots of driver development means Linux has lots of driver developers. That probably suggests that hardware manufacturers actually try to get their stuff supported.
Re:70% drivers! (Score:4, Informative)
The fact that, with a modern Linux distro, I can plug in pretty much any hardware at least a year old and have it just work no questions asked is a pretty damn spiffy feature.
enhancements are being neglected (Score:2)
I thought the main bugaboo was the lack of hardware support in Linux. What other features and enhancements are being neglected?
Re: (Score:2)
Using CUSE for sound devices is The Right Way (Score:5, Informative)
Before we all moved on to worrying about PulseAudio it was traditional for us to complain about legacy apps using OSS, the difficulties associated with wrapping them, the nastiness associated with OSS emulation being implemented in the kernel, etc. Those apps won't have gone away.
Previous attempts to emulate OSS using ALSA have included the aoss tool, which I believe did some mildly ungodly tricks to intercept calls that would usually go to the OSS APIs. It didn't always work for me, as it depends on what the (often weird and proprietary) app is doing to access the OSS API in the first place. PulseAudio likewise has to provide a tool to help you redirect legacy OSS apps to talk to PulseAudio instead. It's all Made Of Ick.
CUSE (character devices in userspace) allows a userspace program to provide a character device node in /dev and implement it using custom code, rather than relying on an in-kernel driver. When apps open the device node they'll *really* be talking to the userspace daemon implementing the device emulation, rather than to an in-kernel driver (though, of course, the kernel will be involved in relaying the communications through the device interface). This is very similar to what FUSE does for filesystems. The neat thing here is that weird tricks to catch OSS accesses by applications are not needed - the OSS device can simply be "faked" by the real sound daemon. Because it's implemented at device level, it doesn't matter what nasty hacks the OSS application is doing to access the soundcard - you'll *always* be able to grab its sound output from the fake device and do the right thing. No more running legacy apps with an OSS-related wrapper - and no more having the wrapper fail to work!
The end result should be that sound Just Works, even for awkward proprietary apps. CUSE will not automagically fix this on its own, though - we need to wait for the sound daemons like PulseAudio to catch up and implement the emulation. This might also allow OSS emulation to be removed from the kernel, which AFAIK also supports some variant of OSS-on-ALSA.
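To make the mechanism concrete, here is a minimal CUSE daemon sketch in C, modeled on libfuse's CUSE lowlevel API (its cusexmp example follows the same shape). The device name "fakedsp" and the zero-filled reads are purely illustrative - a real OSS shim such as a PulseAudio backend would also implement the OSS ioctls and forward the audio:

```c
/* Minimal CUSE daemon sketch: registers /dev/fakedsp and answers
 * open()/read() from userspace. Build against libfuse, e.g.:
 *   gcc cusedemo.c $(pkg-config --cflags --libs fuse)
 * and run as root on a kernel with CUSE support. */
#define FUSE_USE_VERSION 29
#include <cuse_lowlevel.h>
#include <string.h>

static void demo_open(fuse_req_t req, struct fuse_file_info *fi)
{
    fuse_reply_open(req, fi);          /* accept every open() */
}

static void demo_read(fuse_req_t req, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    static const char silence[4096];   /* hand back zeroed "audio" */
    (void)off; (void)fi;
    fuse_reply_buf(req, silence,
                   size < sizeof(silence) ? size : sizeof(silence));
}

static const struct cuse_lowlevel_ops demo_ops = {
    .open = demo_open,
    .read = demo_read,
};

int main(int argc, char **argv)
{
    const char *dev_info_argv[] = { "DEVNAME=fakedsp" };
    struct cuse_info ci;

    memset(&ci, 0, sizeof(ci));
    ci.dev_info_argc = 1;
    ci.dev_info_argv = dev_info_argv;

    /* The kernel routes all file operations on /dev/fakedsp
     * to the callbacks above, exactly as FUSE does for files. */
    return cuse_lowlevel_main(argc, argv, &ci, &demo_ops, NULL);
}
```

Once this is running, any application that open()s and read()s /dev/fakedsp is really talking to this process - no wrapper or LD_PRELOAD trickery on the application side.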
Re: (Score:2)
Re: (Score:3, Interesting)
Putting sound mixing in userspace has advantages and disadvantages. The first advantage is that it moves code out of the kernel, which is usually a good idea because bugs in the kernel can crash the entire system. The other advantage is that it can use the normal scheduler. This is less important now, but sound mixing used to be very processor-intensive compared with the total system load and if it's in the kernel it can't be easily preempted by userspace tasks. The big disadvantage is that, rather than
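The mixing work being argued about here is, at its core, just summing samples with saturation - cheap on a modern CPU whether it runs in the kernel or in a daemon. A minimal sketch (illustrative helper, not any real ALSA or dmix API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mix two signed 16-bit PCM buffers into dst, clamping on overflow.
 * This is the core arithmetic a userspace sound daemon (or ALSA's
 * dmix plugin) performs per sample when combining streams. */
static void mix_s16(int16_t *dst, const int16_t *a,
                    const int16_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        int32_t sum = (int32_t)a[i] + (int32_t)b[i];
        if (sum > INT16_MAX) sum = INT16_MAX;   /* clip, don't wrap */
        if (sum < INT16_MIN) sum = INT16_MIN;
        dst[i] = (int16_t)sum;
    }
}
```

At 44.1 kHz stereo that is under 100k additions per second per stream pair, which is exactly why "mixing is too expensive outside the kernel" stopped being a serious argument.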
Re: (Score:3, Interesting)
It's not going to replace things like Jack (and OSS4 if that's available to you) but I don't think it's trying to.
It's trying to replace those weird LD_PRELOAD wrappers you have to use to make OSS-only apps speak to ALSA / PulseAudio. CUSE should be used to remove the need for LD_PRELOAD wrappers, making it more robust and simpler to use legacy OSS apps that can't use ALSA or PulseAudio directly. Regardless of what you think about replacing OSS, the current situation is pretty much pessimal: tricking appl
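The LD_PRELOAD wrappers being replaced work by defining their own open() (and friends) ahead of libc's, then fetching the real symbol with dlsym(RTLD_NEXT, ...). A toy interposer - redirecting /dev/dsp to /dev/null stands in for what padsp/aoss actually do, which is hand back a daemon-backed descriptor:

```c
/* Toy OSS interposer: rewrite open("/dev/dsp") to another path.
 * The real wrappers are built as shared objects and injected with
 * LD_PRELOAD; the interposition logic is the same either way. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <unistd.h>

typedef int (*open_fn)(const char *, int, ...);

int open(const char *path, int flags, ...)
{
    static open_fn real_open;
    if (!real_open)    /* look up libc's open behind our definition */
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    if (strcmp(path, "/dev/dsp") == 0)
        path = "/dev/null";    /* stand-in for a daemon-backed fd */

    if (flags & O_CREAT) {     /* open() is variadic: fetch mode */
        va_list ap;
        va_start(ap, flags);
        int mode = va_arg(ap, int);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
```

The fragility is visible right in the sketch: it only catches calls that resolve through the dynamic linker, so statically linked or syscall-happy apps slip past - which is exactly the gap CUSE closes by faking the device node itself.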
Yay! (Score:5, Funny)
Improved acronym support!
Numbers higher than the last version!
greater infusion processor link array warp drive systems!
Remote device fun (Score:2)
How long until the APIs provided by CUSE are used to implement an arbitrary-character-devices-over-network protocol? That would be pretty cool and useful. Should be doable, from what I understand of how it works.
The description of the in-kernel changes on LWN's article on the subject (http://lwn.net/Articles/308445/) made it sound like the infrastructure could also be used for stuff like network filesystems whose /dev contains *remote* character devices (currently NFS device nodes are always serviced by l
PPS (Score:2)
The LinuxPPS project is an implementation of the Pulse Per Second (PPS) API for GNU/Linux version 2.6.
native xfi support (Score:4, Informative)
Finally, ALSA adds in-kernel support for the Creative X-Fi after 4 years. Fuck Creative.
Re:i'd just like to (Score:5, Funny)
Give it up already, Richard.
Re: (Score:3, Insightful)
There really is a Linux, and these people are using it, but it is just a part of the system they use.
And that part is exactly what is being discussed here.
Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called "Linux" distributions are really distributions of GNU/Linux.
Or more properly, KDE/Xorg/GNU/Linux.
Re: (Score:2)
You fed the troll. You lose.
Re: (Score:2)
dumbass... TFA is _specifically_ referring to the kernel. If people want to use "Linux kernel" to refer to Linus's piece of code they're free to do so, just like some people call MS's operating system "Microsoft Windows", so just shut the fuck up.
Re: (Score:2)
Support for what? (Score:5, Interesting)
From your own article:
Jeff Ravencraft of Intel said that he expects the final specification to be announced in San Jose, Calif., on November 17.
Wait, so I'm supposed to be upset that Microsoft didn't ship experimental drivers for an unratified standard in their new OS?
Re: (Score:2)
Wait, so I'm supposed to be upset that Microsoft didn't ship experimental drivers for an unratified standard in their new OS?
What's really telling is that the previous comment got modded +4 Informative without anyone noticing this at all.
Re:Support for what? (Score:4, Interesting)
Re: (Score:3, Informative)
Re: (Score:3, Funny)
Yes, you are. This is slashdot.
Re: (Score:3, Insightful)
It's quite difficult to get decent performance from USB2, because the design of the host controller is such that the CPU needs to be involved in the transfer. If the CPU doesn't respond fast enough (perhaps because you're trying to do actual work with the machine), performance suffers. The problem is less severe with quad-core CPUs, since you can just dedicate a core to USB when doing transfers.
Firewire and SATA controllers can handle transfers pretty much on their own. Supposedly USB3 can do the same.
Re:is btrfs ready for regular desktop use ? (Score:4, Informative)
Re: (Score:3, Funny)
Nilfs? Nerds I'd Like to .. erm ... oh dear.