
Does Linux "Fail To Think Across Layers?"

kdawson posted more than 7 years ago | from the one-advantage-of-the-cathedral dept.

Linux Business 521

John Siracusa writes a brief article at Ars Technica pointing out an exchange between Andrew Morton, a lead developer of the Linux kernel, and a ZFS developer. Morton accused ZFS of being a "rampant layering violation." Siracusa states that this attitude of refusing to think holistically ("across layers") is responsible for all of the current failings of Linux — desktop adoption, user-friendliness, consumer software, and gaming. ZFS is effective because it crosses the lines set by conventional wisdom. Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.


Hey! (-1, Troll)

MightyMartian (840721) | more than 7 years ago | (#19004439)

Hey, we have this hip name for our development strategy, and it makes us waaaaay kewl, and you chaotic Commie Linux bastards will never get it, 'cause we're the best and you're not thinking "holistically" and diving through the "layers". Now excuse me, because I think I crapped myself.

Re:Hey! (1, Funny)

ZSO (912576) | more than 7 years ago | (#19004901)

Please remove yourself from the internet.

Re:Hey! (5, Interesting)

Daniel Phillips (238627) | more than 7 years ago | (#19004961)

Lovely biting sarcasm aside, to be honest, our storage layering in Linux leaves much to be desired. Witness the slow pace of improvement of the volume manager in recent years. This does not prove that layering is bad, but it suggests that our current conception of layering sucks pretty badly. For example, we are burdened with a ridiculously complex interface between application programs and kernel-level volume management support. Managed volumes live off in their own name space. Why can't I say "/dev/hda, you are now snapshotted, shazam"? No, instead I have to change my system over to use /dev/mapper/snapshotted-hda or some such nonsense. Similarly, we are unable to manage all block devices using the same administration interface. There is no higher-level RAID integrated with the volume manager; instead it is a separate subsystem that fights a lot with LVM. Partition support is hopelessly misfactored and broken. It goes on and on. Nothing unfixable, but lots of unfixed brokenness. Compared to this mess, Sun's massive layering violations seem like a breath of fresh air.

But the thing to do is fix our broken implementation of layering, not be fooled into thinking that layers are bad. What is bad is exactly what the author here claims: having no powerful capability to cross layer boundaries, so that applications could see a simple, powerful model instead of the current situation, where one's face is constantly rubbed in the minutiae of layering administrivia. ZFS actually has layering; it just bypasses some traditional Unix subsystems and takes care of the functionality itself. But it is wrong to conclude that this must therefore be the optimal approach just because it improves on the mess that preceded it. If ZFS's internal interfaces are worth using, then they are worth using as core interfaces, not ZFS-only interfaces. Translated into Linux terms, the implication is that it is high time to get busy and rectify some of the serious deficiencies in our storage model. Not by mashing all the layers together, but by teaching them to get along more efficiently and powerfully, and not be so layered that important things don't even work.

Note: perhaps the biggest design distinction between Linux and other Unixen is that, internally, Linux is all just one big flat function space where anything can call anything else and share any data. This is said to be a reason why Linux is more efficient than, say, the Mach kernel with its microkernel layering. If being one big hairball of functions is good for memory management, VFS, scheduling and so on, then why is it not also good for volume management? I don't know the answer to this, but I do know that we have plenty of bogus layering in our storage stack that has really slowed progress in recent years and needs a good dunging out. Any nonbogus layering can stay.
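
Phillips's complaint about managed volumes living in their own name space is easy to see with stock LVM2. A sketch of the round trip (the volume group vg0 and the volume names are hypothetical, and the commands need root plus a real volume group, so treat this as an illustration rather than a recipe):

```shell
# Snapshot an LVM logical volume. Nothing happens "in place": the
# snapshot appears as a brand-new device node under /dev/mapper.
lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home

# The origin volume is /dev/vg0/home (alias /dev/mapper/vg0-home),
# but the snapshot must be addressed by its own mangled name, with
# hyphens inside the LV name doubled by device-mapper:
mount -o ro /dev/mapper/vg0-home--snap /mnt/snap

# Plain partitions such as /dev/sda1 are outside this interface
# entirely; they must be migrated into LVM before any of it applies.
```

There is no way to say "/dev/vg0/home, you are now snapshotted" and keep using the old name; every consumer of the snapshot has to learn the new path, which is the administrivia being complained about.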

Re:Hey! (1, Informative)

Goaway (82658) | more than 7 years ago | (#19005175)

You have no clue at all what this article is about.

Merit (2, Informative)

ez76 (322080) | more than 7 years ago | (#19004463)

There is some merit to what Siracusa is saying, at least on gaming and multimedia fronts.

Windows was a hamstrung performer for graphics until NT 4.0's rearchitecture [microsoft.com], which placed key portions of the OS (including 3rd-party graphics drivers) at a much lower level.

Re:Merit (2, Insightful)

pionzypher (886253) | more than 7 years ago | (#19004581)

Agreed, but is that not also its Achilles heel? Kernel-space drivers can take down the whole system, where userland drivers cannot.

Re:Merit (4, Insightful)

FooAtWFU (699187) | more than 7 years ago | (#19004801)

Indeed. Whatever downsides layers have, they keep things sane. If you're going to make a mess of things, at least with layers you have an organized mess. There's a reason that Linux is more secure than Windows.

Re:Merit (1)

26199 (577806) | more than 7 years ago | (#19004681)

...and then Vista moved them back?

Re:Merit (1)

maxume (22995) | more than 7 years ago | (#19004981)

Hardware has limped its way to being a little bit faster in the meantime.

Hard to dis (0, Insightful)

Anonymous Coward | more than 7 years ago | (#19004773)

It's really hard to argue with the claim that Linux is a fundamentally flawed failure of an operating system. It's a nice free tech toy, sure, but when it comes to being an accepted and realistic product, there are a great many reasons to look elsewhere.

Efforts like Ubuntu, while admirable, are really just polishing a turd. No matter how much paint you slap on it, its basic nature will not change.

Someone has to come up with an alternative to Linus's "work on the kernel and let everything else go to hell" development strategy. Red Hat or SUSE could have done so much, but ultimately failed once they started pandering to "the community".

Re:Hard to dis (0)

mrsteveman1 (1010381) | more than 7 years ago | (#19005141)

Yeah, you're right, Linux is just not ready for anything serious. Surely if I were looking for a mainframe system for a government agency or perhaps a bank, I would naturally look to IBM. What's that? IBM runs Linux on its mainframes?

Surely that's just complicated tech stuff, it'll never get used in a home device.....er wait....Linksys......TiVo........hmmmmm

By the way, Ubuntu is crap and will be forever if they keep wasting time on worthless parts of the system, like GNOME, or PS3 integration (it's in the bug tracker). Red Hat has always been crap, but SUSE is the closest we have ever come to a finished, usable Linux system, and it's relatively new in Novell's hands. By the time Vista is capable of running itself without screwing up randomly (I've used it extensively), SUSE will have matured to the point that it competes with Vista for all but the most niche uses, like games (which are worthless anyway).

games are useless until... (1, Interesting)

Anonymous Coward | more than 7 years ago | (#19005313)

...you look at the grease that makes the world go round, and that is money, and games make lotsa money and garner a huge interest with hundreds of millions of people. Moolah, loot, it's there for the taking.

Personally, I don't game, but it would be naive of me to not notice how much interest there is, and how much it has pushed new tech like advanced video cards, new processors like the cell, etc.

In fact, outside the corporate desktop with its emphasis on business apps, it's games that drive all the other desktops; they are part of the holy triumvirate (games, messaging, and media playback) that home PCs get used for a lot.

Granted, we have consoles for games, but I don't think games on the PC are quite dead yet.

Re:Hard to dis (4, Informative)

init100 (915886) | more than 7 years ago | (#19005291)

It's a nice free tech toy, sure, but when it comes to being an accepted and realistic product, there are a great many reasons to look elsewhere.

You're right, that's why nobody is using Linux for real systems [top500.org] .</sarcasm>

Re:Merit (0)

Anonymous Coward | more than 7 years ago | (#19005379)

Windows was a hamstrung peformer for graphics until NT 4.0 saw rearchitecture which placed key portions of the OS (including 3rd-party graphics drivers) at a much lower level.
...and there it became a source of BSODs previously not possible under the old-fashioned "non-thinking-across-layers" thinking. Huzzah!

authoritative, top-down organization (2, Interesting)

catbutt (469582) | more than 7 years ago | (#19004465)

This is like comparing a monarchy with anarchy without acknowledging that there are in-between solutions with advantages of their own, democracy (and representative democracy) being one example.

Not saying the linux development community should be a democracy with everything voted on or whatnot, just saying that there may be creative approaches that have yet to be explored. You'd think smart people with a penchant for game theory would be working on it.

Food for thought.

Democracy Sucks. (3, Interesting)

LibertineR (591918) | more than 7 years ago | (#19004603)

Which is why America is a Representative Republic and NOT a Democracy.

With democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a republican form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.

The Linux Development community needs representative decision making, there are too many voters, hence, almost no direction or real progress towards a cohesive goal. Nothing will change without true leadership, and sadly, accountability.

You can't measure progress without accountability for failure. Socialism has not worked in ANY form, and it won't work for Linux either.

Re:Democracy Sucks. (1)

aardvarkjoe (156801) | more than 7 years ago | (#19004805)

The Linux Development community needs representative decision making, there are too many voters, hence, almost no direction or real progress towards a cohesive goal.

Your argument assumes that the "Linux Development community" (whatever that is) has, needs, or wants a common goal. There has been project after project that was supposed to unify the "Linux community" or "open source community," but historically every single one has fallen apart when it became obvious that the majority of the people the project tried to represent didn't care about the same goals or ideals.

Re:Democracy Sucks. (1)

catbutt (469582) | more than 7 years ago | (#19005401)

That is why I would suggest consensus building tools. It is true that not everyone has the same goals. But some would be willing to cede one thing they slightly care about if they could get something they cared a lot more about. When looked at that way, I'd be willing to bet it is a lot more cohesive than you may think.

Re:Democracy Sucks. (2, Interesting)

poopdeville (841677) | more than 7 years ago | (#19004807)

Which is why America is a Representative Republic and NOT a Democracy.
With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one.


Yes, a tyranny of the minority is clearly better.

Hint: The only correct opinion regarding the state is the will of its subjects.

Re:Democracy Sucks. (1)

catbutt (469582) | more than 7 years ago | (#19005251)

The only correct opinion regarding the state is the will of its subjects.
But measuring that will is harder than you may think. For instance, when there are more than 3 options, plurality voting (i.e. selecting the one that gets the most votes) is completely broken, as it unfairly rewards the choice that is most different from the other choices (that is, it is subject to vote splitting).

And as for what the previous poster called "tyranny of the majority": typical voting weighs everyone's vote equally, which doesn't work well where, for instance, a slight minority feels very strongly about something and a slight majority barely cares. Representative democracies mitigate this by allowing a higher degree of negotiation ("OK, I will agree to vote your way on this if you will vote my way on that").

Re:Democracy Sucks. (1)

Stewie241 (1035724) | more than 7 years ago | (#19005439)

And... hopefully... the minority can make far-sighted, informed decisions to meet the ideals that the subjects want. I might ultimately want X, but in the meantime I might make an earlier decision that makes it impossible for me to get to X, because I don't necessarily know any better. The theory, at least, is that intelligent, informed leaders can understand the direction the subjects want to go and make the right decisions to get there.

Same reason why IT departments ultimately (should) make technology decisions. Management wants X. They may want some other technology in the meantime, but must rely on IT to direct them as to whether or not they can adopt the technology and still get X.

Re:Democracy Sucks. (3, Insightful)

Cyberax (705495) | more than 7 years ago | (#19004829)

Yes, and the dollar is not a currency, it's a banknote.

Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.

Re:Democracy Sucks. (1)

Grant_Watson (312705) | more than 7 years ago | (#19004919)

Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.

There's a lot of overlap there, but a republic can include a number of checks against the will of the people, while a true democracy, pretty much by definition, doesn't.

Re:Democracy Sucks. (1)

umeboshi (196301) | more than 7 years ago | (#19005183)

Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.
Not really; the representation could be divvied up among the 100 most influential families. The progeny of these families would inherit the Senate, each family representing the geographic region it controls.
Republics have their origins in fascism, and served as a tool to help unify local rulers into a larger, more cohesive nation.

Re:Democracy Sucks. (4, Insightful)

cyber-vandal (148830) | more than 7 years ago | (#19004999)

Socialism worked pretty well in the democratic western countries; that's why people aren't dying of cholera/typhoid/starvation in slums anymore.

Re:Democracy Sucks. (1)

catbutt (469582) | more than 7 years ago | (#19005091)

America's form of democracy is certainly representative, which is probably the only practical solution when you've got more than a dozen or so people. I'd argue it is still a democracy, if an imperfect one.

If (in my little fantasy world) the constitution had been written with input from modern-day game theorists and election theorists, I think it could be massively improved. For example, our destructive two-party system is a simple (and unnecessary) byproduct of plurality voting. (example: http://karmatics.com/voting/twoparty.html [karmatics.com] )

Re:Democracy Sucks. (2, Insightful)

Kjella (173770) | more than 7 years ago | (#19005117)

Which is why America is a Representative Republic and NOT a Democracy.
With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a Republic form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.


Says the only two-party state I know of. Whichever party has 52% this term screws over the other 48% without flinching. If you wanted negotiation, you should look to Europe where we have many smaller parties, shifting coalitions trying to match the will of the people on a case by case basis, not just heads or tails every five years or so.

The Linux Development community needs representative decision making, there are too many voters, hence, almost no direction or real progress towards a cohesive goal. Nothing will change without true leadership, and sadly, accountability.

You can't measure progress without accountability for failure.


You can't rule volunteers by force or majority. 99% of the Linux developers may agree, but I can refuse to go in that direction, or even work to pull the project in a different direction. You can't hold people accountable unless you can control what they are doing.

Socialism has not worked in ANY form, and it won't work for Linux either.

Socialism hasn't worked for any finite resource. If in a socialist country everybody refused to grow food, people would starve. If everyone refused to develop Linux, it would simply come to a halt in its current condition, and everyone would still have "as much" Linux as they want. That was the downfall of communism: they had to force people to keep the wheels turning, but Linux doesn't.

If by socialism you mean that "everyone contributes what they want", then it seems to me it's working quite nicely already. Depending on the metrics you use, maybe not in the lead, but certainly better than many other OSs from the 70s, 80s and 90s. So, well, if you say it can't work and I see that it does work, I tend to go with reality.

Re:Democracy Sucks. (0)

Anonymous Coward | more than 7 years ago | (#19005189)

Which is why America is a Representative Republic and NOT a Democracy.
Really now? I thought it was an aristocracy.

I think you are almost right, except for one difference: in a society, the majority of people participate in the system mainly out of their own interest. In the open source community, most of the committers are willing to coöperate and make decisions based on consensus. So tyranny is really not something we can expect there. And let's not forget that each individual has a lower IQ in the open source commune than in society as a whole.

The point I agree on is that consensus is not always the best way to be productive. You might even say that the open source movement is still at an early stage. This is where fusion with commercialism comes into play. You can have leading people who set the frameworks and point the noses in one direction for a given project. But you will always need the developers as the main input on how to fill it in. They are the ones who bring the user feedback, right from the start and after version 1.0, to make truly successful apps.

Socialism really hasn't got anything to do with it, because nobody's coding for food stamps here.

Re:authoritative, top-down organization (4, Insightful)

What Is Dot (792062) | more than 7 years ago | (#19004655)

I totally agree. I think the main problem with Linux based systems (Fedora, Ubuntu, etc.) is that there are so many of them. Diversity is wonderful for free speech, but in the open source community, we have 100 solutions for every 1 problem.
The best solution would be for the Linux Kernel project to say, "Open source developers can do as they please, but we here at the Kernel project encourage developers to contribute to THESE specific projects: Gnome, Open Office, etc..."
The open source community is massive, but development will take an eternity until a majority of the community starts to support ONE software solution over it's alternatives.

Re:authoritative, top-down organization (1)

Daniel Phillips (238627) | more than 7 years ago | (#19005205)

The best solution would be for the Linux Kernel project to say, "Open source developers can do as they please, but we here at the Kernel project encourage developers to contribute to THESE specific projects: Gnome, Open Office, etc...

That is not going to happen, but if it did it would not include Gnome [gnome.org] .

Re:authoritative, top-down organization (0)

Anonymous Coward | more than 7 years ago | (#19005341)

Yes, and there are too many political parties as well. If we only had one to choose from, things would be so much better.

In fact, on your argument, Linux is irrelevant, we should all just be using Windows anyway.

Oh, and "it's" doesn't need the apostrophe there.

Idiot.

What's ZFS? (1)

kiyoshilionz (977589) | more than 7 years ago | (#19004487)

If someone could elaborate on what ZFS is, I believe our discussion may prove to stay more on-topic.

Re:What's ZFS? (1)

cyberkahn (398201) | more than 7 years ago | (#19004547)

Here you go ZFS [wikipedia.org] or here [sun.com] .

Re:What's ZFS? (0)

SadGeekHermit (1077125) | more than 7 years ago | (#19004553)

Here's a wikipedia article:

http://en.wikipedia.org/wiki/ZFS [wikipedia.org]

It's basically a filesystem in use by Solaris. I don't know why anybody cares about it, we've got Reiser and ext3, right? Surely adding yet another filesystem to Linux isn't worth mucking up the kernel to support it...

Feh. People are nuts. It's ok, I love watching 'em fight. I've got my lobster fried rice and diet coke right here! Carry on!

Re:What's ZFS? (2, Informative)

Anonymous Coward | more than 7 years ago | (#19004635)

It has some really nice features that are either not in Linux filesystems or not well implemented in Linux filesystems. It's supported by Solaris, FreeBSD, OSX, and possibly some other operating systems, so it'd be handy if it also worked natively in Linux. It could be like FAT32 for people who need to share data between OSes and don't need Windows. Except unlike FAT, ZFS is actually well designed and has "modern" features.

Re:What's ZFS? (2, Insightful)

jonnythan (79727) | more than 7 years ago | (#19004727)

I think you need to read the article you linked to, because ZFS is very very different from ReiserFS and ext3.

Re:What's ZFS? (0)

Anonymous Coward | more than 7 years ago | (#19004967)

Yeah, but ZFS developers don't kill people.

Re:What's ZFS? (4, Informative)

pedantic bore (740196) | more than 7 years ago | (#19004705)

I'll elaborate (slightly) about ZFS if someone else will tell me who John Siracusa is and why I should care what he writes... I couldn't figure that out from TFA.

ZFS is a file system developed by Sun over the past several years. The important thing, in this context, is that the ZFS design philosophy (never mind the actual design, which isn't what this discussion is about) differs from ordinary file system design. Most file systems make strong assumptions about the reliability of the underlying block storage facility: there's some gizmo down there, whether a disk (for itsy-bitsy systems), a RAID set (for not-so-bitsy systems), or a SAN, that reliably stores and retrieves blocks with reasonable performance. ZFS doesn't do this. It manages many details of the storage layers itself: it does RAID its own way (to get around problems that conventional RAID doesn't solve), and does volume management as well.

From the point of view of a UNIX/Linux file system person, this seems very weird. However, these ideas are not really new or revolutionary (there are new things in ZFS, but this philosophy isn't one of them). It pretty much describes how network storage vendors (NetApp, EMC, etc.) have been building things all along.
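
For contrast, the integrated administration model described above can be sketched with ZFS's two commands, zpool and zfs (the pool name tank and the Solaris-style disk names are hypothetical; this needs a ZFS-capable system with dedicated disks, so it is an illustration, not a recipe):

```shell
# One command builds the whole stack: RAID (here RAID-Z), volume
# management, and checksummed block storage, with no separate
# md/LVM-style layers to configure.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

# Filesystems are cheap administrative objects inside the pool:
# no partitioning, no mkfs, no fstab entry.
zfs create tank/home
zfs set compression=on tank/home

# Snapshots address the filesystem by its own name, in place.
zfs snapshot tank/home@before-upgrade
```

Whether this counts as a "layering violation" or simply better-factored layering is exactly the dispute in the article.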

Re:What's ZFS? (0)

Anonymous Coward | more than 7 years ago | (#19005061)

Quoted from Sun Microsystems' page on ZFS.

"ZFS--the last word in file systems.

The breakthrough file system in Solaris 10 delivers virtually unlimited capacity, provable data integrity, and near-zero administration."

The idea, it seems to me, is exactly what Linux needs. Linux, if you can excuse the anthropomorphization, is a very unapproachable operating system for most people. The real question is whether the developers are keeping in mind the ideas Linux was founded on. Are they going to make sure that it is free and that it will stay free? Will the source stay open? Will it be easily configurable? Lastly, will it really be able to do all that they promise? It sure seems like a step in the right direction. What do you think?

Linux discipline (5, Interesting)

stevelinton (4044) | more than 7 years ago | (#19004527)

Personally, I think the Linux kernel manages these issues quite well, if (by conventional standards) rather inefficiently.

The practice, as I see it, is: the current rules (layering, etc.) are enforced rigorously (at least in Linus's tree), but radical rewrites of the rules take place relatively often.

So if ZFS really does achieve wonderful things by violating the current layering, it WON'T be accepted into the Linux kernel; but if Linus can be convinced (via an appropriate chain of lieutenants, usually) that the layering really is an obstacle to achieving these things, we might see a completely new layering appear in 2.6.25 or thereabouts, into which ZFS can fit. The inefficiency comes from the number of substantial pieces of work that get dropped because they don't fit in, or were misconceived. A more economically rational system would try to kill them sooner. Inefficiency also arises from the fact that changing the filesystem layering would require every existing filesystem to be rewritten. Linux is notoriously unfazed by this, but in the commercial world I suspect this would be too hard to swallow, and you'd end up with all your filesystems fitting into the model except one, whence come bugs and code cruft.

Re:Linux discipline (1)

LuckyStarr (12445) | more than 7 years ago | (#19004869)

Kudos. This is the first sane comment I've read thus far. (Please mod up!)

Re:Linux discipline (2, Informative)

Elektroschock (659467) | more than 7 years ago | (#19004907)

"Pawel Jakub Dawidek has ported and committed ZFS to FreeBSD for inclusion in FreeBSD 7.0, due to be released in 2007" (wikipedia)

Total bullshit (5, Interesting)

Werrismys (764601) | more than 7 years ago | (#19004531)

Linux will "support gaming" once games are supported for Linux. Linux has OpenGL and OpenAL; all the illusory walls are market-made. Linux is a platform to build on without the fear of being obsolete in 2 years. DOS games nowadays run on DOSBox, as do early Windows games. Even XP needs tweaks to run Win9x games. How is targeting a moving sucky platform preferable to one that is open? Easy. Games sell for 6 months, tops. You get the initial sales, you get the money. After that, it's tough shit if it won't work after the next Windows Update(tm). I have used Linux since 1994, but I work in the IT industry. I am constantly amazed by the amount of BULLSHIT the Windows folks put up with. For weird quirks, "shit happens" is the most common reply.

Re:Total bullshit (0)

Anonymous Coward | more than 7 years ago | (#19004695)

I'm sort of a fan of old games. For new games, I pretty much use a console. Old games, on computer? Sure. Right now I'm playing Civilization: Call to Power on my Vista laptop. Works perfectly. Better than it did on 98 and 2000, interestingly, since it needed to be patched up, but whatever was a problem then isn't a problem in Vista.

I think I can explain (2, Insightful)

Toby_Tyke (797359) | more than 7 years ago | (#19004723)

How is targeting a moving sucky platform preferable to one that is open?

The moving sucky one has ninety plus percent of the home desktop market. Linux has less than one percent, and I've never seen any credible figures suggesting otherwise. Why target a tiny niche market when you can target a huge one?

And bear in mind, the proportion of Linux users who are serious about gaming and do not have access to a Windows machine is probably one percent. So even if you target Windows, ninety-nine percent of Linux gamers can play your games anyway.

Re:Total bullshit (2, Interesting)

etymxris (121288) | more than 7 years ago | (#19004741)

Tribes 2 didn't fare well through the changes to threading in libc. Exporting the kernel version as 2.4 seemed to work at one point, IIRC, but last I tried I couldn't get it working at all. It's not true that a binary blob (which most games are) will work perfectly through changes to the underlying OS.

Re:Total bullshit (1)

sloanster (213766) | more than 7 years ago | (#19004775)

The OP is spot on - I've been playing 3D FPSes for years as a Linux user, and in my experience Linux handles gaming nicely. The good performance of the native Linux games is ample evidence of that.

The Quake 3 Arena I bought in 1999 still runs like a champ on my current Linux desktop running SuSE 10.2. Other native Linux games that run nicely are Doom 3, Quake 4, UT2004, RtCW, ET, etc.

The "barriers" to Linux gaming are not technical at all; they are political, if they exist at all.

Re:Total bullshit (0)

Anonymous Coward | more than 7 years ago | (#19004893)

I agree. I'm using relatively old hardware (9200SE Radeon for graphics), and still Linux plays every 3D game I can throw at it in a playable manner (my favorite being ET; sure, it's only 800x600, but I find that's big enough, and the FPS is almost always better than the lag). In fact, with the 7.0 Xorg graphics, my FPS has significantly increased. Linux is a great gaming system; with ET and Cube (and its successor), how can anyone get bored? OK, so maybe some games still need to get more popular, but popularity is the only downside Linux gaming has. Still, both ET and Cube (I think) work in Windows, so it's not that big a deal. Personally, Linux is the best OS for anyone who wants to play games on semi-dated, low-cost hardware.

Re:Total bullshit (2, Insightful)

Sj0 (472011) | more than 7 years ago | (#19004997)

One thing I noticed from your post is that the Windows versions of all those games still run too. I wonder how much of the problem is changing versions of Windows, and how much is just hackish code some developers write?

Re:Total bullshit (0)

westlake (615356) | more than 7 years ago | (#19005009)

The Quake 3 Arena I bought in 1999 still runs like a champ on my current Linux desktop running SuSE 10.2. Other native Linux games that run nicely are Doom 3, Quake 4, UT2004, RtCW, ET, etc.

Round up the usual suspects. If a commercial game runs on Linux, it will almost certainly be a shooter, an iD release and something to be found in the bargain bin of Windows PC gaming.

Re:Total bullshit (1)

maxume (22995) | more than 7 years ago | (#19005237)

They're financial. It would be a bit of a trick to release a game while convincing people that you weren't going to support it, so released games need support, which is an ongoing and somewhat unpredictable (for Linux as a platform; just doing one distro would be quite a bit simpler) cost.

Re:Total bullshit (4, Informative)

Jeff DeMaagd (2015) | more than 7 years ago | (#19005063)

Do you have a copy of StarOffice from the mid-to-late 90's? Try running that in Linux now. Do you have a copy of MetroX from say, 1998? Try running that in Linux now. Are you still using the original Linux binaries for any games released in the late 90's?

I'm still using a copy of AutoCAD released in 1995 for the Windows 3.1 Win32S API, and it works fine in Windows 2000 and Windows XP except for that it's got the old 8.3 filename limitation. I am still using WordPerfect Suite 8, the current version is 13, I think. I know someone that is still using Corel Draw 7, the current version is 13. All these programs still work fine in XP/2000, and I think that is a splendid record for binaries that were unpatched between Windows updates.

The DirectX architecture has changed between the 9X and the NT lines, but otherwise the legacy APIs are generally well preserved, allowing very complex software to work without a patch.

Welcome To Reality Open Source (2, Insightful)

Anonymous Coward | more than 7 years ago | (#19004535)

Linux and other open source projects are getting a harsh lesson in what it is like to ship consumer grade software products. No more RTFM! No more 'did you submit a bug report???' No more this bug/problem is not our fault since we don't control such and such library we use.

Project vs Product

Everyone is impressed with how far you've progressed when you are working on a project.

Everyone is pissed off with how much you've left undone when you are working on a product.

Welcome to reality open source developers. Before long you will all be saying "Damn, if I have to work this hard to make a consumer grade software product I might as well be getting paid to do so"

Re:Welcome To Reality Open Source (1)

Agamemnon13 (1097979) | more than 7 years ago | (#19004561)

The developers who said that have all gotten jobs with other companies. The developers who still believe in Open Source principles continue to work diligently on the projects that excite them. If they didn't enjoy it they wouldn't do it.

Re:Welcome To Reality Open Source (1)

secolactico (519805) | more than 7 years ago | (#19005153)

You kinda missed the AC's point (not that I agree with it). They are working on "projects" but not delivering a "product".

Re:Welcome To Reality Open Source (5, Insightful)

howlingmadhowie (943150) | more than 7 years ago | (#19004897)

Linux and other open source projects are getting a harsh lesson in what it is like to ship consumer grade software products.

um, you do know that linux has been the operating system of choice for supercomputers, webservers, special effects production, scientific computing etc. for a number of years now, don't you? because you seem to think that linux, freebsd, openbsd or whatever just suddenly turned up yesterday or something. are you also aware of the fact that a lot of people who write free and open-source software get paid good money to do so?

Re:Welcome To Reality Open Source (0)

Anonymous Coward | more than 7 years ago | (#19005029)

"supercomputers, webservers, special effects production, scientific computing" != "consumer grade"

It's 2007. People shouldn't need extensive training to work their personal computer. Linux will stay stuck with the nerd brigades until the people in charge of its major components realize this.

As you said, it's 2007 (1)

Peaker (72084) | more than 7 years ago | (#19005259)

It's not 1998. Try Ubuntu/Kubuntu out.
You don't need to know anything to get those running.
Their installation is far easier than the Windows installation, and most of the common things people do "just work". Those rare occasions that don't "just work" have very simple step-by-step howtos all over the place.

Re:Welcome To Reality Open Source (1)

Stevecrox (962208) | more than 7 years ago | (#19005333)

You're missing his point; he's talking about the home desktop consumer market, while most of the applications you're describing are in the high-technology market. Those users expect very different things, and that's where Linux fails. Your animator or scientist is going to write a program in a set language; they don't care how it looks or how it works, just that it does work. I remember a statement from a physicist along the lines of "tell me which language to learn and I'll program my simulations in it, just stop changing the languages!" These people have motivations for using it.

The home desktop market has some very different expectations, which his post describes quite well. Linux has a lot of work to do to make a consumer-grade product; it's not insurmountable, but it's stuff the community isn't doing. I'll use the example of the DivX converter and VirtualDub: VirtualDub is a fantastic tool which you can do a lot with, but while geeks may love it, no average user is going to use it. The DivX converter looks pretty, is easy to understand, and is consistent. It's simple things which Linux and open source need to correct, but the impetus seems to be on more features and bug fixes rather than actually fixing the end-user experience (things like load times, consistent application behaviour, wizards, and the 'pretty' factor).

If you still don't understand and want to rebut the point, you don't get it and probably never will. I'm not saying open source is a lost cause, because it isn't, but there are some major usability issues, not requiring a great deal of work, that need to be addressed. Open source needs some artists willing to share their work, it needs people to sit down and write wizards, and it needs testing by average people.

Linux isn't successful on the desktop because (0, Interesting)

Anonymous Coward | more than 7 years ago | (#19004589)

1. Fonts: they are simply not as good as Windows's.
2. Ease of use. Nobody has sat first-time users in front of a Linux desktop and watched them puzzle over what those multiple desktops do, or how to switch between them.
3. Basic styling problems. Needless flickery redraws of desktops. Uneven and asymmetric layouts; huge icons in some places, tiny icons in others. Isometric icons (a classic sign of a programmer drawing an icon instead of an artist drawing icons).
4. Lack of help. I try to save, it fails; where's the link to the help that tells me that this is a security feature and I can only save in some places?
5. I am not interested in your philosophy; assemble me a bundle of software that fits my needs regardless of whether that software fits your philosophy.

If there is one thing I would suggest, get Ubuntu played with by ordinary grandmas so you can see how they get confused. Then get the Firefox guys to look at it, because to me its styling is uneven: sometimes big crayola-friendly styling aimed at kids, sometimes businesslike.
Knoppix, for example: you start it up, look at the "Windo..." icon, and wonder why the fuck they chose such a large font and such a small icon spacing. So big it can't even display the word 'Windows'.

Re:Linux isn't successful on the desktop because (4, Insightful)

peragrin (659227) | more than 7 years ago | (#19004863)

I have spent the last three days teaching someone how to use windows XP when all they used to use was windows 98. Every interface is different. Stop teaching interfaces and start teaching ideas. Stop teaching MSFT word, start teaching word processing. Teach spreadsheets not excel.

I can sit down in front of any computer and begin to figure it out. I wasn't taught Windows; I learned about Windows from Windows. I learned about OS X from OS X, and I figured out how to make a custom KDE setup from KDE.

You want to know what shortcomings I find in them all? They are tied to one group, one development process. I want an OS that has the ease of use of OS X, with the multi-platform binaries of Java and the remote windowing of X. I want to carry my home directory files on an encrypted thumb drive and load up my files, whether the OS is OS X, Linux, Windows, Solaris, Plan 9, or whatever else the future may bring.

We have the knowledge and technology to do that today.

Re:Linux isn't successful on the desktop because (1)

bersl2 (689221) | more than 7 years ago | (#19004977)

Every interface is different. Stop teaching interfaces and start teaching ideas. Stop teaching MSFT word, start teaching word processing. Teach spreadsheets not excel.
Needs repeating.

Re:Linux isn't successful on the desktop because (0)

Anonymous Coward | more than 7 years ago | (#19004965)

Don't forget cut/copy/paste problems.

Also related to #3 is the whole issue of non-native GUIs. I.e. Gnome apps on KDE, etc. And plain X apps are just fucking hideous.

Re:Linux isn't successful on the desktop because (5, Insightful)

Anonymous Coward | more than 7 years ago | (#19005067)

"Ease of use. Nobody has sat first time users in front of a linux desktop and watch them puzzle over what those multiple desktops do, or how to switch between them.......If there is one thing I would suggest, get Ubuntu played with by ordinary grandma's so you can see how they get confused."

Just because your grandma is a little slow (okay, A LOT slow) does not mean all of them are.

My grandmother WAS sat in front of an Ubuntu box for the first time, and after 5 minutes she asked me why her Windows PC did not have desktop switching, as it only makes sense, rather than constantly minimizing countless windows. Since she already had Firefox on her PC, there was no great hunt for the big blue "E", aka "the Internet", and after a short explanation about how she, as a user, has her own little piece of the computer called a HOME FOLDER where she can save all her stuff, she was set.

I am so tired of this myth that only people with a Mensa IQ are capable of understanding how to use a non-Windows-based system. Granted, she won't be editing config files or writing code, but how many people outside the IT industry do that on a regular basis?

Mod me insightful (or fraking obvious, take your pick)

Re:Linux isn't successful on the desktop because (0, Troll)

Sj0 (472011) | more than 7 years ago | (#19005097)

You're right. The problem is there aren't nearly enough homosexuals in the linux community. Plenty use Windows, so it's very easy to find some to design visual elements. In linux it's much harder.

Re:Linux isn't successful on the desktop because (0)

Anonymous Coward | more than 7 years ago | (#19005429)

I thought they used macs. Silly me.

Re:Linux isn't successful on the desktop because (1)

fthomas64 (473342) | more than 7 years ago | (#19005115)

Linux isn't successful on the desktop because when people buy computers, they use the OS and Desktop that came with them. Very, very few computers are sold with Linux preinstalled.

It's unfair to say that Linux has failed on the desktop by citing problems that exist in certain window managers or certain applications; if you want to do that, you need to say that "Linux distribution XYZ" has failed...

That said, I don't have the problems you cite using Ubuntu.

I'm not saying that Linux on the desktop is perfect, I just don't think it's that bad.

Re:Linux isn't successful on the desktop because (3, Insightful)

Hairy1 (180056) | more than 7 years ago | (#19005195)

The real reason Linux isn't popular has more to do with marketing than technical issues. People have always pointed to one feature or another where Linux is weak and said that it won't be viable until the feature is there. The simple fact is that Linux is now technically ready for the desktop. It is the marketing which Linux, and more generally open source, needs to perfect.

To address the parent:

1. Fonts are not something I even notice a difference in. I can't imagine anyone making a decision on this basis.

2. Linux is now just as easy to use as Windows for the average user. Many devices will be supported without installation of special drivers, and in many respects this experience is easier than windows. For example, my GPS device plugs straight in and works. To use it under Windows I have to keep installing a driver. Not just once but every time I use it. I don't know why. I don't know how to fix it on Windows.

3. Graphics issues - Desktops like Suse and Ubuntu are well integrated with consistent styles. While there is a broader range of layouts than with Windows, this is not a barrier to adoption.

4. Lack of help. I don't know of any software which has effective help, be it Windows or Linux. Linux has man pages, of course, but that's too technical. I agree that documentation could be better, but popular applications are generally easy to use without detailed help. The lack of local help is not a big factor, and it is mitigated by good online resources such as FAQs and mailing lists.

5. This last one is odd. You want a "bundle of software that fits my needs". Linux may have been inspired by a philosophy, but there is no suggestion that users must share it. The fact is that under Linux you have access to a huge number of applications out of the box. Under Windows you will need to purchase software one piece at a time. I would rather just be able to download a program automatically.

None of these is the real reason why Linux is not popular on the desktop. One real reason is gaming support: one of the primary reasons many of my associates say they still have Windows partitions. If only I could play CS on Linux....

Re:Linux isn't successful on the desktop because (1)

3vi1 (544505) | more than 7 years ago | (#19005203)

1. Either you haven't used GNU/Linux in a couple of years, or your system is misconfigured. The fonts on my systems display every bit as prettily as those on Windows.

2. Sit someone who's never used Windows in front of a Windows box. Same deal. Just because Linux has a lot of features that you don't immediately know how to use does not mean those features are a liability.

3. Flickery redraw of desktops? Have you even seen Linux? I honestly have no idea what you're referring to. My desktop is of the 3D Beryl version... prettier than Vista. As to your other points - almost all icon sizes are configurable, if you'll take the time to look through the options. That's the beauty of Linux - almost everything's configurable.

4. You don't like Linux because it actually enforces security, which will only allow you to save to places where you have access? If you don't know how to use Linux, there's a ton of help on the web. There are even friendly folks (not me) who will assist you, even though you're complaining about something you got for free and obviously haven't read a beginner's guide to Linux.

5. Go fuck yourself. Assemble your own software that fits your needs, or pay money for someone to do it for you. It is not the community's job to pander to your needs. If you don't like something about Linux, learn to code and fix it, or pay someone else to fix it. Microsoft won't give you their source, so if you don't like something about Windows you can just live with it.

Re:Linux isn't successful on the desktop because (0)

Anonymous Coward | more than 7 years ago | (#19005311)

While all your points are debatable, point 2 in particular is false. Check out the Better Desktop project: http://www.betterdesktop.org/ [betterdesktop.org] .

Well, no. (4, Informative)

c0l0 (826165) | more than 7 years ago | (#19004617)

Alternative approaches to implementing subsystems of the Linux kernel are often developed concurrently, in parallel, and there's a process you could compare to Darwinian evolution that decides (in most cases) which one of a given set of workalikes makes it into the mainline tree in the end. That's why the Linux kernel itself incorporates, or tries to adhere to, a UNIX-like philosophy: make a large system consist of small interchangeable parts that work well together and do one task as close to perfectly as possible.
That's why there are so many generic solutions to crucial things - like "md", a subsystem providing RAID levels for any given block device, or LVM, providing volume management for any given block device. Once those parts are in place, you can easily mingle their functions together: md works very nicely on top of LVM, and even vice versa, since all block devices you "treat" with one of LVM's or md's functions/features, again, result in a block device. You can format one of these block devices with a filesystem of your choice (even ZFS would be perfectly possible, I suppose), and then incorporate that filesystem by mounting it wherever you happen to feel like it.
There are other concepts deep down in the kernel's inner workings that closely resemble this pattern of adaptability - for example, the VFS layer, which defines a set of requirements every filesystem has to comply with. This ensures a minimal set of viable functionality for any given filesystem, makes sure those crucial parts of the code are well tested and optimized (since everyone _has_ to use them), and also makes it easier to implement new ideas (or filesystems, in this specific case).

Now, ZFS provides at least two of those already existing and very well working facilities, namely md and LVM, completely on its own. That's what's called "code duplication" (or rather "feature duplication" - I suppose that's more appropriate here), and it's generally known as a bad thing.
I do notice that ZFS happens to be very well engineered, but this somewhat monolithic architecture still carries a risk: suppose a crucial flaw is found somewhere deep down in the complex system ZFS inevitably is - chances are you've got to massively overhaul all of its interconnecting parts.

Suppose a filesystem is developed in the future that's even better than ZFS, or at least better suited to given tasks or workloads - wouldn't it be a shame if it had to implement mirroring, striping, and volume management again on its own?

Take an approach like md and LVM, and that's not even worth wasting a single thought on. The systems are already there, and they're working fantastically (I've been an avid user of md and LVM for years now, and I frankly cannot imagine anything doing these jobs noticeably better). I'd say that this system of interchangeable functional equivalents, and the philosophy of "one tool doing one job", is absolutely ideal for a distributed development model like Linux's.

It seems to have been working since the early nineties. There must be something right about it, I suppose.
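The "everything yields another block device" composability described above can be sketched as a toy model. This is purely illustrative Python (invented class names, not the kernel's actual md/LVM interfaces): every layer consumes block devices and exposes the same block interface, so the layers stack in any order.

```python
# Toy model of stackable block layers. Every layer consumes block
# devices and exposes the same read/write-by-block interface, so
# layers can be composed in any order. Illustrative only -- these
# are not the kernel's real md/LVM APIs.

class BlockDevice:
    def __init__(self, num_blocks):
        self.blocks = [b""] * num_blocks

    def read(self, i):
        return self.blocks[i]

    def write(self, i, data):
        self.blocks[i] = data

class Mirror(BlockDevice):           # stands in for md RAID1
    def __init__(self, devs):
        self.devs = devs

    def read(self, i):
        return self.devs[0].read(i)  # any replica would do

    def write(self, i, data):
        for d in self.devs:          # duplicate to every member
            d.write(i, data)

class Concat(BlockDevice):           # stands in for a linear LVM volume
    def __init__(self, devs, size):
        self.devs, self.size = devs, size

    def _locate(self, i):
        return self.devs[i // self.size], i % self.size

    def read(self, i):
        d, off = self._locate(i)
        return d.read(off)

    def write(self, i, data):
        d, off = self._locate(i)
        d.write(off, data)

# "md on top of LVM": a mirror whose members are concatenated volumes.
left = Concat([BlockDevice(4), BlockDevice(4)], 4)
right = Concat([BlockDevice(4), BlockDevice(4)], 4)
vol = Mirror([left, right])
vol.write(5, b"hello")
print(vol.read(5))                   # b'hello', from either replica
```

The point of the sketch is the closure property: `Mirror` and `Concat` both produce something that is itself a `BlockDevice`, which is what makes "md on LVM" and "LVM on md" equally possible.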

Re:Well, no. (1)

Cyberax (705495) | more than 7 years ago | (#19004859)

But there's a problem - sometimes you need to do something across the layers...

He's right you know (0)

Anonymous Coward | more than 7 years ago | (#19004619)

Linux kernel development is an absolute, shameful ad-hoc mess. This is typified by the fact that the kernel devs are dogmatic about refusing to choose a stable ABI and stick to it.

In short, Linux wants all the 'perks' of a professional enterprise OS without accepting any of the responsibilities (for example, stable interfaces which third-party developers can count on when developing for the next version).

Far too many people take Linux seriously in the professional world, far more than it deserves. Linux has no direction, no goals, no compatibility (I can run Solaris 2.4 binaries on OpenSolaris; you can't even run FC3 binaries on FC4!), and, in all honesty, half the time the fucking kernels don't even work (this is particularly true of the 2.6 series)!

Linux is a pile of crap; sadly it's a case of 'people adopt the worst technologies'; there are far better systems out there (shit, other than osx or windows any other system would be better); but it's linux which has the mindshare

No wonder our IT industry is complete shit.

Re:He's right you know (2, Informative)

diegocgteleline.es (653730) | more than 7 years ago | (#19004851)

you can't even run FC3 binaries on FC4

You can run RHEL3 binaries on RHEL4, however. And you can happily run Linux 1.0 binaries on the latest Linux development snapshot. That's because Linux DOES have a stable ABI: the syscall interface. That's the REAL ABI the Linux kernel has to support, and it's the one that is really guaranteed to be stable. What you think of as an ABI is not that ABI; it's an INTERNAL ABI. Drivers are not "software built on top of the kernel"; they're plugins. And Linux developers do not care about it because Linux is open source: in the open source world you can change source easily, and it usually gets merged into the kernel. Basically, the Linux kernel gets more benefit from an unstable internal ABI that gets changed when needed and that improves all the Linux drivers than it would from a stable internal ABI that only benefits a couple of external OSS drivers and another couple of proprietary, illegal drivers.

Linux has no direction, no goals

That's what happens when you give everybody the freedom to modify your code: everybody extends Linux in unexpected directions, which happen to be the directions people (the professional world) desire, because it's those people (the professional world) who actually develop the features. For example, some people have made Linux scale on machines with way more CPUs [lkml.org] than your beloved Solaris has ever run on, and now other people are adding hard realtime support to the core Linux kernel, which happens to make Linux beat latency records [internetnews.com] on Wall Street servers. It was all unexpected; IT, however, seems to like it.

Re:He's right you know (1)

speculatrix (678524) | more than 7 years ago | (#19005135)


fine piece of A/C trolling, but I'll shoot a few of your points down anyway

1/ You DON'T have to upgrade your kernel if you don't want to. For example, SUSE back-ports bug fixes into the kernel release that each version of SUSE came with; this keeps updates quite safe. You CAN download a later version if you want, or even build your own. Debian is even more conservative.

2/ Linux has goals/direction, but Linux is more than the sum of its parts, so you might have to narrow your focus to the aspects of it which are interesting, and follow changes if you wish. No one is forcing you to upgrade!

Well (2, Insightful)

diegocgteleline.es (653730) | more than 7 years ago | (#19004633)

It's not just Andrew Morton, it's basically every core linux kernel hacker that has spoken on the issue.

It's pretty obvious; I don't think that even the ZFS developers will deny it. They'll just say "it's a layering that was worth breaking".

Re:Well (1)

Metabolife (961249) | more than 7 years ago | (#19004715)

New linux slogan: "Think Oreo"

I think the same issue is hurting Reiser4... (5, Interesting)

IpSo_ (21711) | more than 7 years ago | (#19004643)

Reiser4 introduced us to all sorts of interesting capabilities never before seen in a file system (at the time), but I believe this same "layering violation" attitude pretty much put a stop to any of it getting into the kernel. The Reiser guys were forced to pretty much cripple their file system, feature-wise, if they were to have any hope of getting it included in the kernel.

See Reiser4 Pseudo Files [namesys.com] as one example.

I can understand that in certain cases "layering violations" are bad, but Linux kernel developers don't even seem to be willing to experiment or think outside the box at all.

Both sides have valid arguments... I don't think there is any easy solution, but it would be nice to see more forward thinking in the community.

He's right... (1)

msimm (580077) | more than 7 years ago | (#19004671)

Linux as a CLI is robust and mature. I work every day via ssh or a terminal, and I manage a number of servers this way; it's a pleasure.

But when I look at Linux as a viable desktop alternative for the non-compsci crowd, I tend to cringe. The patchwork that can make Linux so flexible, that *really* puts *you* in charge, is the exact thing that makes Linux so unfriendly. Most people don't want tonnes of choice, not because they're stupid, but because they don't want to spend a lot of time fussing with their computer. They want one way to do things, and they want it to be well thought out and seamless. You still can't get that with a Linux distro. Instead you get choice and/or pieces of the patchwork.

Blah (1)

Peaker (72084) | more than 7 years ago | (#19005217)

Try Ubuntu/Kubuntu.

They don't strip away your choices, but they sure dumb them down, so unless you really want to, you are not aware of having to make any decision.

Linux is ready for the desktop - and it is already on many desktops.
Many people don't use it not because it's not ready, but because they don't know how to burn a CD, how to boot from it, or what "installing an OS" means, and because they are afraid. Not because Linux "is not ready" and any of that nonsense.

An example: speeding up the boot process (1)

greppling (601175) | more than 7 years ago | (#19004739)

I don't know anything about ZFS, but I think his general point may have merit. Consider the problem of speeding up the boot process. This would require interaction between desktop hackers, init hackers, filesystem hackers, etc. Many possible speedups might require layering violations (a desktop application making requests about the desired file layout on the disk, etc.). Due to the technical, social, and political structure of Linux, this is just unlikely to happen (unless a single distro has enough resources to throw at this particular problem). Consider this rant (that may be too strong a word) by KDE developer Lubos Lunak: Why does Linux need defragmenting? [kdedevelopers.org]

ZFS: the last word in file systems (1, Informative)

Anonymous Coward | more than 7 years ago | (#19004747)

From Sun [sun.com] :

"If you're willing to take on the entire software stack, there's a lot of innovation possible."

Jeff Bonwick
Distinguished Engineer
Chief Architect of ZFS
Sun Microsystems, Inc.

Like DragonFly (0)

Anonymous Coward | more than 7 years ago | (#19004755)

You mean like DragonFly BSD, where they're already porting it. More and more I love that project.

Its easier to handle layers mentally (4, Insightful)

krbvroc1 (725200) | more than 7 years ago | (#19004759)

Layers are easier to code, to understand, and to test. Layers/boundaries between software are your friend. To some degree that is why the Internet, based upon a layered network model (TCP on top of IP on top of Ethernet), is so diverse.

Layering is what keeps things manageable. Once you start getting your software tentacles into several layers, you make a mess of things for both yourself and others. It's a trade-off: complexity/speed vs. simplicity/maintainability/interoperability.

That's fine (4, Insightful)

Sycraft-fu (314770) | more than 7 years ago | (#19005071)

But the OSI layers are guidelines that help design things, not rigid levels that must be maintained. They are mixed up ALL the time. As a simple example, see Layer-3 switches. These are devices that simultaneously work at Layer 2 and 3 when it comes to dealing with traffic. They break down the traditional view of a separate router and switch, and they are good at what they do. There's plenty of stuff at the high end that's similar. Many things that are part of the presentation layer are actually done by the application (like encryption in SSH) and so on.

There's nothing wrong with having a layered design philosophy as it can help people decide what their product needs to do, and what it needs to talk to. For example if I am designing an application that works over TCP/IP, I really don't need to worry about anything under layer 4 or 5. However it shouldn't be this rigid thing that each layer must remain separate, and anything that combines them is bad. I don't need to, and shouldn't, take the idea that my app can't do anything that would technically be Layer 6 itself. Likewise in other situations I might find that TCP just doesn't work and I need to use UDP instead, but still have a session which I handle in my own code (games often do this).

Had we treated the OSI model as a hard limit rather than a guiding principle when building the Internet, it probably wouldn't have scaled to the point we have now.

Re:Its easier to handle layers mentally (1)

maxume (22995) | more than 7 years ago | (#19005325)

It's still very important to make sure that you have the right layers.

Authoritarianism (0)

Mazin07 (999269) | more than 7 years ago | (#19004803)

Linux is like sending a herd of Roombas into the Louvre and hoping that statistically, most of the floor is covered. They need somebody to run the whole thing and say stuff like, "USB drivers, get your act together. Now!" "We're using Qt. GNOME, shut up." "This is what you're gonna use for HTML rendering. Tough luck." "You guys have five days to make a decent UI before I firebomb your house."

Re:Authoritarianism (1)

Grant_Watson (312705) | more than 7 years ago | (#19005101)

That's what distros are for-- to make those sorts of decisions.

Linux does not think (4, Insightful)

Mad Quacker (3327) | more than 7 years ago | (#19004815)

Open source software gets better because new people want new features, which they contribute. You can't blame Andrew Morton for disliking what ZFS is going to do; this is just how people work. This is why they say you can't teach an old dog new tricks.

That said, ZFS is one of the coolest things to happen to your files in a long time. The current disk block device model is basically unchanged from the beginning of computing; it is ancient and actually quite stupid. Over the decades, layers keep getting added to it to make it more robust, but really it's a monstrosity. Partitions are dumb, LVM is dumb, disk-block RAID is dumb, monolithic filesystems are dumb. All the current Linux filesystems should be thrown out.

I don't want to care how big my partitions are, what level of parity protection my disks have, or any of that junk. I want to add or remove storage hardware whenever I want, I want my files bit-exact, and I want to choose at will, for each file, the trade-off between speed and protection from hardware failure. Why shouldn't one file be mirrored, the next striped, and the next have parity or double-parity protection? Why can't very, very important files have two or three mirrors?

Judging from the current status of ZFS, however, I think this could be quickly built under GPL 2+ by one or two determined people, and it would involve gutting the Linux filesystems.
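The per-file redundancy wish above can be sketched as a toy policy layer. All names here are invented for illustration (real ZFS approximates the idea per dataset with its `copies` property, but the details below are not ZFS's):

```python
# Toy per-file redundancy policy: each file chooses how many pretend
# disks its data lands on. Disk structure and API are invented.

DISKS = {i: {} for i in range(4)}            # four pretend disks

def store(name, data, mirrors=1):
    # Write `data` under `name` to `mirrors` distinct disks.
    for d in list(DISKS)[:mirrors]:
        DISKS[d][name] = data

def load(name):
    # Any surviving copy will do.
    for contents in DISKS.values():
        if name in contents:
            return contents[name]
    raise FileNotFoundError(name)

store("throwaway.tmp", b"cache", mirrors=1)   # cheap, one copy
store("thesis.tex", b"important", mirrors=3)  # precious, three copies

DISKS[0].clear()                              # simulate losing disk 0
print(load("thesis.tex"))                     # survives: b'important'
```

After the simulated disk loss, the triple-mirrored file is still readable from the remaining disks, while the single-copy file is gone; that is exactly the per-file speed-vs-protection choice the comment asks for.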

On Apple (0)

Anonymous Coward | more than 7 years ago | (#19004825)

" Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux"

ie. Apple

It sounds cool, but I think I like the layers more (2, Interesting)

DaleGlass (1068434) | more than 7 years ago | (#19004887)

Layers might not be ideal, but they're consistent. The filesystem does its part, RAID/LVM does its own, etc.

ZFS seems to want to take over the entire disk subsystem. Why? Is there a reason why it needs its own snapshot capabilities, instead of just using LVM's?

These sorts of things always smell fishy to me, due to a feeling that once you start using it, it locks you in more and more until you're doing it all in this new wonderful way that's incompatible with everything else. Even though it's open source, it's still inconvenient.

This approach reminds me a lot of DJB's software: if you try to get djbdns, you'll also be strongly encouraged to use daemontools. The resulting system is rather unlike anything else, which is a reason why many people avoid DJB's software.

Re:It sounds cool, but I think I like the layers m (4, Informative)

lokedhs (672255) | more than 7 years ago | (#19005307)

ZFS seems to want to take over the entire disk subsystem. Why? Is there a reason why it needs its own snapshot capabilities, instead of just using LVM?
Because there are many things your storage system can do if it has knowledge of the entire stack.

The problem with a "traditional" layered model is that the file system has to assume that the underlying storage device is a single consistent unit of storage, where a single write either succeeds, or it fails (in which case the data you wrote may or may not have been written). This all sounds very good and file systems like ext2 are written based on this assumtion.

However, if the underlying storage system is RAID5, and there is a power loss during the write, the entire stripe can become corrupt (read the Wikipedia article [wikipedia.org] on the subject for more information). The file system can't solve this problem because it has no knowledge about the underlying storage stucture.

ZFS solves this problem in two ways, both of which reuires the storage model to be part of the filesystem:

  1. A physical write never overwrites "live" data on disk. The stripe is written to a new location, and only once it has been completely committed to disk is the old data marked as free.
  2. ZFS uses variable stripe width, so it never has to write larger stripes than necessary. In other words, a large write translates directly into a write to a large stripe on the storage system, while a small write can use a small stripe. This can improve performance by reducing the amount of data written.
There are plenty of other areas where this integration is needed, including snapshotting, but I hope the above explains why the layered model is not always good.
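Point 1 above is essentially copy-on-write with an atomic commit. A minimal sketch of the idea (illustrative Python, not remotely the real ZFS code): updates always go to fresh locations, and only a final pointer flip makes them live, so a crash at any earlier point leaves the old version fully intact.

```python
class CowStore:
    """Toy copy-on-write store: data is never overwritten in place."""

    def __init__(self, data):
        self.blocks = {0: data}  # location -> contents
        self.live = 0            # pointer to the current version
        self.next_loc = 1

    def write(self, data, crash_before_commit=False):
        loc = self.next_loc
        self.next_loc += 1
        self.blocks[loc] = data  # step 1: write to a fresh location
        if crash_before_commit:
            return               # power loss: live pointer never moved
        self.live = loc          # step 2: atomic commit (pointer flip)
        # only now could the old block be marked as free

    def read(self):
        return self.blocks[self.live]

store = CowStore(b"old contents")
store.write(b"half-written", crash_before_commit=True)
print(store.read())   # b'old contents': the crash lost the new write,
                      # but the old data is still consistent
store.write(b"new contents")
print(store.read())   # b'new contents'
```

The single pointer flip is the whole trick: since it is the only mutation of shared state, there is no window in which a reader can observe a half-written stripe.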

holistic vs mess (1, Interesting)

nanosquid (1074949) | more than 7 years ago | (#19004985)

UNIX and Linux design is quite holistic: features are often added at various levels of the system in order to make a whole work out. For example, desktop search support had both user and kernel space components, Beryl/Compiz-style interfaces have triggered changes in Gnome, X11, and the kernel, etc.

UNIX and Linux have been careful about avoiding simplistic designs. ZFS is a simple, obvious answer to a problem: just pack all the functionality into one big codeball and start hacking. Microsoft does a lot of the same thing in Windows, and Apple in OS X. It gives companies great time-to-market and long, impressive feature lists. It also creates a mess. Microsoft, Apple, and Sun each have several iterations under their belts where they start off like that, then the system bloats, and finally it collapses, forcing the company to start over from scratch or abandon the market altogether.

Thanks, we know about the kind of "holistic" that these people are implementing, and we don't want it.

(And, frankly, I think Sun isn't really a UNIX company anymore; their system may still be UNIX compatible, but they stopped following the UNIX philosophy long ago.)

a niche player for ever (1)

baomike (143457) | more than 7 years ago | (#19005001)


Without being able to reach across layers, Linux will never be able to accept viruses/worms like the other well-known OS, and will therefore remain a niche player. If your browser can't infect the kernel, the guys in RU just ain't gonna bother.

troll? (0, Redundant)

buttle2000 (1041826) | more than 7 years ago | (#19005155)

With a name like Siracusa, one might think so.

Huh? (-1, Troll)

oohshiny (998054) | more than 7 years ago | (#19005163)

Does Linux "Fail To Think Across Layers?"


Linux... thinks? Artificial intelligence in the Linux kernel? Where?

ZFS is effective because it crosses the lines set by conventional wisdom.


ZFS is more effective than its layered Linux equivalents? Where has that been shown, other than in Sun press releases and marketing hype?

Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.


Oh, I get it: he is referring to the "break" where companies like Sun periodically have to throw out their systems because they have become too bloated and unwieldy. Yes, Sun is quite familiar with that kind of "break": NeWS, NFS, SunOS, SunView, AWT, ... That must be why Sun keeps getting their butt kicked by open source.

Too bad he can't move to Soviet Russia anymore; they sure achieved a lot of similar "breaks" through an "authoritative, top-down organization", too; maybe North Korea will still take him in. As for me, I prefer the layered and free market approach of Linux.

Anyone read what Andrew Morton actually said? (4, Insightful)

Anonymous Coward | more than 7 years ago | (#19005171)

"I mean, although ZFS is a rampant layering violation and we can do a lot of
  the things in there (without doing it all in the fs!) I don't think we can
  do all of it." http://lkml.org/lkml/2006/6/9/409 [lkml.org]

It sounds like his main point was pointing out problems with the current file system, rather than saying ZFS is bad. I bet he simply thinks they should try to implement a much better file system than ext3 without breaking the current layering scheme. I don't see why this is so bad. Why not try it, and if it fails miserably, ZFS is already here.

I think the author of the article took everything out of context and was just looking for some ammo against Linux. His blog post sucked. He just says the same crap that everyone always says. I'm not saying there are no problems, but I don't see how any of the problems relate to Andrew Morton saying the Linux file systems need to be upgraded/replaced.

Revolution and evolution (2, Insightful)

DragonWriter (970822) | more than 7 years ago | (#19005407)

Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.


Nothing stops an "authoritative, top-down organization" from taking all the open-source work done on Linux, and applying its own methodology to driving it forward; if that's more effective than what everyone else in the Linux community is doing, users will be more interested in adopting what they do with it (and, heck, once the transition occurs, the less-centralized portions of the community will probably follow along and start working on the "Neo-Linux" thus produced.)

It's true that revolutionary, rather than evolutionary, change is probably best driven by a narrow, committed group with a shared vision and the skills to realize it than by a disorganized community. But there is no barrier to that within Linux; and between the occasional revolutionary changes, the evolutionary changes that the community is very good at will still remain important. With open source, you don't have to choose: you can have a top-down narrow group working on revolutionary changes (you can have many of them working on different competing visions of revolutionary change, which, given the risk involved, is a good thing), all while the community at large continues plugging away on evolutionary changes to the base system—and once one of the revolutionary variants attracts attention, the community can begin working on evolutionary improvements to that, too.

so, uh... (0)

Anonymous Coward | more than 7 years ago | (#19005471)

while i was waiting for the year of linux on the desktop, ZFS has actually taken over on the desktop instead?

did i miss something?