
Raid 0: Blessing or hype?

CmdrTaco posted more than 9 years ago | from the more-of-the-subject dept.

Data Storage

Yoeri Lauwers writes "Tweakers.net investigates the matter a bit more thoroughly and decides that AnandTech and StorageReview should think twice before shouting that 'RAID 0 is useless on the desktop.' Tweakers.net's tests illustrate the contrary."

380 comments

RAID 0 (0, Redundant)

haRDon (712926) | more than 9 years ago | (#9912914)

Technically it's not RAID :P

Re:RAID 0 (1)

cipher uk (783998) | more than 9 years ago | (#9912939)

Technically you didn't have to tell us that, because the article does. I'm sure everyone on Slashdot reads the article, so there's no need to repeat it. Hmm...

Re:RAID 0 (0)

Anonymous Coward | more than 9 years ago | (#9913045)

Technically it's not RAID :P

technically, you're a twat.

MY EYES !! ITS SUNDAY FFS ! (1, Funny)

Anonymous Coward | more than 9 years ago | (#9912918)

Re:MY EYES !! ITS SUNDAY FFS ! (-1, Offtopic)

haRDon (712926) | more than 9 years ago | (#9912924)

Mod Parent Up!

Let the powers that be know that we don't like the new colour scheme.

Re:MY EYES !! ITS SUNDAY FFS ! (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#9912959)

mod parent up

and this is not off-topic, where the fuck else can we complain about this motherfucking crappy colour scheme? Or is it that you uptight cock-dribbles just can't stand being wrong?

IF YOU CHANGED THE GODDAMN COLOUR YOU WOULDN'T HAVE TO DEAL WITH THIS SHIT!

Re:MY EYES !! ITS SUNDAY FFS ! (0)

Anonymous Coward | more than 9 years ago | (#9913069)

They are modding you insightful instead of smart grandparent :).

Mod me insightful too, everyone has mod points to throw away apparently hehe.

Re:MY EYES !! ITS SUNDAY FFS ! (2, Informative)

Anonymous Coward | more than 9 years ago | (#9913107)

Just follow this link [slashdot.org]. Same article. Standard colors. It's all in the "it.slashdot.org". Also try it with Apple color scheme [slashdot.org].

I use RAID 0... (3, Informative)

remin8 (791979) | more than 9 years ago | (#9912930)

... for simplicity. It is nice to have one "large" drive (in Windows) instead of spreading all of my files across smaller drives. Useless, it is not! Is it really very practical? I don't think so. I haven't had a disk fail yet, but when one does I will be glad I have backups!

Re:I use RAID 0... (2, Insightful)

Anonymous Coward | more than 9 years ago | (#9912940)

You wouldn't need to use RAID for this. JBOD would be enough.

Re:I use RAID 0... (4, Insightful)

isorox (205688) | more than 9 years ago | (#9912943)

Sure, lose one drive and you lose everything. There are better ways to store everything on one "drive letter"

Re:I use RAID 0... (4, Informative)

fostware (551290) | more than 9 years ago | (#9912948)

Have you tried mount points in Windows? In Disk Manager, right-click on a drive and choose "Change Drive Letter and Paths..." - although it has to be an empty partition when you do this... It's just like linking drives to mount points in *nix.

Re:I use RAID 0... (3, Informative)

Dog-Cow (21281) | more than 9 years ago | (#9913066)

You can link a non-empty partition. You can even link it to a non-empty directory, just like in Unix, and, just like in Unix, it will hide the usual contents of said directory.

Re:I use RAID 0... (1)

Crizp (216129) | more than 9 years ago | (#9913080)

It does not have to be an empty partition; changing drive letters works with data on the partition. On my last reinstall of Windows, changing the drive letter from G: to D: on my 2x60GB ATA133 RAID 0 worked like a charm.

Re:I use RAID 0... (1)

lachlan76 (770870) | more than 9 years ago | (#9912997)

If that's all you want, you should stick with JBOD. It won't be as fast, but if a drive goes, you have a better chance of getting your data back.

it's hype for the most part (1)

p51d007 (656414) | more than 9 years ago | (#9913079)

I put my new mobo on RAID 0, and found that as far as booting is concerned, it wasn't any faster than my 160 GB, 8 MB cache IDE drive. The new SATA RAID drives are 120 GB with 8 MB cache (120+120=240). The increase, if any, that I see is when you are transferring a LARGE number of files sequentially. Other than that, I didn't really see any benefit. If I had to reinstall everything, I'd opt for RAID 1, mirroring.

RAID Cost? (1, Interesting)

Anonymous Coward | more than 9 years ago | (#9913102)

How much does RAID cost, and how come I've never heard of it for home users, just for large institutions?

I want RAID now, it sounds like a good idea, esp. if I get one that has redundancy. Is it expensive?

what i like about RAID-0... (0, Troll)

cipher uk (783998) | more than 9 years ago | (#9912931)

...is that those who aren't too savvy with computers have fun re-installing Windows when they have a RAID array. "Windows XP says I have no hard drive!!" I don't mind fixing these sorts of 'problems' for friends of friends when all it takes is putting a driver on a floppy and pressing F6. I also don't mind telling them I had some 'trouble' because of their hard drive setup and being paid for what actually took me like 5 minutes. So there are definitely some plus points to RAID 0.

Re:what i like about RAID-0... (1)

archen (447353) | more than 9 years ago | (#9913093)

Does XP still require you have a floppy to load RAID drivers? I put a new set of hard drives in my machine and was sort of stuck when it was time to specify additional drivers since my computer doesn't have a floppy. Luckily I pulled one out of my Pentium 133 machine and got it to work. With Intel pushing the "no more floppy" stuff, I'm thinking that I might not be so lucky in 2 years to even have a floppy connector.

Re:what i like about RAID-0... (1)

rikkards (98006) | more than 9 years ago | (#9913127)

I believe anything SATA in general needs the "Hit F6 select Raid Drivers" right now. I have loaded XP on two motherboards with SATA and this was needed for both.

Redundant (0)

Anonymous Coward | more than 9 years ago | (#9912932)

Unlike the topic

Not For Everyone (3, Funny)

SQL Error (16383) | more than 9 years ago | (#9912933)

I'm sure that even here on Slashdot there are some people who aren't running huge multi-threaded database applications on their desktop machines, and for them, RAID-0 probably isn't going to help much.

But for the majority of us normal people who are running huge multi-threaded database applications on their desktop machines, RAID-0 is much nicer than having to manually allocate all of your database extents across your disks. Of course, RAID-10 would be better, but that would involve spending money...

Raid10? (0)

Anonymous Coward | more than 9 years ago | (#9912960)

How much are RAID 0 and RAID 10 supposed to cost? :P

Re:Not For Everyone (5, Funny)

magarity (164372) | more than 9 years ago | (#9912979)

But for the majority of us normal people who are running huge multi-threaded database applications on their desktop machines

Sorry, most slashdotters are NOT using Longhorn yet.

Re:Not For Everyone (1)

ergo98 (9391) | more than 9 years ago | (#9912999)

But for the majority of us normal people who are running huge multi-threaded database applications on their desktop machines, RAID-0 is much nicer than having to manually allocate all of your database extents across your disks. Of course, RAID-10 would be better, but that would involve spending money...

The number of users that have a database on their desktop that puts space pressure on their hard drive (which is now generally in the 120-200GB range) is absolutely minuscule (and more likely the domain of really poor process shops that have no development servers and copy production right to the developers' desktops - there is so much wrong there I won't even start).

Sure, RAID 0 is great for data loss! (0, Troll)

aaamr (203460) | more than 9 years ago | (#9912935)

If all you're looking for is speed, fine... but RAID arrays are typically installed not just for performance, but for redundancy/data protection.

RAID 0 may provide the former, but the loss of a single disk = bye bye data.

Re:Sure, RAID 0 is great for data loss! (1)

Amiga Lover (708890) | more than 9 years ago | (#9912954)

If all you're looking for is speed, fine... but RAID arrays are typically installed not just for performance, but for redundancy/data protection.

RAID 0 may provide the former, but the loss of a single disk = bye bye data.


As opposed to having a single disk which, when it goes byebye, preserves your data? I don't think so.

RAID 0 is no different from having a single disk for most practical purposes. If hardware fails, restore from last night's backups. Easy. Where's the problem?

Re:Sure, RAID 0 is great for data loss! (0)

Anonymous Coward | more than 9 years ago | (#9912973)

Either of two drives failing is statistically more likely than a single drive failing. That's the problem with RAID 0.

Re:Sure, RAID 0 is great for data loss! (0)

Anonymous Coward | more than 9 years ago | (#9913005)

Maybe I'm just lucky, but I've had one drive fail in the nine years I've been using computers.

If all that time I could have had a speed boost and only had TWO drive failures, that's fine. It's not a big deal.

Maybe if drives failed left right & centre, constantly, and it was always a struggle just to get any computing work done for the drives failing underneath me then it wouldn't be worth it. This is talking home use of course, and not applications where aiming towards 100% reliability is crucial for business.

Re:Sure, RAID 0 is great for data loss! (0)

Anonymous Coward | more than 9 years ago | (#9912974)

As opposed to having a single disk which, when it goes byebye preserves your data? I don't think so.


Yes, but if a single disk dies, you can often recover most of your data from it. If one disk in a RAID 0 array dies, you've lost it all and it's permanently gone.

Re:Sure, RAID 0 is great for data loss! (1)

ezzzD55J (697465) | more than 9 years ago | (#9912980)

"Raid 0 is no different to having a single disk for most practical purposes. If hardware fails, restore from the last night's backups. easy. Where's the problem?"

Yes, if you make backups it's no big deal, but drive failures will happen, on average, in half the time they would on a single drive...

Re:Sure, RAID 0 is great for data loss! (1)

Crizp (216129) | more than 9 years ago | (#9913101)

..and if you got the "tech savvy" to actually set up / decide to use a RAID0, you should know backups are good, mmmkay. If not, you deserve the lesson you get when the drive goes *poof*

Re:Sure, RAID 0 is great for data loss! (1)

kinema (630983) | more than 9 years ago | (#9912990)

"As opposed to having a single disk which, when it goes byebye preserves your data?
Acually with a two drive RAID-0 set you are twice as likely to loose your data as if you were just running one disk.

Re:Sure, RAID 0 is great for data loss! (2, Interesting)

jackb_guppy (204733) | more than 9 years ago | (#9913062)

The same argument goes for mirrored arrays as well.

Have you ever had a "sick" drive in a mirrored array? While that drive is working, it is giving out bad data that is then being written to both drives during the update/write-back. Then you have corruption on two drives instead of one.

The "safe" setup is RAID 5, but if you lose 2 drives you lose it all...

A service tech lost his balance while replacing a failed drive in a hot RAID 5. He fell backward while squatting to push in the new drive, grabbed another drive in the same array to stop his fall, and pulled it out... A very bad shutdown for a production system, and a very long recovery.

Now service techs are required to sit in a chair when changing a drive below chest height.

Re:Sure, RAID 0 is great for data loss! (-1)

Saven Marek (739395) | more than 9 years ago | (#9913055)

> Raid 0 is no different to having a single disk for most practical
> purposes. If hardware fails, restore from the last night's
> backups. easy. Where's the problem?

One problem, dude. How are you going to know when to make your backups? Drives can fail at any time; it might be one month after purchase, when you've just finished setting up your new RAID and getting settled into it, or it could be in ten years. There's no way of knowing. Making backups each night is a waste of energy and time that could be avoided by staying with one reliable disk instead of an unreliable RAID array; with 2 or 4 drives the unreliability only goes up.

So stick with one disk, and backups when you think you need them, instead of ridiculous schemes that require too much energy and time to run.

Re:Sure, RAID 0 is great for data loss! (1)

moonbender (547943) | more than 9 years ago | (#9913117)

Making backups each night is a waste of energy and time that could be avoided by staying with one reliable disk instead of an unreliable RAID array; with 2 or 4 drives the unreliability only goes up.

Most places that have a significant IT infrastructure make a daily backup anyway, because it's a Really Good Thing to have when your work centers on what you've got on the hard drives. Even the most reliable HD can fail at some inconvenient point, and of course, even if all the hardware and software works flawlessly, there is always the significant risk of human error.

That said, most private PCs I know don't get backed up with any kind of regularity.

Re:Sure, RAID 0 is great for data loss! (1)

Slack3r78 (596506) | more than 9 years ago | (#9913082)

It's a matter of probability. When you add drives to a striped array, the probability that at least one of them fails goes up with each drive.

So yes, as far as losing data when a failure hits is concerned, it's the same both ways. However, the chances that you'll lose that data in the first place are much higher.

Re:Sure, RAID 0 is great for data loss! (4, Funny)

neonstz (79215) | more than 9 years ago | (#9912956)

RAID 0 may provide the former, but the loss of a single disk = bye bye data.

Actually, it's bye bye da or bye bye ta.

Re:Sure, RAID 0 is great for data loss! (2, Interesting)

jackb_guppy (204733) | more than 9 years ago | (#9912996)

Actually closer to: 0:bebedt 1:y y aa

Re:Sure, RAID 0 is great for data loss! (0)

Anonymous Coward | more than 9 years ago | (#9913064)

Actually closer to: 0:bebedt 1:y y aa

You set your stripe size to 1 byte? You're a crazy man, I tell you.
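The split the grandparent wrote out really is what a (silly) 1-byte stripe across two drives produces; a minimal sketch (the `stripe` helper is mine, for illustration only):

```python
def stripe(data: bytes, n_drives: int, stripe_size: int = 1) -> list[bytes]:
    """Distribute `data` across `n_drives` in round-robin stripes (RAID 0 style)."""
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % n_drives] += data[i:i + stripe_size]
    return [bytes(d) for d in drives]

d0, d1 = stripe(b"bye bye data", 2)
print(d0)  # b'bebedt'
print(d1)  # b'y y aa'
```

Real controllers use stripe sizes in the tens of kilobytes, which is why sequential reads are the workload that benefits: both drives stream their halves in parallel.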

Re:Sure, RAID 0 is great for data loss! (1)

azaris (699901) | more than 9 years ago | (#9912977)

If all you're looking for is speed, fine... but RAID artrays are typically installed not just for performance, but redundancy/data protection.

Regular backups and off-site storage are installed for data protection. Even with RAID 5 if IT hits the fan and you have no backups, you're more than likely screwed.

Re:Sure, RAID 0 is great for data loss! (1)

Crizp (216129) | more than 9 years ago | (#9913118)

It's so much fun when you have a drive failure and replace the faulty drive, and while the array is being rebuilt, a second disk goes.

This happened TWICE in a year at work.

Re:Sure, RAID 0 is great for data loss! (2, Insightful)

i23098 (723616) | more than 9 years ago | (#9912981)

What if you have one large disk? Loss of a single disk = bye bye data... RAID 0 (or AID 0, since it has no Redundancy ;-) ) is simply for performance and for a single large virtual drive. And the article proves just that. Usually desktop users don't have much critical information on their computers (nothing that can't be saved to an ever-cheaper DVD) and don't mind reinstalling everything every 3 (or more) years. They'll probably switch computers before one of the disks blows...

I've used RAID 0 in the past (4, Interesting)

Anonymous Coward | more than 9 years ago | (#9912936)

I don't care what tests people have done or what benchmarks they're spouting off, RAID 0 works.

I used to have a system which used relatively cheap 5400 RPM drives in a RAID 0 array. There was a quite noticeable difference when not using RAID 0. When using 2 or 4 drives the system was damn fast, even though the drives were individually slow.

I don't even read these articles. I know it makes a difference.

Sexism anyone? (0)

Anonymous Coward | more than 9 years ago | (#9912945)

...use their desktop systems differently than the pretty blonde next door who only uses it to check in on her Hotmail account.

Article can easily be ignored. (0, Troll)

Anonymous Coward | more than 9 years ago | (#9912947)

  1. Tweakers.net has a poor reputation amongst serious people here in the Netherlands for cranking out bullshit.
  2. Tweakers.net articles are in Dutch. This is either a blatant copy-paste or just a cheap trick to get a web-traffic boost.
  3. The reputation, reliability and trustworthiness of the tweakers.net community is about on par with the Gartner group, to put it in Slashdot terms.

- Seth

Not Convinced (1)

SuperJason (726019) | more than 9 years ago | (#9912951)

I don't think that I'll ever be convinced either way. They (not any one specific) have been saying different things for years. Your best bet is to probably just buy a fast hard drive to begin with. It will end up being faster, and more reliable.

Re:Not Convinced (1)

Crizp (216129) | more than 9 years ago | (#9913136)

Have you ever tried (R)AID 0?

It's faster. Maybe not for everyone, but for those who copy big files and have lots of disk access, it's faster.

MORE SILLYNESS (0)

Anonymous Coward | more than 9 years ago | (#9912957)

When will the silliness stop? If you want a huge performance gain, then get a 15k SCSI drive instead of doubling your chance of data loss with RAID 0 and IDE drives! Desktop users don't need this performance. I know of several pros with high-end multimedia setups, and none of them use anything other than separate IDE drives!

Desktop performance. (3, Interesting)

Zorilla (791636) | more than 9 years ago | (#9912963)

My computer is over three years old (P4 1.7 GHz upgraded to 386 MB of RAM from 128) and I've found that the slowest technological advancement seems to be hard drive throughput. This definitely reveals itself because of the fact that games like Doom 3, Far Cry, and Painkiller are all perfectly playable on my computer, but the latter two games take an unbearably long time to load. When I build my next computer, RAID 0 is one of the things I will be looking at, because I absolutely hate waiting more than 5 seconds for a game to load.

(Yes, I'm aware that only 384 MB of RAM is slowing load times via virtual memory swapping as well)

Re:Desktop performance. (0)

Anonymous Coward | more than 9 years ago | (#9912994)

oops, typo, I realize I actually have 384 MB of RAM. Bugs me when I hear people say, "I have 190 MB of RAM," or "my screen is set to 1020x800"

-Zorilla

Re:Desktop performance. (1)

ergo98 (9391) | more than 9 years ago | (#9913059)

This definitely reveals itself because of the fact that games like Doom 3, Far Cry, and Painkiller are all perfectly playable on my computer, but the latter two games take an unbearably long time to load.

Most modern SATA hard drives read at approximately 50-70MB/second -- do you really think this is the reason those games load slowly? It isn't. They generally load slowly because of the use of compressed objects, or maps that need to be rendered into structures in memory: it's far more likely that your CPU is the limiting factor (it's easy enough to turn on the performance monitor and see).

If you haven't tried it, don't knock it. (5, Insightful)

cwm9 (167296) | more than 9 years ago | (#9912964)

A common misconception is that striping beyond 2 drives is "worthless." That simply isn't true: remember that the inside of the drive, close to the spindle, has a transfer rate that is nearly half of what it is on the outside cylinders. By striping 4 drives together, about half the bandwidth is wasted near the FRONT of the drive, but near the tail, it's almost all being used. The effect is that the drive feels uniformly quick no matter what part of it you are reading from!

I personally jumped from a single drive to a 4-drive SATA raid-0 system, composed of 120GB drives from two different manufacturers.

The system screams.

I can't tell you how nice it is to have my computer boot in half the time... how your system feels like you always wished it would feel. You can add all the memory you want, all the processing power you want, but if you can't feed the computer, it's all pointless.

The only thing I wish now was that my system had a faster and/or wider bus that would allow me to take advantage of all the currently unused bandwidth available from the four drives.
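The zone argument above is easy to put into numbers. In this sketch all figures are assumed for illustration (say each drive reads 60 MB/s on its outer tracks and 30 MB/s on its inner ones, with a 150 MB/s bus cap, roughly the point the poster makes about the bus being the next bottleneck):

```python
def stripe_throughput(per_drive_mb_s: float, n_drives: int,
                      bus_limit_mb_s: float = 150.0) -> float:
    """Aggregate sequential read rate of an n-drive RAID 0, capped by the bus."""
    return min(n_drives * per_drive_mb_s, bus_limit_mb_s)

OUTER, INNER = 60.0, 30.0  # assumed zone transfer rates, MB/s
for n in (1, 4):
    print(n, stripe_throughput(OUTER, n), stripe_throughput(INNER, n))
```

A single drive drops from 60 to 30 MB/s as it fills; the 4-drive stripe is bus-limited at 150 MB/s on the outer tracks but still sustains 120 MB/s on the inner ones, which is the "uniformly quick" feel described above.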

Re:If you haven't tried it, don't knock it. (1)

ergo98 (9391) | more than 9 years ago | (#9913024)

"I can't tell you how nice it is to have my computer boot in half the time... how your system feels like you always wished it would feel. You can add all the memory you want, all the processing power you want, but if you can't feed the computer, it's all pointless...The only thing I wish now was that my system had a faster and/or wider bus that would allow me to take advantage of all the currently unused bandwidth available from the four drives."

Translation - "If my system were tremendously faster, then it would justify the risk and cost of my unnecessary 4-drive RAID-0 array! Don't knock it until you've tried some future computer that actually has a use for this bandwidth!"

There are a few problems with your analysis. Firstly, boot time really isn't that important (yes, even if you're using Windows) - Windows Server 2003 booting from a single IDE disk is awaiting your login in about 7 seconds. Not really a critical amount of time.

For virtually all other activity (I'm currently running Visual Studio 2005 and SQL Server 2005, IIS, a variety of web services, Mozilla, Outlook, and a plethora of system services) the hard drives are twiddling their spindles, doing absolutely nothing, and when they do work it's generally sporadic very-small accesses that are affected by random access astronomically more than by throughput.

Re:If you haven't tried it, don't knock it. (4, Informative)

Slack3r78 (596506) | more than 9 years ago | (#9913071)

Here's the whole thing - I *have* tried it. If your workload involves lots of long, sequential reads, it's a great thing. I've personally got 2 machines running drives in RAID 0 as they get used for working with files in the 1.5-2GB range. It makes a difference here.

The whole point of SR and AT's articles, however, is that for most desktop systems, RAID 0 is pretty much a bad idea. You'll see marginal improvement on more random data sets, but you've spent four times as much, and, more importantly in my mind, your probability of losing the array has increased from P to 1-(1-P)^4 (roughly 4P for small P).

So really, I can see some applications where RAID 0 can be useful - I fit one of them. But for most desktop systems, it's not worth the cost. For systems with more than 2 drives anyway, it seems like a patently Bad Idea(TM). You really should've gone with RAID 5 - you'd still have striping, but you wouldn't risk losing everything to a single faulty drive.
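A quick sketch of how array-loss probability grows with drive count. Since any one drive failing kills a RAID 0, the array survives only if every drive survives; the per-drive failure probability `p` below is an assumed illustration figure, not a measured rate:

```python
def array_loss_prob(p: float, n: int) -> float:
    """Probability an n-drive RAID 0 loses data, given per-drive failure prob p."""
    return 1 - (1 - p) ** n

p = 0.03  # assumed per-drive failure probability over some period
print(array_loss_prob(p, 1))            # 0.03
print(round(array_loss_prob(p, 2), 4))  # 0.0591 -- almost exactly 2p
print(round(array_loss_prob(p, 4), 4))  # 0.1147 -- almost exactly 4p
```

For small p the risk grows roughly linearly with the number of drives (n·p), not as p^n, which would be the probability of *all* drives failing at once.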

Theoretical versus Actual (5, Interesting)

ergo98 (9391) | more than 9 years ago | (#9912966)

A common theme, revisited several times, in the article is that the other conclusions were wrong because they used low-load testing.

"A safe conclusion would be that a Business Winstone 2004-benchmark alone is not a good starting point when testing RAID 0 performance. On the contrary: to have some reliable tests, we will need to put heavy loads on the array."

In essence, if my understanding is correct, they're saying that the value of a RAID 0 setup is under constant extreme loads, not the loads created by business applications or games. Isn't this entirely the point of the articles in question - that given the sporadic, generally light load of even power users, RAID 0 is not really that beneficial (as random access plays even more of a part than gross throughput)?

Even under perceived heavy I/O loads, the reality is often that the hard disk is under-used - I occasionally compress videos from miniDV to DVD, and my CPU would need a four or five fold increase in speed to even begin to put pressure on the single 7200 RPM hard disk.

Re:Theoretical versus Actual (1)

he who meows (766234) | more than 9 years ago | (#9913057)

It sort of depends on what you call a heavy I/O load. Compression is usually a matter of CPU time, and it can only write out to disk as fast as it can compress the data. Even then, you're only writing to the disk more or less sequentially, not making a lot of parallel read and write operations. This is a bad example.

Re:Theoretical versus Actual (1)

ergo98 (9391) | more than 9 years ago | (#9913068)

Right, and that's my point - the most commonly given justification for RAID arrays on a desktop machine is video compression (as it's really the only thing that uses huge amounts of data), yet it is significantly more constricted by other limits than it is by disk I/O limits. Other than that, there are few examples of huge data usage on the desktop, apart from contrived examples.

Re:Theoretical versus Actual (1)

Alwin Henseler (640539) | more than 9 years ago | (#9913090)

For applications where it matters (latest 3D games), the performance bottlenecks are CPU, memory and video card. Faster disk I/O helps load a game faster, but does nothing to make it run faster.

So, twice the cost, twice the hardware hassle (cabling, power, physical space), twice the noise, twice the power consumption, and for practical purposes less than twice the performance, twice the chance of drive failure, and when a drive fails, twice the amount of data lost.

Add to that the more difficult configuration, and extra hassle when re-installing an OS: is it worth the trouble?

For all but a few home users: probably not. Maybe that's why many PC's only have a single hdd?

Methodology (4, Insightful)

Jeremy Erwin (2054) | more than 9 years ago | (#9912967)

Tweakers.net conludes

And it's not just our benchmark results that support this view: the majority of Tweakers.net readers who at one time or another tried striping, feel that the overall responsiveness of their computer improved when employing RAID 0.


Of course they do. After all, they've spent extra money and time pimping out their rigs.

Re:Methodology (5, Insightful)

Slack3r78 (596506) | more than 9 years ago | (#9913033)

Yeah, I did an absolute double take when I got to that part. They spend an entire article bashing two of the most methodical sites out there on methodology, and then try to use a completely unscientific poll as backing evidence for their claim? Let alone a poll that's naturally pre-biased toward a particular conclusion. It really puts the validity of the rest of the article into question. If that's acceptable evidence, what other shoddy methods are acceptable to them?

If you've spent the extra money on RAID 0, you're going to believe there's a difference going in. Hell, I've done it myself - I have 2 machines with RAID 0 setups, but that's because they're commonly used for working with multi-gig files in Photoshop - i.e., I actually need the strong sequential speed.

For normal desktop setups, I'd absolutely agree with AT and SR on this one. Unless you're doing massive amounts of large sequential reads/writes, you're just not going to see a difference in speed worth the cost of another drive and the major increase in potential failure and data loss. Remember, the chance of failure goes up with every drive you add to the stripe, which is something a lot of hardcore "tweakers" forget.

Backup your data daily if you are using RAID0 (1)

astellar (675749) | more than 9 years ago | (#9912970)

The more disks you use in a RAID, the more chances you have to lose your data. A daily backup is the only way to save your work. I make my programmers back up their data to our office Samba server.

Performance and reliability (1)

rijrunner (263757) | more than 9 years ago | (#9912975)

Interesting article. But, I am not quite sure that they understand the rationale many people have for not using striping on their desktop.

1) Does it matter if you cut .01 seconds off the time it takes to write out the document you just wrote?

2) Does it matter if you have a disk failure and lose all your data on all partitions in the stripe? Everyone at home makes daily backups.

Re:Performance and reliability (1)

FooAtWFU (699187) | more than 9 years ago | (#9913086)

It doesn't matter too much if you lose all your data when "all your data" means a few dozen megabytes of save-games. If you're using the machine for real work, however...

maybe i missed it. (1)

Muerto (656791) | more than 9 years ago | (#9912976)

I didn't notice anyone state the fact that yes, the reads are faster, but the writes are slower. This is the problem with this type of "RAID": writes take more time than on a single drive. So, you decide what's more important: buying a bunch of IDE drives that will fail, causing you to lose all of your data (multiply your pleasure, multiply your danger), or buying 1 SCSI drive and being done with it.

Re:maybe i missed it. (2, Informative)

ezzzD55J (697465) | more than 9 years ago | (#9913008)

Huh, writes slower on RAID 0? Why on earth would that be? Writes are just as fast as on a single drive with RAID 1, and writes are a bit slower on RAID 4 and RAID 5 due to parity updates, but that's it... writes are not slower on RAID 0.
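The RAID 4/5 parity-update penalty mentioned above comes from XOR parity; a minimal sketch (the block contents are made-up examples):

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte, as RAID 4/5 parity does."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

d0, d1, d2 = b"aaaa", b"bbbb", b"cccc"  # assumed data blocks on three drives
parity = xor_blocks(d0, d1, d2)

# A small write to d1 must read the old data and old parity, then write
# both back -- the read-modify-write penalty behind "a bit slower":
new_d1 = b"BBBB"
parity = xor_blocks(parity, d1, new_d1)  # XOR out old d1, XOR in new d1
d1 = new_d1

# If any one drive dies, the survivors XOR the missing block back:
assert xor_blocks(parity, d0, d2) == d1
```

RAID 0 has no parity block to maintain, which is why its writes stripe at full speed.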

*shudder* (5, Insightful)

LordLucless (582312) | more than 9 years ago | (#9912993)

Just the thought of using RAID-0 makes me shiver. The only people who should use this are people who keep good backups, and like using them. The speed gains are of little use for individuals, and for the professionals or corporations that might actually want the speed-up, the chances of data-loss are too high.

That's not to say there isn't a purpose for RAID-0 - it teaches people how useful backups are. The hard way.

Re:*shudder* (1)

irc.goatse.cx troll (593289) | more than 9 years ago | (#9913099)

Or for people who replace the contents of the drive fast enough that losing everything wouldn't matter. I've, uh, 'heard about' these magical FTP servers where the fact that they have >2TB of disk space doesn't make failure matter, because at a gigabit of connection it will have all the current releases as soon as it's brought back up, and old releases stop being useful for trading after a week.

Re:*shudder* (1)

AVee (557523) | more than 9 years ago | (#9913108)

the chances of data-loss are too high.

Why? Yes, you double the chance of data loss, but if that makes it 'too high', the chance of data loss was pretty high anyway. If you buy somewhat decent disks, the chance of getting 2 disks that will run flawlessly for years is extremely high. But yes, you should not use RAID 0 with disks that have a 10% failure rate - then again, you shouldn't be using those disks anyway...

It works for me... (1)

Anita Coney (648748) | more than 9 years ago | (#9912995)

My RAID-0 drive opens huge files nearly twice as fast. That's useful to me.

E.g., a 488 MB wave file of Velvet Underground's first album opened in 28 seconds on my D drive, but in only 16 seconds on my RAID-0 partition. All the drives are the same, i.e., Maxtor 80gb 7200 drives.

Premiere works a lot faster too.

The only problem is that I have to be extra anal about backing it up. But any incentive to get me to back up my stuff is a good thing, as far as I'm concerned.

Re:It works for me... (0)

Anonymous Coward | more than 9 years ago | (#9913104)

Maxtor drives? Have fun with that. In the field I've seen over 6 fail for one person (!), and personally I've had 4 fail.

Maxtor sucks, period.

Re:It works for me... (1)

Anita Coney (648748) | more than 9 years ago | (#9913150)

I've been using Maxtor drives since 1995. The only one I've had die occurred immediately. I was able to take it back to the store and get a replacement.

Now Western Digital, that's a little different. Newegg had some really good prices on 80 gb WDs, but two of the drives I bought died after about a month. The other two get backed up a lot.

Better Drive Layout (1)

fostware (551290) | more than 9 years ago | (#9912998)

I've always had 2+ drives in my systems.

C: (System) NTFS 100% of Disk0
X: (Swap) Disk1 Part0: 13G FAT32 with 2048MB (min & max) swap file
D: (Storage) Disk1 Part 1 :Rest of Disk1 in NTFS

The idea being that the swap file is at the extreme start of one of the disks. On Server RAID5 it's always the second partition...

part0: Decent size NTFS for System
part1: 3G FAT32 for perm set size pagefile
part2: 20GB NTFS Exchange Store
part3: NTFS rest of the drive for storage

Then again, YMMV.

Raid From Wikipedia (0)

Anonymous Coward | more than 9 years ago | (#9913001)

Raid article from Wikipedia [wikipedia.org]; it is released under the GFDL so it can be reproduced fully or in part anywhere.

--

In computing [wikipedia.org], a Redundant Array of Independent Disks (more commonly known as a RAID array [wikipedia.org] ) is a system of using multiple hard drives [wikipedia.org] for sharing or replicating data [wikipedia.org] among the drives. The benefit of RAID is increased data integrity [wikipedia.org], fault-tolerance [wikipedia.org] and/or performance [wikipedia.org], over using drives singularly. Put more simply, RAID is a way to combine multiple hard drives into one single logical unit. So instead of four different hard drives, the operating system [wikipedia.org] sees only one hard drive. RAID is typically used on server [wikipedia.org] computers, and is usually implemented with identically-sized disk drives. With decreases in hard drive prices and wider availability of RAID options built into motherboard [wikipedia.org] chipsets, RAID is also being found and offered as an option in higher-end end user computers, especially computers dedicated to storage-intensive tasks, such as video and audio editing.

The original RAID specification (which also used the term, inexpensive instead of independent) suggested a number of prototype RAID Levels, or combinations of disks. Each had theoretical advantages and disadvantages. Over the years, different implementations of the RAID concept have appeared. Most differ substantially from the original idealized RAID levels, but the numbered names have remained. This can be confusing, since one implementation of RAID-5, for example, can differ substantially from another. RAID-3 and RAID-4 are often confused and even used interchangeably.

The very definition of RAID has been argued over the years. The use of the term redundant leads many to split hairs over whether RAID-0 is real RAID. Similarly, the change from inexpensive to independent confuses many as to the intended purpose of RAID. There are even some single-disk implementations of the RAID concept! For the purpose of this article, we will say that any system which employs the basic RAID concepts to recombine physical disk space for purposes of reliability or performance is a RAID system.

History

RAID was first patented by IBM [wikipedia.org] in 1978 [wikipedia.org]. In 1988 [wikipedia.org], RAID levels 1 through 5 were formally defined by David A. Patterson [wikipedia.org], Garth A. Gibson [wikipedia.org] and Randy H. Katz [wikipedia.org] in the paper, A Case for Redundant Arrays of Inexpensive Disks (RAID) [cmu.edu] (http://www-2.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf). This was published in the SIGMOD [wikipedia.org] Conference 1988: pp 109-116. The term RAID started with this paper.

It was particularly ground-breaking work in that the concepts are both novel and obvious in retrospect once they have been described. This paper spawned the entire disk array [wikipedia.org] industry.

RAID Implementations
Inexpensive vs. Independent

While the I in RAID now generally means independent, rather than inexpensive, one of the original benefits of RAID was that it did use inexpensive equipment, and this still holds true in many situations, where IDE/ATA [wikipedia.org] disks are used.

More commonly, independent (more expensive) SCSI [wikipedia.org] hard disks are used, although the cost of such disks is now much lower--and much lower than the systems RAID was originally intended to replace.

Hardware vs. Software

RAID can be implemented either in hardware [wikipedia.org] or software [wikipedia.org].

With a software implementation, the operating system [wikipedia.org] manages the disks of the array through the normal drive controller (IDE [wikipedia.org], SCSI [wikipedia.org], Fibre Channel [wikipedia.org] or any other). This option can be slower than hardware RAID, but it does not require the purchase of extra hardware.

A hardware implementation of RAID requires (at a minimum) a special-purpose RAID controller [wikipedia.org] . On the desktop, this may be a PCI [wikipedia.org] expansion card [wikipedia.org], or might be a capability built-in to the motherboard [wikipedia.org]. In larger RAIDs, the controller and disks are usually housed in an external multi-bay enclosure. The disks may be IDE, SCSI, or Fibre Channel while the controller links to the host computer with one or more high-speed SCSI or Fibre Channel connections. This controller handles the management of the disks, and performs parity [wikipedia.org] calculations (needed for many RAID levels). This option tends to provide better performance, and makes operating system support easier. Hardware implementations also typically support hot swapping [wikipedia.org], allowing failed drives to be replaced while the system is running.

Both hardware and software versions may support the use of a hot spare, a preinstalled drive which is used to immediately (and usually automatically) replace a failed drive.

Standard RAID Levels
RAID 0

A RAID 0 Array (also known as a stripe set) splits data evenly across two or more disks with no parity information for redundancy. RAID-0 is normally used to increase performance, although it is also a useful way to create a small number of large virtual disks out of a large number of small ones. Although RAID-0 was not specified in the original RAID paper, an idealized implementation of RAID-0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID-0 implementations with more than two disks are also possible; however, the reliability of a given RAID-0 set is equal to the average reliability of each disk divided by the number of disks in the set. That is, reliability (MTBF [wikipedia.org]) decreases linearly [wikipedia.org] with the number of members - so a set of two disks is half as reliable as a single disk. The reason for this is that the file system [wikipedia.org] is distributed across all disks. When a drive fails the file system cannot cope with such a large loss of data and coherency since the data is striped across all drives. Data can be recovered using special tools; however, it will be incomplete and most likely corrupt.

RAID-0 is useful for setups such as large read-only [wikipedia.org] NFS [wikipedia.org] servers [wikipedia.org] where mounting [wikipedia.org] many disks is time-consuming or impossible and redundancy is irrelevant [wikipedia.org]. Another use is where the number of disks is limited by the operating system [wikipedia.org]. In Windows [wikipedia.org], the number of drive letters is limited to 24, so RAID-0 is a popular way to use more than this many disks. However, since there is no redundancy yet data is shared between drives, hard drives cannot be swapped out, as all disks are interdependent upon each other.

RAID 0 was not one of the original RAID levels.
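The idealized block-level striping described above can be sketched as a toy model (illustrative only; real controllers stripe in configurable chunk sizes, not single blocks):

```python
# Toy model of RAID-0 striping: logical blocks are dealt out
# round-robin across the member disks.
def raid0_locate(logical_block, n_disks):
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % n_disks, logical_block // n_disks

# With two disks, consecutive logical blocks alternate between them,
# which is why sequential transfers can approach twice the throughput:
for block in range(4):
    disk, offset = raid0_locate(block, 2)
    print(f"logical block {block} -> disk {disk}, offset {offset}")
```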

Concatenation (JBOD)

Although a concatenation of disks (sometimes called JBOD, or Just a Bunch of Disks) is not one of the numbered RAID levels, it is a popular method for combining multiple physical disk drives into a single virtual one. As the name implies, disks are merely concatenated [wikipedia.org] together, end to end, so they appear to be a single large disk.

In this sense, concatenation is akin to the reverse of partitioning [wikipedia.org]. Whereas partitioning takes one physical drive and creates two or more logical drives, JBOD uses two or more physical drives to create one logical drive.

In that it consists of an Array of Inexpensive Disks (no redundancy), it can be thought of as a distant relation to RAID. JBOD is sometimes used to turn several odd-sized drives into one useful drive. Therefore, JBOD could use a 3 GB, 15 GB, 5.5 GB, and 12 GB drive to combine into a logical drive at 35.5 GB, arguably more useful than the individual drives separately.

RAID 1

A RAID 1 Array creates an exact copy (or mirror) of all data on two or more disks. This is useful for setups where redundancy [wikipedia.org] is more important than using all the disks' maximum storage [wikipedia.org] capacity [wikipedia.org]. The array can only be as big as the smallest member disk, however. An ideal RAID-1 set contains two disks, which increases reliability by a factor of two over a single disk, but it is possible to have many more than two copies. Since each member can be addressed independently if the other fails, reliability is a linear [wikipedia.org] multiple of the number of members. RAID-1 can also provide enhanced read performance, since many implementations can read from one disk while the other is busy.
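The reliability scaling described above can be illustrated with a simple independence model (the per-disk survival probability is a made-up number, for illustration only):

```python
# A mirror set survives unless *every* copy fails (independent failures assumed).
def mirror_survival(p_disk, n_copies):
    return 1 - (1 - p_disk) ** n_copies

p = 0.97  # hypothetical one-year survival probability of a single disk
print(f"single disk : {mirror_survival(p, 1):.4f}")  # 0.9700
print(f"2-way mirror: {mirror_survival(p, 2):.4f}")  # 0.9991
```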

One common practice is to create an extra mirror of a volume (also known as a Business Continuance Volume or BCV) which is meant to be split from the source RAID set and used independently. In some implementations, these extra mirrors can be split and then incrementally re-established, instead of requiring a complete RAID set rebuild.

Note: A representation of a typical RAID 1 array. Data A1, A2, etc. is spread out across two disks, increasing reliability and speed.

RAID 2

A RAID 2 Array stripes data at the bit [wikipedia.org] (rather than block) level, and uses a Hamming code [wikipedia.org] for error correction [wikipedia.org]. The disks are synchronized by the controller to run in perfect tandem. This is the only original level of RAID that is not currently used.

RAID 3

A RAID 3 Array uses byte [wikipedia.org]-level striping with a dedicated parity [wikipedia.org] disk. RAID-3 is extremely rare in practice. One of the side effects of RAID-3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will by definition be spread across all members of the set and will reside in the same location, so any I/O operation requires activity on every disk.

In our example, below, a request for block A1 would require all three data disks to seek to the beginning and reply with their contents. A simultaneous request for block B1 would have to wait.

Note: A1, B1, etc each represent one data byte

RAID 4

A RAID 4 Array uses block [wikipedia.org]-level striping with a dedicated parity [wikipedia.org] disk. RAID-4 looks similar to RAID 3 except that it stripes at the block, rather than the byte level. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID-4 set can service multiple read requests simultaneously. Network Appliance Corporation [wikipedia.org] uses RAID-4 on their Filer [wikipedia.org] line of NFS [wikipedia.org] servers.

In our example, below, a request for block A1 would be serviced by disk 1. A simultaneous request for block B1 would have to wait, but a request for B2 could be serviced concurrently.

Note: A1, B1, etc each represent one data block

RAID 5

A RAID 5 Array uses block [wikipedia.org]-level striping with parity [wikipedia.org] data distributed across all member disks. RAID-5 is one of the most popular RAID levels, and is frequently used in both hardware and software implementations. Virtually all storage arrays [wikipedia.org] offer RAID-5.

In our example, below, a request for block A1 would be serviced by disk 1. A simultaneous request for block B1 would have to wait, but a request for B2 could be serviced concurrently.

Note: A1, B1, etc each represent one data block

Every time a data block (sometimes called a chunk) is written on a disk in an array, a parity block is generated within the same stripe. (A block or chunk is often composed of many consecutive sectors on a disk, sometimes as many as 256 sectors. A series of chunks [a chunk from each of the disks in an array] is collectively called a stripe.) If another block, or some portion of a block is written on that same stripe, the parity block (or some portion of the parity block) is recalculated and rewritten. The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity blocks.

Interestingly, the parity blocks are not read on data reads, since this would be unnecessary overhead and would diminish performance. The parity blocks are read, however, when a read of a data sector results in a CRC error. In this case, the sector in the same relative position within each of the remaining data blocks in the stripe and within the parity block in the stripe are used to reconstruct the errant sector. The CRC error is thus hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive on-the-fly.

This is sometimes called Interim Data Recovery Mode. The main computer is unaware that a disk drive has failed. Reading and writing to the drive array continues seamlessly, though with some performance degradation.
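The parity reconstruction described above is just XOR arithmetic; a minimal sketch with toy chunk sizes (illustrative only, not any controller's actual layout):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, as a RAID-5 parity engine would."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe on a four-disk array: three data chunks plus their parity chunk.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the disk holding the second chunk: the missing data is
# the XOR of every surviving chunk in the stripe, parity included.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == b"BBBB"
```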

In RAID 5 arrays which have only one parity block per stripe, the failure of a second drive results in total data loss.

The maximum number of drives is theoretically unlimited, but it is common practice to keep the maximum to 14 or less for RAID 5 implementations which have only one parity block per stripe. The reason for this restriction is that there is a greater likelihood that a drive will fail in an array when there is a greater number of drives.

In a RAID-5 setup, the Mean Time Between Failures (MTBF [wikipedia.org]) value for the array as a whole (i.e. the time before two disks die and you lose data) often becomes smaller (i.e. the array fails more often) than that of a single disk (since there are more disks that can fail, despite the added redundancy), but as one can often replace a disk before the second one fails, it is usually more difficult to lose data. One should, however, be aware that many disks together increase heat, which lowers the real-world MTBF.

In implementations with greater than 14 drives, or in situations where extreme redundancy is needed, RAID 5 with dual parity (also known as RAID 6) is sometimes used, since it can survive the failure of two disks.

RAID 6

A RAID 6 Array uses block [wikipedia.org]-level striping with parity [wikipedia.org] data distributed 'twice' across all member disks. It was not one of the original RAID levels.

In RAID-6, parity is generated and written to two distributed parity stripes, on two separate drives.

Note: A1, B1, etc each represent one data block

RAID-6 is more redundant than RAID-5, but is very inefficient with a low number of drives. See also Double Parity, below, for another more redundant implementation.


Nested RAID Levels

Many storage controllers allow RAID levels to be nested. That is, one RAID array can use another as its basic element.


RAID 0+1

A RAID 0+1 Array is a RAID array used for both replicating and sharing data among disks. The difference between RAID 0+1 and RAID 10 is the location of each RAID system - is it a stripe of mirrors or a mirror of stripes? Consider an example of RAID 0+1: 6 120GB [wikipedia.org] drives need to be set up on a RAID 0+1 array. Below is an example configuration:

where the maximum storage space here is 360GB, spread across two arrays. The advantage is that when a hard drive fails in one of the RAID 0 arrays, the missing data can be transferred from the other array. However, expanding the array requires adding two hard drives at a time to keep storage balanced between the arrays.

It is not as robust as RAID 1+0 and cannot tolerate two simultaneous disk failures unless they are in the same stripe. That is to say, once a single disk fails, every disk in the other stripe becomes an individual single point of failure. Also, once the single failed disk is replaced, all the disks in the array must participate in the rebuild to restore its data.

RAID 10

A RAID 10 Array, sometimes called RAID 1+0, is similar to a RAID 0+1 array except that the RAID levels used are reversed - RAID 10 is a stripe of mirrors. Below is an example where 3 collections of 120 GB RAID 1 arrays are striped together to add up to 360 GBs of total storage space:

One drive from each of the RAID 1 arrays could fail without damaging the data. However, if the failed drive is not replaced, the single working hard drive then becomes a single point of failure for the entire array. If that single hard drive then fails, all data stored in the entire array is lost.

Extra 120GB hard drives could be added to any one of the RAID 1 arrays to provide extra redundancy. Unlike RAID 0+1, all the 'sub-arrays' do not have to be upgraded at once.

Proprietary RAID Levels

Although all implementations of RAID differ from the idealized specification to some extent, some companies have developed entirely proprietary RAID implementations that differ substantially from the rest of the crowd.

Double Parity

One common addition to the existing RAID levels is Double Parity, sometimes implemented and known as Diagonal Parity. As in RAID-6, there are two sets of parity check information created. Unlike RAID-6, however, the second set is not a mere extra copy of the first. Rather, most implementations of Double Parity calculate the extra parity in a different direction. If we were to call traditional RAID parity horizontal, then Double Parity might be calculated vertically, or even diagonally, across a matrix of disks.

Note: A1, B1, etc each represent one data block

Drives can be organized into orthogonal matrices, where rows of drives form parity groups, similar to RAID 5, while the columns also keep consistent parity data with each other. If a single drive fails, either its row or column parity may be used to rebuild it. Several drives on any one column or row may fail before the array is corrupt. Any group of non-coincident drives may fail before the array is corrupt.
RAID 7

RAID 7 is a trademark of Storage Computer Corporation [wikipedia.org]. It adds caching [wikipedia.org] to RAID-3 or RAID-4 to improve performance.

RAID S or Parity RAID

RAID S is EMC Corporation's [wikipedia.org] proprietary striped parity RAID system used in their Symmetrix [wikipedia.org] storage systems. It is similar to RAID-4 in that it does not stripe data across disks. Instead, each volume exists on a single physical disk, and multiple volumes are arbitrarily combined for parity purposes. EMC originally referred to this capability as RAID-S, and then renamed it Parity RAID for the Symmetrix DMX platform. EMC now offers standard striped RAID-5 on the Symmetrix DMX as well.

Note: A1, B1, etc each represent one data block. A, B, etc are entire volumes.

Jesus Christ Mojimba (4, Informative)

Anonymous Coward | more than 9 years ago | (#9913027)

Just post the relevant Wiki information about RAID 0; we don't need RAID's whole life history ;).

RAID 0

A RAID 0 Array (also known as a stripe set) splits data evenly across two or more disks with no parity information for redundancy. RAID-0 is normally used to increase performance, although it is also a useful way to create a small number of large virtual disks out of a large number of small ones. Although RAID-0 was not specified in the original RAID paper, an idealized implementation of RAID-0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID-0 implementations with more than two disks are also possible; however, the reliability of a given RAID-0 set is equal to the average reliability of each disk divided by the number of disks in the set. That is, reliability (MTBF) decreases linearly with the number of members - so a set of two disks is half as reliable as a single disk. The reason for this is that the file system is distributed across all disks. When a drive fails the file system cannot cope with such a large loss of data and coherency since the data is "striped" across all drives. Data can be recovered using special tools; however, it will be incomplete and most likely corrupt.

RAID-0 is useful for setups such as large read-only NFS servers where mounting many disks is time-consuming or impossible and redundancy is irrelevant. Another use is where the number of disks is limited by the operating system. In Windows, the number of drive letters is limited to 24, so RAID-0 is a popular way to use more than this many disks. However, since there is no redundancy yet data is shared between drives, hard drives cannot be swapped out, as all disks are interdependent upon each other.

RAID 0 was not one of the original RAID levels.

wow, good cut and paste (0)

Anonymous Coward | more than 9 years ago | (#9913029)

What, you can't just be original??

MOD PARENT DOWN (0)

Anonymous Coward | more than 9 years ago | (#9913034)

Score:-1, Very Poor Attempt At Whoring Karma

Idiot? (0)

Anonymous Coward | more than 9 years ago | (#9913042)

I don't think anonymous cowards can Karma whore, unless their NICK is named anonymous coward, you fucking god damn idiot!!!!!!!!!!!!

--
3dinfo@maficstudios.com

your post is not allowed (0)

Anonymous Coward | more than 9 years ago | (#9913171)

> Raid article from Wikipedia [wikipedia.org], it
> is released under the GFDL so is able to be
> reproduced fully or in part anywhere.

The GFDL is not a free-software licence. Your posting is copyright infringement. You did not include a copy of the GFDL *within* *the* *document*.

At least Wikipedia has no front/back-cover requirements. Otherwise you would have had to change Slashdot's front page to be allowed, too.

Better way to use two drives (1)

twelveinchbrain (312326) | more than 9 years ago | (#9913010)

If you have exactly two disk drives on a PC, you will get far better performance by intelligently choosing which drives hold which partitions. For my home workstation, for instance, I almost always have some program slowly writing 4GB files (archives *ahem* of DVD's), while another drive is busy fetching my program files and every day data. This configuration is much, much faster than if the same drives were on a RAID 0 array, because on a personal workstation, disk seek time is a much bigger factor than the transfer rate.

performance vs. reliability (1)

mjh (57755) | more than 9 years ago | (#9913012)

I won't use Raid-0 on my desktop unless I have a short term need for extra performance. Desktop based hard drives are just too unreliable to lose ALL of your data if you lose one of the striped drives.

In the two computers I have at my house, I've lost 4 IDE hard drives in the last 6 months! Maybe RAID-1, but even then I'd prefer a backup solution instead of a real-time data redundancy solution. (It's hard to restore a file that you *accidentally* deleted from a RAID based solution.)

Until SCSI gets cheaper or IDE gets more reliable, neither of which I see happening any time soon, I am unlikely to use RAID on the desktop as any sort of long-term solution.

Re:performance vs. reliability (2, Insightful)

lachlan76 (770870) | more than 9 years ago | (#9913097)

Until SCSI gets cheaper or IDE gets more reliable, neither of which I see happening any time soon

It's not the interface that makes IDE drives less reliable, it's just that manufacturers want to keep server/workstation drives out of desktop machines for good reason - the 10/15kRPM drives need to be cooled, and as soon as people start to put them in desktop machines, they're gonna get a lot of warranty returns. Thereby lowering their profits further, and removing any advantage that they had.

There are two possible choices:
  1. Make server drives with an IDE interface
  2. Make cheaper drives with SCSI interface, thereby forcing it into the mainstream

#1 has been done by WD with their Raptor drives, but they are still expensive, and have a low capacity to reduce heat.
#2 is unlikely to work unless all the manufacturers do it at once, which isn't going to happen. And, they can't separate the pro and consumer drives as easily as when the consumer drives were IDE and pro were on SCSI.

There just isn't anything in it for the drive makers.

real raid = scsi + raid 5 (1)

Anonymous Chemist (62398) | more than 9 years ago | (#9913013)

What can I say? If you're looking for real RAID, get an Intel U160 or U320 backplane and some 15K SCSI drives off eBay (brand new, ~$100 to $150 each). For $600 you can get a RAID array with blazing performance and 5-year warranties on the drives, plus the ability to rebuild the array if you lose a drive. Of course the controller is a little more expensive, but years of accumulated data is priceless, right??

Since I lost a lot of data using a ide raid 0 system, I decided to bite the bullet and go real raid. There is absolutely no comparison.

One thing's for sure, as always the old saying holds true: buy nice or buy twice.

There is no real alternative to SCSI RAID yet.

RAID-0 is stupid. (5, Informative)

slamb (119285) | more than 9 years ago | (#9913031)

Here's why no one in their right mind uses RAID-0 on data that they care about:

Unlike other RAID-levels, RAID 0 does not offer protection against drive failure in any way, so it's not considered 'true' RAID by some (the 'R' in RAID stands for 'redundant', which does not apply to RAID-0).

When you have multiple hard drives, it's more likely that one will fail than if you just have one. For the obvious statistical reasons. Plus because of heat problems in many systems.

In a non-RAID setup with multiple hard drives, when one fails, you lose whatever was on that drive.

With RAID-n (for non-zero n), you lose nothing. You say "oh well", put in a spare drive, and send the old one back for replacement. (In the other order if you're cheap.) The array rebuilds itself. Without even shutting down the machine, if you have the hot-swappable drive cages.

With RAID-0, you lose everything on all of your hard drives.

RAID-0 is considerably less reliable than a single hard drive.
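A back-of-the-envelope model of that claim, assuming independent failures and a made-up per-disk survival rate (the numbers are purely illustrative):

```python
# A RAID-0 array survives only if *every* member disk survives.
def stripe_survival(p_disk, n_disks):
    return p_disk ** n_disks

p = 0.97  # hypothetical one-year survival probability of one disk
print(f"single disk  : {stripe_survival(p, 1):.4f}")  # 0.9700
print(f"2-disk RAID-0: {stripe_survival(p, 2):.4f}")  # 0.9409
print(f"4-disk RAID-0: {stripe_survival(p, 4):.4f}")  # 0.8853
```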

Re:RAID-0 is stupid. (0)

Anonymous Coward | more than 9 years ago | (#9913053)

"With RAID-n (for non-zero n)"

It is more accurate to say for n equal to or greater than 1 ;).

Also it would be better to say for any n belonging to the set of integers greater than or equal to one :P.

P.S. Good post heh.

Re:RAID-0 is stupid. (1)

lachlan76 (770870) | more than 9 years ago | (#9913119)

With RAID-n (for non-zero n), you lose nothing. You say "oh well", put in a spare drive, and send the old one back for replacement...Without even shutting down the machine, if you have the hot-swappable drive cages

The really high-budget people here have a hot-spare setup.

Re:RAID-0 is stupid. (1)

dfghjk (711126) | more than 9 years ago | (#9913156)

The same argument applies for a single disk drive. No one in their right mind uses a single disk drive on data that they care about.

Multiple disk drives increase the chances of disk-related data loss, but failure of a cooling fan does, too. It is incorrect to assume that a two drive RAID 0 is twice as likely to result in data loss as one drive since you need to consider the entire system and the environment it is in.

Now, is RAID-0 is considerably less reliable than a single hard drive? Depends on how you define "considerably". If you have a hot environment with poor airflow and poor power line quality and no UPS, then the answer is no.

Raid 0 on OS X... hardware or software. (5, Informative)

XavierItzmann (687234) | more than 9 years ago | (#9913051)

Since 2002, I have been using the SIIG Raid 0 http://www.siig.com/product.asp?pid=424 [siig.com] card on a 1999 Sawtooth G4 with 0.48TB of internal storage. Hardware-wise, this is an OEM Acard card; also available from Sonnet and Miglia.

No disk failures to date ---I backup weekly with Apple's Backup 2.0

Here are some benchmarks that compare software RAID 0 performance (included free with OS X) vs. hardware RAID 0: http://www.xlr8yourmac.com/OSX/OSX_RAIDvsIDE_Card_RAID.html [xlr8yourmac.com]

Think of the target (1)

dizzydazed (795256) | more than 9 years ago | (#9913063)

All this talk about losing a drive. Phooey. We are talking desktops here, not servers. Be serious: how many of you back up your home systems? How many companies back up their desktop machines? (For that matter, how many properly back up their servers! -- a depressingly low number.) The speed boost is nice, but the best part is the single drive. A perfect example is using the desktop as a PVR/video editor. Drive *space* is what is needed. RAID 0 is just right for that. I haven't lost a desktop drive in years. I've lost 2 SCSI drives so far off my home server, but given that the drives were ancient and were given to me free, it's no big loss. Annoying, to be sure. Granted, RAID 5 protects your data, but many of us cannot spare the extra $$$ for that on desktops. If it was for a company server, then d'oh! Of course. For home use, particularly when recording HD content, bigger is where it's at.

Latency (1)

Handyman (97520) | more than 9 years ago | (#9913078)

For the "feel" of a machine, latency or "response time" is the most important factor. When the user requests an action, it is the time between the request and the machine's response that counts. For instance, the almost 2x speedup of booting XP means a 2x decrease in a very annoying latency, and it makes the system feel much faster even if nothing else changes. The numerous *small* latencies in a system also count -- don't you hate it when you click a menu and you have to wait a full two seconds before it pops up? The improvements measured in the benchmarks done by tweakers.net don't do justice to the importance of latency. The user doesn't care whether some background process (e.g. eMule) is fast -- he cares whether, when he clicks a button, the result shows up without a noticeable delay. So what they should really be measuring is the time between certain checkpoints in a trace, e.g., the time between the point where the user did something (action) and the time when all the necessary data to respond to that action has been read (response).

Note that the 2x speedup can be easily explained. Windows XP optimizes the boot process by automatically generating traces of disk accesses done at boot, and by reordering the accessed blocks on disk so that they can be read in sequentially on the next boot. And striping over two disks theoretically improves sequential read throughput by... yes, a factor of 2.
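That factor-of-2 estimate follows from a crude throughput model (the disk speed and trace size below are invented for illustration):

```python
def sequential_read_seconds(size_mb, per_disk_mb_s, n_disks):
    """Idealized: a purely sequential read scales with the number of stripe members."""
    return size_mb / (per_disk_mb_s * n_disks)

# A hypothetical 500 MB boot trace on 50 MB/s disks:
print(sequential_read_seconds(500, 50, 1))  # 10.0 seconds on a single disk
print(sequential_read_seconds(500, 50, 2))  # 5.0 seconds striped across two
```

Real gains fall short of this because seeks and non-sequential accesses don't scale the same way.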

Re:Latency (0)

Anonymous Coward | more than 9 years ago | (#9913148)

For instance, the almost 2x speedup of booting XP means a 2x decrease in a very annoying latency, and it makes the system feel much faster even if nothing else changes. ... Windows XP optimizes the boot process by automatically generating traces of the disk accesses done at boot, and by reordering the accessed blocks on disk so that they can be read sequentially on the next boot.

Oh, please. All Microsoft did was make the desktop appear sooner, when there's still a ton of shit to load. I've got a home-built Athlon XP 2600 system that is running XP Pro, has an assload of RAM, and is kept well-tuned and malware-free, and that fucker takes a good 30-45 seconds after the desktop appears before it actually pays attention to any of my attempts to do anything.

I'd rather have the bootup latency, where I *know* I can't do anything, than have a desktop appear before the machine is actually ready to be used. The tease is infuriating-- more so when I switch over to my old 733MHz G4 and see how the OS X GUI generally feels faster than the Windows GUI on a PC running at three times the speed.

Here's an Idea (0)

Anonymous Coward | more than 9 years ago | (#9913109)

How about someone gives me the ability to virtually RAID 1 my RAID 0 setup? An extra layer with an extra controller, sure...but fault tolerant, yes?

I support desktop RAID 0 boxes... (1)

Aphrika (756248) | more than 9 years ago | (#9913122)

They're used quite frequently in video editing - specifically as scratch disks. Great performance, and no immediate need to back up either frequently or extensively. This is a great use for RAID 0. In this case, the OS isn't on the RAID 0 partition, so a drive failure isn't too much of a headache to solve.

A lot of people seem to be hung up on the "if one drive fails, you lose everything" problem. Well, take two scenarios: 2x80GB drives in RAID 0, or a single 160GB drive. Either way, a single drive failure means I lose everything, but one option gives better performance and is cheaper to replace. Which would you choose?
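The failure math behind that tradeoff is easy to write out. A rough sketch, assuming a purely illustrative 3% annual failure rate per drive and independent failures (both are simplifications):

```python
# RAID 0 loses all data if ANY member drive fails:
#   P(loss) = 1 - P(all drives survive)
# The 3% AFR below is an assumed example figure, not a measured one.

def array_loss_probability(per_drive_afr, n_drives):
    return 1 - (1 - per_drive_afr) ** n_drives

p_single = array_loss_probability(0.03, 1)  # one 160GB drive
p_stripe = array_loss_probability(0.03, 2)  # 2x80GB RAID 0
print(round(p_single, 4), round(p_stripe, 4))  # → 0.03 0.0591
```

For small p the two-drive stripe is roughly twice as likely to fail in a given year as a single drive, but both are still a single point of failure, which is the parent's point.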

raid zero at home (1)

Mirko.S (696666) | more than 9 years ago | (#9913163)

I see no reason why someone shouldn't use RAID 0 at home...
Well, a friend of mine has a 19" RAID case with 6 hard drives in it... and all together it's only 80GB... THAT'S braindead!
But having 2x200GB in RAID 0 (with vinum, for example) for non-critical data (MP3s, movies, etc.) should be no problem and isn't worth a discussion.
I wouldn't store important (personal) data on RAID 0, though.