
RAID Problems With Intel Core 2?

Zonk posted about 8 years ago | from the arr-aye-eye-dee dept.


Nom du Keyboard writes "The Inquirer is reporting that the new Intel Core 2 processors, Woodcrest and Conroe, are suffering badly when running RAID 5 disk arrays, even when using non-Intel controllers. Can Intel afford to make a misstep now, even in the small subset of users running RAID 5 systems?" From the article: "The performance in benchmarks is there, but the performance in real world isn't. While synthetic benchmarks will do the thing and show RAID5-worthy results, CPU utilization will go through the roof no matter what CPU is used, and the hiccups will occur every now and then. It remains to be seen whether this can be fixed via BIOS or micro-code update."


284 comments

don't worry (4, Funny)

sum.zero (807087) | about 8 years ago | (#15669984)

it's not a bug, just errata ;)

sum.zero

Re:don't worry (5, Funny)

Tackhead (54550) | about 8 years ago | (#15670155)

> it's not a bug, just errata ;)

Where's the bug? My RAID 0+0.999999998 works just fine on my Intel Core 1.99904274017.

Why aren't you running a dedicated controller...? (5, Insightful)

saleenS281 (859657) | about 8 years ago | (#15669995)

If you're running raid5 it's probably in an enterprise setup. If so, why aren't you running a dedicated controller? The CPU should have little to no impact on the raid subsystem...

Seems odd to me that The Inquirer is the only one reporting this. How about a real hardware review site?

Re:Why aren't you running a dedicated controller.. (4, Interesting)

andrewman327 (635952) | about 8 years ago | (#15670040)

I agree with this. For most people, backing up your data every week is a LOT better option for data security. Users who should be using RAID 5 should also have dedicated controllers.


Still, this is a problem for Intel. Their products are supposed to do what they do extremely well under all conditions. I hope that they find a way to fix this admittedly niche problem.

Re:Why aren't you running a dedicated controller.. (5, Insightful)

moggie_xev (695282) | about 8 years ago | (#15670111)

Reading the article, it's all about software RAID and the performance they get.

The interesting question is what other pieces of software that we run will get unexpectedly bad performance.

( I have > 2TB of hardware RAID 5 at home so I was wondering ... )

Re:Why aren't you running a dedicated controller.. (1)

whoever57 (658626) | about 8 years ago | (#15670264)

Reading the article it's all about software raid and the performance they get.
Err, NO! It's about FAKERAID, which is an H/W + S/W combo.

Re:Why aren't you running a dedicated controller.. (5, Insightful)

Albanach (527650) | about 8 years ago | (#15670325)

You are correct that RAID isn't a backup solution, but incorrect when you say if you're using RAID5 you should be in a data centre.

What if you have a lot of photos, music or movies - these aren't unusual things these days. I don't want to go rummaging through DVDs to find the picture I want, I want to fire up f-spot and see it there straight away.

RAID5 provides sensible protection against data loss when using consumer hard disks - software RAID5 is readily available on Linux and hard disks in the 200-300GB range are easily affordable. You can often pick them up for $50 after rebates. So I can get a TB of storage for a few hundred dollars, but to use hardware RAID5 would probably double the cost. Fine if you're an enterprise, but not fine if you're using it at home.

Re:Why aren't you running a dedicated controller.. (5, Insightful)

myz24 (256948) | about 8 years ago | (#15670516)

I agree. It seems on slashdot (and actually, among some of my friends) that you're an idiot if you're not running RAID, but you're equally dumb if you're running RAID5 because it's not a backup solution. It's as if there can't be any gray area in the matter. People make it seem like RAID5 has no purpose or benefit and everyone should just be using striping+backup. To me, the point of RAID5 or other redundancy RAID setups is that it's your first line of recovery for a disk failure. If a disk fails, you replace it and you've suffered little downtime. If something major happens then yes, you restore from backup.

My other issue is with people forgetting the idea behind being sensible about what needs to be protected and how much it should cost. There is no reason why my personal collection of photos, music and video should cost me so much. Software RAID is way more than adequate for providing a cheap way to store my files. If data protection AND peak performance are what you need, then yes you need to go full hardware. WHERE'S THE MIDDLE GROUND PEOPLE?

Re:Why aren't you running a dedicated controller.. (1)

andrewman327 (635952) | about 8 years ago | (#15670600)

I go to school in Washington DC and I live the rest of the time in Southeast Pennsylvania. Both places have been hit by floods recently, which should alert you all to use things other than RAID for backup. RAID does have its place, as a devastating hard drive failure can become a minor 45-minute annoyance.

Re:Why aren't you running a dedicated controller.. (1)

dfn_deux (535506) | about 8 years ago | (#15670626)

You are talking out your rear if you think that going to hardware RAID is going to "double the cost" of a TB+ storage box. You can pick up a nice 3ware card for a few hundred bucks, and if you want to go on the cheap, a Tekram or similar card is less than 100 bucks nowadays.

/me uses hardware raid at work (DataCenter) and at home and is perfectly satisfied with the price/performance ratio...

Re:Why aren't you running a dedicated controller.. (5, Insightful)

jelle (14827) | about 8 years ago | (#15670353)

"I agree with this. For most people, backing up your data every week is a LOT better option for data security. Users who should be using RAID 5 should also have dedicated controllers."

You're generalizing a little too much. For example: I have >1TB storage on my mythtv box (I just like to have a good selection of stuff to watch when I finally get to watch tv, and I'm never at home when the shows I like are being broadcast), and I'm using software RAID5 on that. That is, software RAID5 on shared controllers: seven disks altogether off the mainboard, from a mixture of PATA and SATA connectors. I wouldn't do this on something like a server, but it's plenty fast enough for mythtv. It also gives a lot of protection for the array of disks, and it's a much, much better option than the weekly backup you suggest (first of all, a backup would take ages, cost way more in disks (which wouldn't even fit in the HTPC), and last but not least: without RAID5, if one disk dies, I could lose up to 7 days of recordings...).

Re:Why aren't you running a dedicated controller.. (0)

Anonymous Coward | about 8 years ago | (#15670839)

"I could lose up to 7 days of recordings..."

Oh the humanity..

Re:Why aren't you running a dedicated controller.. (1)

Pyrowolf (877012) | about 8 years ago | (#15670044)

Exactly. I'm baffled how a separate controller would cause such an issue. You would expect more than just RAID controllers to exhibit this behavior, considering even non-Intel cards are causing woes.

Re:Why aren't you running a dedicated controller.. (4, Insightful)

Kadin2048 (468275) | about 8 years ago | (#15670229)

I'm slightly confused.

The articles are both very light on technical details, and somewhat vague as to what's really going on. (Admittedly, maybe they don't know it.) In the first article [theinquirer.net] , they allude to the problems being the result of the "softmodem"-like RAID systems that modern integrated motherboards use, which would remove some of the blame from the processor. But then they also suggest that the same problem occurs with dedicated RAID controllers [theinquirer.net] (IBM ServeRAIDs -- I think these are dedicated controllers), which don't cause too much CPU load at all ... further implicating the mobo. However, similar mobos with AMD processors didn't experience the problem, so there's obviously something going on that's Intel's fault.

It doesn't seem like it would be that difficult to pin the blame down to the particular component: is it the integrated RAID subsystem utilizing the processor inefficiently? Or is it the processor itself, being slow? And if it was the processor, why wouldn't this slowness be exhibited in other situations?

Seems to me that what needs to happen is for somebody to test a Conroe processor in a motherboard that doesn't include any of the integrated, offload-work-to-the-processor subsystems (RAID, sound, Ethernet), use a 'real' hardware RAID controller, and see what the results are. If there are still problems in that scenario, then there would seem to be something wrong with the processor, and this could be confirmed with simulative benchmarks.

As a criticism of Intel's complete "systems" (processor plus chipset) I suppose this is valid, but I'd like to see more of a breakdown as to where the performance hit is coming from.

Re:Why aren't you running a dedicated controller.. (2, Insightful)

DudemanX (44606) | about 8 years ago | (#15670739)

I'd also like to see how a Pentium D would perform in the same system, seeing as how it's socket-compatible with Conroe. This would help isolate whether it is indeed the CPU or if it's more to do with the I/O subsystem (chipset).

hi man, do you forget SO/HO users? (0)

Anonymous Coward | about 8 years ago | (#15670066)

I know a lot of people who are using the on-board RAID-5 feature.

Re:Why aren't you running a dedicated controller.. (1)

Shoeler (180797) | about 8 years ago | (#15670078)

The point TFA makes is not that a RAID5 setup would be used on a desktop, but that real-world performance seems to suffer on this chip.

Am I hallucinating to recall something like this happening a long time ago with Intel?

Re:Why aren't you running a dedicated controller.. (2, Informative)

lukas84 (912874) | about 8 years ago | (#15670098)

Yes, I thought about this too.

Using RAID5 in software (be it completely in software like Linux MD or Windows Dynamic Disks, or 99% in software, like most onboard RAID controllers out there) isn't a good idea if you want to run an "enterprise" setup. It might be okay for your mom's basement, or for test systems.

But production systems should be using real RAID controllers, equipped with half a gig of cache memory, a battery backup for the cache in case of a power failure, and a dedicated processor for the RAID5 overhead.

Intel might've screwed this up, but it will only affect non-professional IT.

Re:Why aren't you running a dedicated controller.. (1, Interesting)

Anonymous Coward | about 8 years ago | (#15670166)

Intel might've screwed this up, but it will only affect non-professional IT.

Or those with budgets.

Re:Why aren't you running a dedicated controller.. (1, Interesting)

lukas84 (912874) | about 8 years ago | (#15670281)

Or those with budgets.


If you can't afford to do something right, don't do it.

Re:Why aren't you running a dedicated controller.. (1)

Jedi Alec (258881) | about 8 years ago | (#15670513)

If you can't afford to do something right, don't do it.

hope you're happy with the way your life is going ;-)

Re:Why aren't you running a dedicated controller.. (1)

lukas84 (912874) | about 8 years ago | (#15670649)

hope you're happy with the way your life is going ;-)

This isn't about life. It's about a professional decision as someone working for a company. Before deploying half-assed solutions, it's usually better to deploy nothing at all.

Re:Why aren't you running a dedicated controller.. (1)

norton_I (64015) | about 8 years ago | (#15670740)

This isn't really reasonable. Software RAID can be as good as many hardware RAID configurations. Modern CPUs are very, very fast, and in many cases can calculate parity faster than dedicated controllers, at the cost of some CPU overhead. However, the cost of the CPU can be much less than that of the controller. Also, a large disk cache helps reduce the read-parity-write overhead of RAID 5, and most systems have more cache than the drive controller you would pair them with. Finally, if my computer has a UPS, the value of battery-backed cache is limited.
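
For anyone counting the operations, here is a minimal C sketch of the read-parity-write path for a RAID 5 small write; the function name and buffers are invented for illustration, and this is not the actual md driver code:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * RAID 5 small-write parity update: to overwrite one data block you
     * read the old data and the old parity, compute
     *     new_parity = old_parity ^ old_data ^ new_data
     * and write back both the new data and the new parity. The two extra
     * reads and the extra write are the overhead a large cache helps hide.
     */
    static void raid5_update_parity(uint8_t *parity,         /* old parity, updated in place */
                                    const uint8_t *old_data, /* block being replaced */
                                    const uint8_t *new_data, /* replacement block */
                                    size_t len)
    {
        for (size_t i = 0; i < len; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }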

Obviously there are many cases where hardware raid is desirable, but to say that any "real" system must use the most expensive hardware possible is just wrong.

Re:Why aren't you running a dedicated controller.. (2, Informative)

cyanics (168644) | about 8 years ago | (#15670115)

I totally agree. If this is actually a RAID-5 setup, then it requires at minimum 3 drives. Most onboard (Intel) RAID controllers are only set up for 0, 1, 0+1, or 10, not RAID 5. I don't see how it could possibly be correlated to the CPU. It seems much more likely that, if it is a new north/south bridge, the problem is with the I/O controller.

CPU utilization in RAID5 configurations is almost entirely offloaded to the RAID controller.

The article (including spelling errors) fails to mention a lick about the RAID controller. Only that "it's a cpu problem."

Re:Why aren't you running a dedicated controller.. (1)

Ruie (30480) | about 8 years ago | (#15670133)

If you're running raid5 it's probably in an enterprise setup. If so, why aren't you running a dedicated controller? The CPU should have little to no impact on the raid subsystem...

This test is interesting for two reasons:

  • Cheap cluster nodes or desktops - one might not want to shell out $300+ for a dedicated controller
  • RAID code basically just munches data around. If software RAID performance is bad, it is likely that the performance of interpreted and bytecode/JIT languages (such as perl, python, tcl/tk, java, etc.) suffers as well. This would be a good reason to avoid the CPU completely. I really hope this is just some sort of driver issue and not an inherent CPU weakness.

Re:Why aren't you running a dedicated controller.. (1)

saleenS281 (859657) | about 8 years ago | (#15670158)

Python/perl/java have not suffered in any tests I've seen. I guess that leads me to question these findings even more.

Re:Why aren't you running a dedicated controller.. (2, Interesting)

cnettel (836611) | about 8 years ago | (#15670227)

My personal "analysis" is that this sounds much more like a DMA issue, either in chipsets, in the processors, or in OSes. Core 2 does some speculative prefetching and uses a quite different cache management scheme, so one naive idea would be that some piece of code or hardware got away with doing things improperly before, and a very rare race condition might have become commonplace. If that's the reason, it might be easy to fix. Of course, it might also mean that the prefetching or cache sharing between the cores (or a couple of other things) is actually faulty...

Re:Why aren't you running a dedicated controller.. (2, Interesting)

kimvette (919543) | about 8 years ago | (#15670715)

Actually the market has become so diluted with everyone's jumping into the RAID game (thanks to HighPoint Tech and Intel with their hybrid solutions) that it's becoming increasingly difficult to discern the true hardware RAID controllers from the hybrid models. Of course there are the companies that won't so much as touch software RAID (namely 3ware), but Promise, Koutech, and even Adaptec all are very slick with their descriptions of their controllers and make it unclear whether their products are actual RAID controllers or if they offload the processing to the CPU. If you want to give a small business (mom & pop, or a larger business with a tightass PHB who sees IT as solely a cost center rather than an essential tool to keep things going) better assurance of data integrity than a single HDD will provide, and they're NOT willing to back up regularly, and obviously won't spend $300-$700 for an entry-level GOOD RAID5 controller, then a hybrid solution may be all that you can offer them. Given that these controllers are being implemented on motherboards more and more now, the performance they provide has to be reasonable, without hogging the processor.

Also, when you do find a hardware controller: will it run in your board? In other words, if it's PCI, do you actually have a PCI slot to fit it? This is especially a problem in a high-end consumer box or in a lower-end workstation, where you might have one or two PCI slots and the rest are PCI-E x1 slots. When you're likely going to have a GOOD sound card and a capture card in your legacy PCI slots, or maybe a multi-port FireWire 400 card, where is the hardware RAID controller going to live? Obviously the solution is going to be to go with an embedded solution on the motherboard, hopefully with a model that doesn't totally suck.

Re:Why aren't you running a dedicated controller.. (1)

RingDev (879105) | about 8 years ago | (#15670151)

Not to mention that most workstations and home PCs don't run RAID 5. If the Core/Core2 chipsets are targeted at machines that don't run RAID, it's not a big deal. If you are running RAID 5, it's likely in a server environment where you would probably have a RAID controller and an Opteron or Xeon based chip.

-Rick

Re:Why aren't you running a dedicated controller.. (0)

Anonymous Coward | about 8 years ago | (#15670238)

If the Core/Core2 chip sets are targeted for machines that don't run RAID, it's not a big deal.

Everyone's downplaying this, but nobody's asking the real question: Why does it suck at RAID? If we don't know why it's doing this, what if there are other things that will cause the Core to likewise suck?

Re:Why aren't you running a dedicated controller.. (1)

Nom du Keyboard (633989) | about 8 years ago | (#15670688)

most workstations and home PCs don't run RAID 5.

I, for one, intend to run RAID 5 on my next home system, and not just for Geek Pride. Three hard drives with the advantage of mirroring for reliability, striping for performance, and the loss of only 1/3rd of my hard drive space for this redundancy, versus 1/2 for mirroring alone, and no redundancy with striping alone. And since I'm doing this for improved performance, I want that performance. Three modern moderate-performance hard drives are hardly expensive, and RAID 5 can make up for the otherwise average performance of 7200rpm drives.

Re:Why aren't you running a dedicated controller.. (1)

bano (410) | about 8 years ago | (#15670833)

You're looking for "improved performance" out of RAID5?
Never used it, or even read about it, I'm guessing.
RAID5, while excellent for redundancy, is quite slow when you are talking about a lot of writes.

Re:Why aren't you running a dedicated controller.. (1)

Nom du Keyboard (633989) | about 8 years ago | (#15670706)

If the Core/Core2 chip sets are targeted for machines that don't run RAID, it's not a big deal. If you are running RAID 5, it's likely in a server environment where you would probably have a RAID controller and a Opteron or Xeon based chip.

If you'd read TFA you'd see that the problem has shown itself with a Woodcrest (the next Xeon) CPU using an IBM RAID controller.

Re:Why aren't you running a dedicated controller.. (1)

RingDev (879105) | about 8 years ago | (#15670828)

Sorry, my bust, I'm illiterate.

-Rick

Because software raid outperforms controllers. (1, Informative)

Anonymous Coward | about 8 years ago | (#15670153)

If you have a dedicated file server, you're likely to find that software raid will significantly outperform a "HW" controller.
Remember that all a HW raid controller is, is a low end (compared to Xeons, etc) embedded CPU running software not unlike what your software raid solution would run.


This extra coprocessor for RAID is great when you have a box doing many different things like rendering, etc. But on a dedicated fileserver you'll be better off using the really fast CPU rather than the much slower raid controller chip to do the RAID logic.

Re:Why aren't you running a dedicated controller.. (2, Informative)

martok (7123) | about 8 years ago | (#15670183)

Because it's often slower to do so. We ran tests on a good Adaptec U320 RAID controller about a year back, and though CPU usage was good, we got much better performance out of Linux softraid5. I would suspect this was because the host CPU was faster than the one on the controller.

Not to mention there is a huge cost savings in going with a softraid solution.

Re:Why aren't you running a dedicated controller.. (1)

Jeff DeMaagd (2015) | about 8 years ago | (#15670652)

It probably should be pointed out that many software RAID systems use a dedicated channel for every drive. RAID-5 on a SCSI hardware RAID adapter doesn't do this. The more drives you operate on the same channel, the bigger the issue can get. This could be an easy explanation as to why software RAID would be faster in your circumstance.

Re:Why aren't you running a dedicated controller.. (1)

Karma Farmer (595141) | about 8 years ago | (#15670232)

Seems odd to me that the Inquirer is the only one reporting this.
Do you consider it equally odd when a news article is only reported in Pravda, The Sun, the Washington Times, or WorldNetDaily?

Re:Why aren't you running a dedicated controller.. (0)

Anonymous Coward | about 8 years ago | (#15670363)

The problem persists even if you use a dedicated controller.

Re:Why aren't you running a dedicated controller.. (1)

kimvette (919543) | about 8 years ago | (#15670388)

Not any more. With even consumer-level boards offering embedded RAID5 and RAID 1+0/0+1 support at the $100-$150 price level, and with hard drives being uber-cheap nowadays, there is absolutely no reason you won't see an explosion of growth in the use of RAID5 in at least higher-end home and SOHO machines.

Re:Why aren't you running a dedicated controller.. (1)

Nicolas MONNET (4727) | about 8 years ago | (#15670436)

If you're running raid5 it's probably in an enterprise setup.

I have installed a software RAID5 at work for online backups of workstations. 250GB SATA disks cost nothing (~€80); it'd pain my anus to fork out a kilobuck or two to drive them. Sorry if that's not enterprisy [thedailywtf.com] enough for you!

Re:Why aren't you running a dedicated controller.. (4, Insightful)

temojen (678985) | about 8 years ago | (#15670865)

If so, why aren't you running a dedicated controller?

Because if your dedicated controller goes you have to find the same make & model of controller. On no notice. Possibly a few years after that make and model has been discontinued.

With software RAID-5, any controller that works with your host bus (PCI) and HDD bus (ATA, SATA, or SCSI) will do just fine.

Xserve? (1)

mrxak (727974) | about 8 years ago | (#15670001)

Does this mean that Apple won't be using Intel chips in their Xserves for a while?

Small subset? (0)

Anonymous Coward | about 8 years ago | (#15670015)

Just because you don't use RAID in your basement doesn't mean the market is small.

Why bother with Intel right now? (-1, Troll)

gasmonso (929871) | about 8 years ago | (#15670023)

AMD produces a better/cheaper product. Why bother using Intel right now anyways? Intel has held the lead for so long because of marketing, not performance/quality. I'm glad to see Intel having their feet put to the fire... maybe they'll focus on quality and not marketing.

http://religiousfreaks.com/ [religiousfreaks.com]

Problem (3, Insightful)

laffer1 (701823) | about 8 years ago | (#15670031)

I don't get what the problem is. Are there specific instructions used often in raid 5 algorithms that are slow on the new chips? Is it bus contention?

Re:Problem (1)

afidel (530433) | about 8 years ago | (#15670206)

My guess is it's speed throttling introducing delay into the occasional execution of these instructions, whereas the chip is running flat out when going through an artificial benchmark. That's pure speculation on my part though.

Re:Problem (1)

martok (7123) | about 8 years ago | (#15670220)

AFAIK, RAID5 is heavily dependent on an XOR algorithm. I know Linux md has several ways of doing it; it can use SSE, SSE2, etc. I'd be interested in seeing results of the actual XOR throughput.
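
A rough way to compare raw XOR throughput across CPUs is a standalone microbenchmark like the C sketch below; it is only an illustration (buffer size and pass count are arbitrary), not the kernel's own boot-time XOR speed test:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* XOR one buffer into another repeatedly and report MB/s. */
    #define BUF_SIZE (4u * 1024 * 1024)   /* 4 MB, big enough to spill out of cache */
    #define PASSES   64

    int main(void)
    {
        uint64_t *dst = malloc(BUF_SIZE);
        uint64_t *src = malloc(BUF_SIZE);
        if (!dst || !src)
            return 1;
        memset(dst, 0xAA, BUF_SIZE);
        memset(src, 0x55, BUF_SIZE);

        clock_t start = clock();
        for (int p = 0; p < PASSES; p++)
            for (size_t i = 0; i < BUF_SIZE / sizeof(uint64_t); i++)
                dst[i] ^= src[i];                  /* the core parity operation */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        uint64_t check = 0;                        /* checksum keeps the work live */
        for (size_t i = 0; i < BUF_SIZE / sizeof(uint64_t); i++)
            check ^= dst[i];

        printf("%.1f MB/s (checksum %llx)\n",
               (double)BUF_SIZE * PASSES / (1024 * 1024) / secs,
               (unsigned long long)check);
        return 0;
    }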

Re:Problem (2, Interesting)

TheRaven64 (641858) | about 8 years ago | (#15670248)

Software RAID 5 does:

  • Load byte 1.
  • Load byte 2.
  • XOR bytes 1 and 2.
  • Store the result.

There are a few things that could be wrong here. The XOR performance could be bad. This seems a bit unlikely, but XOR is not an incredibly common operation, so it wouldn't slow down too much else.

It could be that the pattern of data was bad for cache usage. This would be slightly odd, since it should be a series of 4K linear blocks.

It could be low I/O performance between the chip and the on-board controller. This seems the most likely; there could well be some multiplexing issues too. I would be interested to see what the results are using a Core Solo.
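
Spelled out in C, that load/XOR/store loop over a full stripe looks roughly like the sketch below; the function and buffer names are invented for illustration, and the real md driver works on whole pages with unrolled/SSE variants rather than byte-at-a-time code:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Full-stripe RAID 5 parity: for each byte offset, load the byte from
     * every data disk's block, XOR them together, and store the result
     * into the parity block.
     */
    static void raid5_compute_parity(uint8_t *parity,
                                     const uint8_t *const data[], /* one block per data disk */
                                     int ndisks, size_t blocklen)
    {
        for (size_t i = 0; i < blocklen; i++) {
            uint8_t p = data[0][i];          /* load from disk 0 */
            for (int d = 1; d < ndisks; d++)
                p ^= data[d][i];             /* load and XOR the remaining disks */
            parity[i] = p;                   /* store the parity byte */
        }
    }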

XOR is very common (3, Informative)

HaeMaker (221642) | about 8 years ago | (#15670300)

You use XOR to clear a register. XOR CX, CX sets the CX register to 0. It is faster than MOV CX, 0.
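
For the curious, the size difference is easy to see with GCC inline assembly; this is just a sketch (32-bit AT&T syntax, GCC-style asm), and the byte counts in the comments are the standard 32-bit encodings:

    /* The classic x86 zeroing idiom: "xor %ecx,%ecx" encodes in 2 bytes
     * (31 C9), while "mov $0,%ecx" takes 5 bytes (B9 00 00 00 00), and
     * modern decoders also treat the XOR form as independent of the
     * register's previous value. */
    static inline unsigned int zero_via_xor(void)
    {
        unsigned int r = 0;                   /* dummy init to keep the compiler quiet */
        __asm__ ("xorl %0, %0" : "+r"(r));    /* r ^= r  ->  always 0 */
        return r;
    }

    static inline unsigned int zero_via_mov(void)
    {
        unsigned int r;
        __asm__ ("movl $0, %0" : "=r"(r));    /* same result, longer instruction */
        return r;
    }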

Re:XOR is very common (1)

cnettel (836611) | about 8 years ago | (#15670481)

The actual opcodes are certainly also more compact, which means more room in the L1 instruction cache. I'm not sure how relevant the speed argument itself would be these days, I would have guessed that they would use 1 cycle on average in any case, but maybe with different characteristics regarding register file usage and whatnot. As XOR has been "The" way to do it on x86, I guess it should continue to work well. On the other hand, Netburst suddenly made shifts relatively expensive. As this (XOR y, y) is so common, I would guess that the register dependency analyzer will class this as a "non-dependency". A naive parsing of the instruction would otherwise naturally conclude that CX was an operand used for both reading and writing, and that the instruction would be dependent on any earlier store to CX.

If there is special handling to fix the register assignment, then it's reasonable to guess that the execution of the instruction is never really done as an actual XOR, either. To speculate even further, one could start imagining all kinds of bad things happening if a "pre-parsing" treats all XORs as "reset register" with some kind of pipeline stall taking place when the XOR actually has to be executed. Quite unlikely, but an entertaining idea, at least to me, right now.

Re:Problem (1)

Mr Z (6791) | about 8 years ago | (#15670564)

I'd be more likely to bet there's SMP locking issues in the driver. The performance of XOR is negligible in the equation here.

--Joe

Re:Problem (1)

javaxjb (931766) | about 8 years ago | (#15670644)

But at the circuit level, XOR is one of the basic gates (http://en.wikipedia.org/wiki/Xor_gate [wikipedia.org] ). It's central to other operations such as add and graphics masking operations (and, as already mentioned, RAID calculations). The engineers would really have to go out of their way to make this slow. There's an example of combining NAND gates to make an XOR in the linked Wikipedia entry, but who would want to waste that kind of real estate on the CPU?

Re:Problem (1)

TheRaven64 (641858) | about 8 years ago | (#15670716)

There is a difference between an XOR gate and the XOR instruction. There are a lot of different components to the XOR operation; the instruction needs to be decoded into micro-ops, dispatched, executed, reassembled and then retired. Any one of these steps could have a bug, and that bug might only exhibit itself when the chip is in a particular state, one which happens to be used a lot by the RAID firmware.

It is, however, a lot less likely than some of the other alternatives.

Re:Problem (2, Interesting)

ivan256 (17499) | about 8 years ago | (#15670674)

Seems more likely to be a scheduling issue to me...

Core 0 loads byte 1, Core 1 loads byte 2, Core 0 or Core 1 has a cache miss on the XOR... (Do the cores share a cache?) Or it could be a locking problem. XOR is very common, and it would surprise me if it was slower than on previous Intel chips.

Re:Problem (1)

ch-chuck (9622) | about 8 years ago | (#15670256)

I think it's more a function of the mobo. Quoth TFA:

"If you are using an Intel's own baby, D975XBX motherboard and put four drives in RAID5 array, an interesting overhead and a slowdown will occur on all upcoming Conroes."

RAID 5 is supported by an optional mobo SATA controller. I still don't get the connection with the processor, but it's probably a mobo glitch that only shows up with certain ones.

These are the cheesy RAID cards, right? (1, Troll)

FatSean (18753) | about 8 years ago | (#15670033)

Article implies that these are the lame RAID systems that the l33t G4m3rz use...the ones that use the CPU to do the real work, rather than having the card do it.

I didn't realize they were so widespread... especially in government! Seems like just an additional avenue for data corruption should the software hiccup and lunch your data.

Re:These are the cheesy RAID cards, right? (4, Informative)

amorsen (7485) | about 8 years ago | (#15670138)

Software RAID is faster and more reliable than hardware RAID. Should your non-RAID controller fail, you just chuck it and get a random new one. If your RAID-controller fails, you have to get another controller exactly the same, sometimes even the same firmware revision, or kiss your data goodbye. And RAID-controllers are notoriously underpowered (SmartArray, I'm looking at you!)

Re:These are the cheesy RAID cards, right? (3, Informative)

InsaneGeek (175763) | about 8 years ago | (#15670247)

Software is more reliable and faster? What are you smoking? To get the performance you've got to turn on write caching; if the system goes down with write caching on, you're very likely (almost guaranteed) to have a corrupt filesystem. To get the reliability you turn off write caching, and performance plummets. Any hardware RAID worth more than 3 cents has battery-backed cache that lets you have write caching and maintain reliability, not even taking into account being able to do some RAID5 operations with only 1 disk IOP.

Re:These are the cheesy RAID cards, right? (1)

Professor_UNIX (867045) | about 8 years ago | (#15670290)

If your RAID-controller fails, you have to get another controller exactly the same, sometimes even the same firmware revision, or kiss your data goodbye.
Depends on your hardware controller. Some cards like the 3Ware controllers store the RAID info on the disks themselves so you can switch out the card with a completely different one and it should still work fine.

Re:These are the cheesy RAID cards, right? (2, Interesting)

Billly Gates (198444) | about 8 years ago | (#15670383)

I was going to recommend 3ware as well. I haven't done any administration work in years, but one of my former employers used them for their servers for that reason. If one dies you can replace the board, and we even kept a few in storage in case of a failure.

Organizations should look into this, rather than just relying on their server vendor, for any RAID setup. It would be nice if they all did, as a server is not a desktop and the data is needed NOW when it goes down.

Re:These are the cheesy RAID cards, right? (0)

Anonymous Coward | about 8 years ago | (#15670331)

Software RAID is faster and more reliable than hardware RAID. Should your non-RAID controller fail, you just chuck it and get a random new one. If your RAID-controller fails, you have to get another controller exactly the same, sometimes even the same firmware revision, or kiss your data goodbye. And RAID-controllers are notoriously underpowered (SmartArray, I'm looking at you!)


You're a fucking retard.

As far as benchmarks, show me the numbers.

As far as reliability, in our operation (you know, hundreds of machines, not your mom's basement) I can pull a spare or a float of the same rev and pop it in.

If you're a smaller operation you should be running dual raid cards each hooked to arrays which mirror each other.

If you can't do that when you get a replacement you'd better damned well know what rev of firmware the old card was running there, sysadmin boy... along with its configuration. It's part of your job. Then flash the card and pop it in.

Either you've got the money to be a production outfit or you don't.

Re:These are the cheesy RAID cards, right? (2, Insightful)

Puff65535 (135814) | about 8 years ago | (#15670687)

No, you're the retard. I have a SAN with all the production data: dual channels, dual switches, a bazillion RAID arrays, the whole nine yards. I still use software RAID for the system disk; nothing on there needs backing up, and all the config files are under version control (repo elsewhere). It's a waste of dual-channel FibreChannel disks to store the system image on the SAN, so I use a pair of cheap SATA disks in the system. RAID 1 means I don't have to re-install if I lose a disk, if the fuzz shows up with a warrant for my logs I just hand them one of the mirror disks, and cloning a server for a quick cutover is cake, none of which justifies "real money".

Re:These are the cheesy RAID cards, right? (1)

MerlynEmrys67 (583469) | about 8 years ago | (#15670511)

Well, software CAN NOT be faster if you are on a high performance RAID system.

Let's say you have a disk subsystem that can perform at 10Gbps. Now I take all of that and run it over a PCI-X bus (the same thing applies to PCIe except the numbers are higher) and get throttled to 6.8Gbps, for a throughput of 4.3Gbps.

Now I have a hardware RAID solution. I get 10Gbps from the disks, process it on the card - ship 6.6 Gbps over the bus, no problem.

Now I haven't even mentioned the CPU utilization. Let's not go there, but yeah - if you are benchmarking disk I/O, host CPU utilization doesn't matter. In real-world scenarios where I actually want to process something from that I/O, I need CPU power available to do the real work.

Low end RAID - I am right there with you... might as well waste the extra CPU cycles available on your desktop system. In real server environments - no way... Hardware RAID all the way (oh yeah - and I do backups on my raid array so if the controller fails, just get a new one and restore)

Re:These are the cheesy RAID cards, right? (1)

rbanffy (584143) | about 8 years ago | (#15670539)

Software may be faster if your RAID controller is very dumb, your CPU is very fast and your computer is under a workload that's very easy on the processor.

If it's a server, with heavy processor load and lots of disk reads and writes, a good, intelligent RAID controller is the way to go.

And having a second identical unit on the shelf is very smart. After all, how much is your data worth?

Re:These are the cheesy RAID cards, right? (1)

h4ck7h3p14n37 (926070) | about 8 years ago | (#15670544)

Sorry, but I have to disagree; software RAID is potentially slower and less reliable. Software RAID systems are going to use the CPU, stealing cycles from your other applications. I've seen utilization as high as 30% during resync operations using Solaris Volume Manager on an E-250 (granted, it's an old machine). Hardware RAID systems are going to contain embedded processors that are responsible for computing checksums, among other things, and take the load off of the CPU. They'll also contain additional disk controllers and power supplies, most likely with redundant components. You also typically get additional features like the ability to cross-connect your RAID system to multiple servers.

Soft-RAID systems typically store the RAID metadata on the disks themselves, thus they're vulnerable to damage/destruction if someone screws up a format operation. Hard-RAID systems are going to store this metadata on separate, battery backed, storage.

I'm not sure why you're concerned about not being able to find a new controller to replace a failed unit. Sun and IBM will ship you a replacement unit almost immediately; are you concerned about finding replacements for EOL'd units? If you're into running EOL equipment (I am), then it's your responsibility to track down replacement parts (yay eBay). I'd suggest keeping at least one replacement on-site.

I can't speak to HP controllers being underpowered, I only have experience with SUN and IBM. I get the impression that you're used to running soft-RAID on small, non-critical systems?

Re:These are the cheesy RAID cards, right? (1)

WoodstockJeff (568111) | about 8 years ago | (#15670167)

They are "wide spread" because a lot of SATA-based boards have these "RAID Controllers" built in, whether you want them or not. Something like 80% of the popular A64 boards have "RAID chips" on them, usually just the RAID 0/1 variety. And there are a lot of $30 add-in cards that are of the same ilk.

Re:These are the cheesy RAID cards, right? (4, Insightful)

afidel (530433) | about 8 years ago | (#15670184)

Actually I would trust the Linux RAID5 software setup more than a LOT of the RAID controller firmware setups, which I have had no end of problems with over the years, including a card that rebuilt the array from the newly inserted drive instead of the other way around! Firmware is, after all, simply software, and software that tends to get a lot less scrutiny than a lot of other classes of software, especially compared to potentially data-eating code in a project like Linux or one of the BSDs.

Oops. (1)

Jerk City Troll (661616) | about 8 years ago | (#15670759)

Did you get a refurbished drive that was once part of a RAID-5 array? ;)

It's only onboard RAID (2, Insightful)

b00m3rang (682108) | about 8 years ago | (#15670113)

You should be using a controller with a dedicated processor anyway.

Re:It's only onboard RAID (1)

catch23 (97972) | about 8 years ago | (#15670296)

I disagree. I've had lots more successes with open source software raid over hardware raid. Usually the raid overhead is relatively minimal anyway. With hardware raid, if your raid card somehow dies (it happens more often than you'd think) you'd have to get the exact same one... which is usually hard when the company that created your raid card went out of business a few years ago. At least with open source software raid, you don't have to worry about those kinds of problems.

Re:It's only onboard RAID (1)

aachrisg (899192) | about 8 years ago | (#15670371)

Echo this. I've got terabytes of software RAID5. Performance with dual CPUs matches what the actual disks can deliver. The good thing is, if my disk controller dies, or the machine the drives are in dies, or some other mishap occurs, I can take those drives, hook them up to any random PC, boot Linux off a CD, and my data will be accessible.

Re:It's only onboard RAID (2, Informative)

lukas84 (912874) | about 8 years ago | (#15670394)

which is usually hard when the company that created your raid card went out of business a few years ago

Professional IT doesn't work like that. You have a maintenance contract on your machine, usually from the machine manufacturer itself (like IBM, HP, Dell, whatever floats your boat). You buy this maintenance contract for the length of time you will need the machine (they're usually available from 3-5 years).

You renew the machine before the contract runs out. IBM, HP or Dell going out of business seems very unlikely to me.

Re:It's only onboard RAID (1)

h4ck7h3p14n37 (926070) | about 8 years ago | (#15670658)

Why are you using a RAID card from a defunct manufacturer? Either buy a replacement before this happens, or stay away from EOL hardware.

Re:It's only onboard RAID (1)

Billly Gates (198444) | about 8 years ago | (#15670356)

RTFA

It happens with hardware-based, non-Intel boards as well; the IBM ServeRAID, for example.

Another poster mentioned that it could be DMA-related, or related to the way the new speculative prefetching works. Perhaps a race condition exists that rarely happened with earlier chips but is hit often with the newer ones, which would slow things down.

Re:It's only onboard RAID (1)

b00m3rang (682108) | about 8 years ago | (#15670645)

Aah... I'd only read the second article earlier today, didn't notice there was another posted here.

Re:It's only onboard RAID (1)

pdbaby (609052) | about 8 years ago | (#15670701)

RTFA
Hi, you must be new here!

Won't matter for most users (1, Insightful)

Anonymous Coward | about 8 years ago | (#15670120)

It won't, if you don't care about data integrity, throughput, or CPU utilization, like me. Most users will be buying the Core 2 for household heating, not superfluous stuff like data access.

Re:Won't matter for most users (2, Funny)

Anonymous Coward | about 8 years ago | (#15670214)

cat coke | nose > keyboard

Re:Won't matter for most users (3, Funny)

firl (907479) | about 8 years ago | (#15670225)

Well, my 60-hard-drive array provides much better heating than my 4-processor SPARC server.

Gee, let me think. (1, Insightful)

Anonymous Coward | about 8 years ago | (#15670122)

Can Intel afford to make a misstep now with even in the small subset of users running RAID 5 systems?"
Yes. Yes, they can. They have a shitload of money, and survived the Pentium that couldn't divide properly.

So much for the lead (-1, Flamebait)

Chabil Ha' (875116) | about 8 years ago | (#15670131)

And here I was thinking that Intel was starting to thwart AMD's gain on its market share. I was even pondering getting an Intel chip. Guess not--to both.

Raid 5 - Non Issue (0, Troll)

Iberian (533067) | about 8 years ago | (#15670193)

RAID 5 is 3 hard drives at the minimum. Most gamers will have 2 that run in a striped configuration, as that is the best price/performance ratio, saving money for the ever-expensive video cards. Corporations, on the other hand, have servers with dedicated RAID cards.
Short answer: Move along, nothing to see here.

Talk about Fear Mongering (1)

hcob$ (766699) | about 8 years ago | (#15670217)

If it truly is a problem with the CPU, then that's what Intel has microcode updates for. A simple little update and voila... no more AMD people shouting and pointing "OHH! OHHH!!!! Intel made a MISTAKE!!!!! Lookie Lookie! HAHA!!!! AMD NEVER MAKES MISTAKES!!"

Of course, this is definitely a case of the pot calling the kettle black. I mean, name one processor in the last 20 years that HASN'T had a bug in it.

Re:Talk about Fear Mongering (1)

WilliamSChips (793741) | about 8 years ago | (#15670242)

Name me one AMD processor nearly as buggy as Intel's offerings.

Re:Talk about Fear Mongering (2, Informative)

hcob$ (766699) | about 8 years ago | (#15670273)

The K6. As a bonus answer: comparable Intel processors at the time also had the microcode update ability.

Next QUESTION!

Re:Talk about Fear Mongering (1)

cnettel (836611) | about 8 years ago | (#15670551)

Even after the already mentioned K6, there certainly were times where the only affordable and performant chipset was a VIA offering with definite problems. The 686C southbridge problem, for example, was naturally completely out of control for AMD, and was also used with Intel chips, but it was more common on the AMD ones. I'm mostly mentioning it now as that was also a performance and data corruption problem showing up much more easily in RAID configs and HD-HD transfers in non-RAIDed systems. The Inquirer articles are mentioning Southbridge problems in one sentence, but they do not elaborate conclusively on what chipsets have been tried this time.

(And, yeah, while Intel was most heavily touting RAMBUS, VIA was the primary source for those who were more afraid to overclock 440BX like crazy on the Intel side, as well.)

Re:Talk about Fear Mongering (1)

plague3106 (71849) | about 8 years ago | (#15670464)

How do you know the microcode update doesn't also have bugs in it? It's a trust thing. This isn't the first time Intel has had a chip that raises questions about trust.

Re:Talk about Fear Mongering (1)

Hymer (856453) | about 8 years ago | (#15670844)

"I mean, name one processor in the last 20 years that HASN'T had a bug in it."
Like the Digital Alpha 21164 and 21264?
...I don't care that it's considered dead... it is the best CPU...
--
...and a Tru64 cluster is what clustering should be like.

doesn't matter (1)

dJOEK (66178) | about 8 years ago | (#15670243)

Real-world businesses use external hardware RAID boxes or SANs.
Using your CPU to play RAID5 controller is just plain dumb.

Now, as long as mirroring works fine, I'm happy.

Re:doesn't matter (0)

Anonymous Coward | about 8 years ago | (#15670506)

--
Exercise caution when modding this message up: the author acts like a jerk when his karma is excellent.
Mod parent -2: One for signature karma whoring, and one for referring to himself in the third person.

Re:doesn't matter (1)

Nom du Keyboard (633989) | about 8 years ago | (#15670804)

as long as mirroring works fine, i'm happyh

Mirroring uses 1/2 the disk space for redundant storage. It works with even numbers of hard drives, 2 and above.

RAID 5 spreads the equivalent of one drive's worth of parity across the array, and works with 3 or more hard drives.

Example: with 4 equally sized hard drives, mirroring would only give you .5 the storage capacity of all the drives. RAID 5 would give you .75 the storage capacity of all the drives. A much better buy, especially considering that many computers only accept 4 SATA drives.
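
The arithmetic is easy to check; a throwaway C sketch (drive counts chosen arbitrarily for illustration):

    #include <stdio.h>

    /* Usable fraction of raw capacity: mirrored pairs keep 1/2,
     * RAID 5 keeps (n-1)/n because one drive's worth goes to parity. */
    int main(void)
    {
        for (int n = 3; n <= 8; n++) {
            double raid1 = 0.5;                   /* mirroring (even n) */
            double raid5 = (double)(n - 1) / n;   /* parity costs one drive */
            printf("%d drives: mirroring %.2f, RAID 5 %.2f of raw capacity\n",
                   n, raid1, raid5);
        }
        return 0;
    }

With 4 drives that prints 0.50 versus 0.75, matching the figures above.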

Class dismissed!

*NOT* just on-board RAID (2, Interesting)

Anonymous Coward | about 8 years ago | (#15670272)

From TFA:
The reason was that there were severe problems when Woodcrest was paired with a 1E RAID field when using IBM ServeRAID controllers. The problems didn't occur just in benchmarking, it was the every-day usage model that produced unexpected errors.

ServeRAID controllers aren't some cheapo CPU-based RAID; it looks like this might be a more serious problem.

Mod up, please (1)

SaDan (81097) | about 8 years ago | (#15670777)

No kidding. That part caught my eye, as we use IBM where I work, and everything is RAID-5. If there's some kind of problem with even the lower end xSeries machines, this will affect our purchasing.

Timing problem (3, Insightful)

toybuilder (161045) | about 8 years ago | (#15670312)

This sounds like a timing problem -- the processors are too fast, causing the system to slow down.

There was a similar problem that I had to wrestle with on Linux when running 3Ware RAID controllers with RHEL3 on fast dual-processor servers. When battery-backed write caching was turned on, the fast acceptance of IO requests (by the CPUs and then by the hardware RAID controller) led to awesome sustained performance for short bursts, but under constant load it would suddenly hit a wall and then IO would practically hang. (https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=121434)

Is the sky falling? (4, Funny)

DysenteryInTheRanks (902824) | about 8 years ago | (#15670318)

Can Intel afford to make a misstep now, even in the small subset of users running RAID 5 systems?

No. No, it cannot. Sell your stock. Rip the CPU out of your boxen. One hundred ten billion dollars in market capitalization has disappeared in a flash with the publication of this groundbreaking article in the Inquirer.

Intel has signed its own death warrant. As goes RAID5, so goes the world.

The Inq (1, Flamebait)

yem (170316) | about 8 years ago | (#15670330)

Every Inquirer story I've clicked through to from Slashdot has been subsequently debunked.
Anyone got independent verification of this startling discovery?

Oh Ye of Little Faith (1)

muffinass (763257) | about 8 years ago | (#15670724)

You seem to be resisting the group-think... surely you didn't read TFA, as your sig suggests. Here ya go. [r33b.net]

I dunno I've had bad luck with Raid5 (1)

vertinox (846076) | about 8 years ago | (#15670770)

I tried RAID5 once with 5 hard drives, but one drive failed while I was swapping out another, and the whole thing went kaput, so I had to do the image all over again.

For safety's sake, I just used 6 hard drives: 2 pairs striped, with those pairs mirroring each other, plus 2 extras that would stand in when a drive failed. So in theory I could lose 3 drives instead of two and still keep my data.

And yes... this was on my personal setup for no good reason other than a big ego, but in reality RAID5 isn't that useful or efficient unless you are running enterprise applications that require 100% uptime, and you have way more than 3 hard drives (just in case two of them fail on you at once for no particular reason), and then you should have that server mirrored by another one, so if one server fails because of a bad motherboard/power supply etc., you'll have another server ready to go.