
RAID Problems With Intel Core 2?

Nom du Keyboard writes "The Inquirer is reporting that the new Intel Core 2 processors Woodcrest and Conroe are suffering badly when running RAID 5 disk arrays, even when using non-Intel controllers. Can Intel afford to make a misstep now, even with the small subset of users running RAID 5 systems?" From the article: "The performance in benchmarks is there, but the performance in real world isn't. While synthetic benchmarks will do the thing and show RAID5-worthy results, CPU utilization will go through the roof no matter what CPU is used, and the hiccups will occur every now and then. It remains to be seen whether this can be fixed via BIOS or micro-code update."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • don't worry (Score:5, Funny)

    by sum.zero ( 807087 ) on Thursday July 06, 2006 @03:52PM (#15669984)
    it's not a bug, just errata ;)

    sum.zero
  • by saleenS281 ( 859657 ) on Thursday July 06, 2006 @03:53PM (#15669995) Homepage
    If you're running raid5 it's probably in an enterprise setup. If so, why aren't you running a dedicated controller? The CPU should have little to no impact on the raid subsystem...

    Seems odd to me that the Inquirer is the only one reporting this. How about a real hardware review site?
    • by andrewman327 ( 635952 ) on Thursday July 06, 2006 @03:57PM (#15670040) Homepage Journal
      I agree with this. For most people, backing up your data every week is a LOT better option for data security. Users who should be using RAID 5 should also have dedicated controllers.


      Still, this is a problem for Intel. Their products are supposed to do what they do extremely well under all conditions. I hope that they find a way to fix this admittedly niche problem.

      • by moggie_xev ( 695282 ) on Thursday July 06, 2006 @04:04PM (#15670111)
        Reading the article it's all about software raid and the performance they get.

        The interesting question is what other pieces of software we run will get unexpectedly bad performance.

        ( I have > 2TB of hardware RAID 5 at home so I was wondering ... )
        • Reading the article it's all about software raid and the performance they get.
          Err, NO! It's about FAKERAID, which is a H/W S/W combo.
          • I've never heard it called FAKERAID; maybe it should just be called FAID? I'll file that one back for use later...

            Anyway, it's not entirely a hw/sw combo. These types of raid controllers are entirely software based. They consist basically of an ata or sata controller and an interrupt handler. When the disk is being accessed in legacy bios mode (ie during an os install, etc) the cpu pulls the interrupt to write to the disk and the BIOS calls the software stored on the card. This software is executed by the BI
          • by SwellJoe ( 100612 ) on Friday July 07, 2006 @01:45AM (#15673505) Homepage
            Err, NO! It's about FAKERAID, which is a H/W S/W combo.

            RAID stands for Redundant Array of Inexpensive/Independent Disks. Nowhere does it say "Controlled By A Dedicated CPU" ("RAIDCBADC"? Doesn't quite sing like "RAID"). Software RAID is as much RAID as a top of the line server RAID controller with RAM and a battery backup. It isn't as fast, sure, and it loads the system CPU, but it is still RAID. Calling it "FAKERAID" is just pretentious and misleading. The data integrity benefits are still present, as are some performance benefits in some circumstances (in fact, Linux RAID is demonstrably faster in some workloads than a top end Adaptec hardware RAID controller, though this is the exception rather than the rule)

            That said, I hate pretty much all RAID controllers (whether software or hardware). Linux software RAID means that I can drop the disks into any PC and access the data. Every RAID controller from Promise, Adaptec, and Tektronic requires me to use their disk format, and if I lose the controller I lose the data until I can get another controller. Sure, in high availability environments, you keep a spare...but with Linux software RAID, every PC in the office is a spare controller. That's my kinda redundancy. I've even had two identical Adaptecs with different firmware lead to pretty massive data loss during a server migration. Thankfully there were good backups. I've never had similar problems moving Linux software RAID disks into a new Linux box.
      • by Albanach ( 527650 ) on Thursday July 06, 2006 @04:28PM (#15670325) Homepage
        You are correct that RAID isn't a backup solution, but incorrect when you say if you're using RAID5 you should be in a data centre.

        What if you have a lot of photos, music or movies - these aren't unusual things these days. I don't want to go rummaging through DVDs to find the picture I want, I want to fire up f-spot and see it there straight away.

        RAID5 provides sensible protection against data loss when using consumer hard disks - software RAID5 is readily available on Linux and hard disks in the 200-300GB range are easily affordable. You can often pick them up for $50 after rebates. So I can get a TB of storage for a few hundred dollars, but to use hardware RAID5 would probably double the cost. Fine if you're an enterprise, but not fine if you're using it at home.

        • by myz24 ( 256948 ) on Thursday July 06, 2006 @04:45PM (#15670516) Homepage Journal
          I agree, it seems on Slashdot (and, actually, among some of my friends) that you're an idiot if you're not running RAID, but you're equally dumb if you're running RAID5 because it's not a backup solution. It's as if there can't be any gray area in the matter. People make it seem like RAID5 has no purpose or benefit and everyone should just be using striping+backup. To me, the point of RAID5 or other redundant RAID setups is that it's your first line of recovery for a disk failure. If a disk fails, you replace it and you've suffered little downtime. If something major happens then yes, you restore from backup.

          My other issue is with people forgetting the idea behind being sensible about what needs to be protected and how much it should cost. There is no reason why my personal collection of photos, music and video should cost me so much. Software RAID is way more than adequate for providing a cheap way to store my files. If data protection AND peak performance are what you need, then yes you need to go full hardware. WHERE'S THE MIDDLE GROUND PEOPLE?
          • I go to school in Washington DC and I live the rest of the time in Southeast Pennsylvania. Both places have been hit by floods recently, which should alert you all to use things other than RAID for backup. RAID does have its place, as a devastating hard drive failure can become a minor 45 minute annoyance.
          • by Nutria ( 679911 ) on Thursday July 06, 2006 @11:55PM (#15673141)
            WHERE'S THE MIDDLE GROUND PEOPLE?

            There's no "middle ground", there's cost-benefit analysis.

            I.e., is it worth my time to spend $50, $100, $200, $500, etc, and an hour a week to mirror a pr0n collection? Some people would say $50 and 5 minutes, and others would say $500 and 6 hours a week. And some would say, "Chunk it. If the disk dies, I'll just download it all again."

        • You are talking out your rear if you think that going to hardware RAID is going to "double the cost" of a TB+ storage box. You can pick up a nice 3ware card for a few hundred bucks, and if you want to go on the cheap, a Tekram or similar card is less than 100 bucks nowadays.

          /me uses hardware raid at work (DataCenter) and at home and is perfectly satisfied with the price/performance ratio...

          • "3ware card for a few hundred bucks "

            The nice 3ware cards for 100 bucks are NOT hardware RAID; they use the CPU to calculate the RAID. The literature might even say they are, but having worked in tech support at a company that does 80% of its business selling servers that use 3ware, I can definitely tell you this isn't the case.

            You CAN get a hardware based 3ware card, but then you are looking at 400-500 bucks (+some for the battery backup unit).

            Plus if you read the parent correctly, 4 300GB hard drives for 50 bucks t
            • Not to be a total pedant, but if you are going to say that because you can buy *A* 300GB disk for 50 AFTER rebates that 4*300GB is going to be 200 bucks, then you have a very perilous grip on reality... That said, he said "double the cost of the box", so even with your $200=1.2TB of storage argument you are still not factoring in the cost of the MB, PSU, chassis, etc... Which will certainly add another few hundred to the cost of the box... So this theoretical box is still like $400-600 before the cost of a 4 cha
            • These 3ware cards are definitely hardware RAID. You are spreading FUD.

              The parallel card is the $110 one on Newegg.

              From Newegg: "StorSwitch switched architecture delivers the full performance benefit of Parallel ATA's point-to-point architecture, up to 133MB/sec per port. On-board processor provides true hardware-based RAID and intelligent drive management functions. BIOS setup utility and 3ware Disk Management (3DM) web-based management software. Bootable array support for greater system fault tolerance."

              http://3ware [3ware.com]
      • by jelle ( 14827 ) on Thursday July 06, 2006 @04:31PM (#15670353) Homepage
        "I agree with this. For most people, backing up your data every week is a LOT better option for data security. Users who should be using RAID 5 should also have dedicated controllers."

        You're generalizing a little too much. For example: I have >1TB storage on my mythtv box (I just like to have a good selection of stuff to watch when I finally get to watch tv, and I'm never at home when the shows I like are being broadcast), and I'm using software RAID5 on that. That is, software RAID5, on shared controllers: altogether seven disks off the mainboard, from a mixture of PATA and SATA connectors. I wouldn't do this on something like a server, but it's plenty fast enough for mythtv. It also gives a lot of protection for the array of disks, and it's a much, much better option than the weekly backup you suggest (first of all, a backup would take ages, cost way more in disks (which wouldn't even fit in the HTPC), and last but not least: without RAID5, if one disk dies, I could lose up to 7 days of recordings...).

    • The point TFA makes is not that a RAID5 setup would be used on a desktop, but that real-world performance seems to suffer on this chip.

      Am I hallucinating, or do I recall something like this happening a long time ago with Intel?
    • Yes, I thought about this too.

      Using RAID5 in software (be it completely in software like Linux MD or Windows Dynamic Disks, or 99% in software, like most onboard RAID controllers out there) isn't a good idea if you want to run an "enterprise" setup. It might be okay for your mom's basement, or for test systems.

      But production systems should be using real RAID controllers, equipped with half a gig of cache memory, a battery backup for the cache in case of a power failure, and a dedicated processor for the RAID5.
      • This isn't really reasonable. Software RAID can be as good as many hardware RAID configurations. Modern CPUs are very, very fast, and in many cases can calculate parity faster than dedicated controllers, at the cost of some CPU overhead. However, the cost of that CPU time can be much less than the cost of the controller. Also, a large disk cache helps reduce the read-parity-write overhead of RAID 5, but most systems have more cache than the drive controller you would pair them with. Finally, if my computer has a UPS, th
        • "Modern CPUs are very, very fast, and in many cases can calculate parity faster than dedicated controllers,"

          Especially given the fact that CPU usage is mostly less than 18% when calculating parity for RAID 5, so compare 18% of your CPU cost and see if it is worth it to lose that much overhead (or buy the next higher model) OR pay $400+ for hardware RAID.

          I'd say RAID 5 is the best solution for home backups: you have redundancy for disk failure, and with monthly or even quarterly backups you should be fine. (who
    • I totally agree. If this is actually a RAID-5 setup, then it requires at minimum 3 drives. Most onboard (Intel) RAID controllers are only set up for 0, 1, 0+1, or 10, and not RAID 5. I don't see how it could possibly be correlated to the CPU. It seems much more likely that if it is a new North/South bridge, the problem is with the IO controller.

      CPU utilization in RAID5 configurations is almost entirely offloaded to the RAID controller.

      The article (including spelling errors) fails to mention a lick about t
      • by ocbwilg ( 259828 ) on Thursday July 06, 2006 @05:21PM (#15670869)
        Most onboard (Intel) RAID controllers are only set up for 0, 1, 0+1, or 10, and not RAID 5. I don't see how it could possibly be correlated to the CPU.

        That's because you can do RAID 0, 1 or any combination of 0 and 1 without needing parity data. The performance killer on RAID 5 (and any other form of RAID that requires parity) is in the XOR operations used to compute and verify the parity information. In order for RAID 5 to perform at a satisfactory rate and not totally bog down your CPU, the XOR calculations should be handled on a dedicated hardware controller, not in software.

        However, for non-parity RAID setups the amount of CPU overhead is almost trivial, so referring to "fake RAID" or "software RAID" with the integrated RAID controllers on most motherboards is a misnomer. That being said, at least one of these articles is talking about servers using third-party RAID controllers.
        • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Thursday July 06, 2006 @10:07PM (#15672657)
          That's because you can do RAID 0, 1 or any combination of 0 and 1 without needing parity data. The performance killer on RAID 5 (and any other form of RAID that requires parity) is in the XOR operations used to compute and verify the parity information. In order for RAID 5 to perform at a satisfactory rate and not totally bog down your CPU, the XOR calculations should be handled on a dedicated hardware controller, not in software.

          No, no, no, no. The processing overhead of parity calculations is minuscule on any remotely modern CPU (even a paltry 300MHz Pentium II has a parity throughput of ~700MB/sec).

          The performance killer on parity-based RAID configurations is the additional disk reads required to calculate the parity, *not* the parity calculations themselves. Which is why modern software RAID is typically faster than hardware RAID until you get into large numbers of disks and/or machines with limited bus bandwidth.

          This "RAID 5 is slow because of parity calculations" meme must die (although, admittedly, it's a good indicator of whether or not someone really understands what's going on).

    • If you're running raid5 it's probably in an enterprise setup. If so, why aren't you running a dedicated controller? The CPU should have little to no impact on the raid subsystem...

      This test is interesting for two reasons:

      • Cheap cluster nodes or desktops - one might not want to shell out $300+ for a dedicated controller
      • RAID code basically just munches data around. If software RAID performance is bad, it is likely that the performance of interpreted and bytecode/JIT languages (such as perl, python, tcltk,
      • Python/perl/java have not suffered in any tests I've seen. I guess that leads me to question these findings even more.
      • My personal "analysis" is that this sounds much more like a DMA issue, either in chipsets, in the processors, or in OSes. Core 2 is doing some speculative prefetching and a quite different cache management scheme, so one naive idea would be that some piece of code or hardware got away with doing things improperly before, and a very rare race condition might have become commonplace. If that's the reason, it might be easy to fix. Of course, it might also mean that the prefetching or cache sharing between the c
      • Actually the market has become so diluted with everyone jumping into the RAID game (thanks to Highpoint Tech and Intel with their hybrid solutions) that it's becoming increasingly difficult to discern the true hardware RAID controllers from the hybrid models. Of course there are the companies that won't so much as touch software RAID (namely 3ware), but Promise, Koutech, and even Adaptec are all very slick with their descriptions of the controllers and make it unclear as to whether or not their products ar
        • "Of course there are the companies that won't so much as touch software RAID (namely 3ware)"

          Until the 9500 (and MAYBE 9000) series, ALL of 3ware's offerings were software RAID. This is why you couldn't initialize a RAID array until the OS booted (and 10 minutes after that); it even says this in the manual.

          I worked in tech support for a company that sold these for 2 years and talked with the 3ware techs quite extensively on the subject.

          As a company they don't officially admit it, but from a technical standpoin
    • Not to mention that most workstations and home PCs don't run RAID 5. If the Core/Core2 chip sets are targeted for machines that don't run RAID, it's not a big deal. If you are running RAID 5, it's likely in a server environment where you would probably have a RAID controller and an Opteron or Xeon based chip.

      -Rick
      • most workstations and home PCs don't run RAID 5.

        I, for one, intend to run RAID 5 on my next home system, and not just for Geek Pride. Three hard drives with the advantage of mirroring for reliability, striping for performance, and the loss of only 1/3rd of my hard drive space for this redundancy, versus 1/2 for mirroring alone, and no redundancy with striping alone. And since I'm doing this for improved performance, I want that performance. Three modern moderate performance hard drives are hardly expens
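
        As a hedged aside, the capacity arithmetic behind that 1/3-versus-1/2 claim works out as follows (the 300GB drive size is purely an assumed example):

            #include <stdio.h>

            /* RAID 5 across n equal drives gives up one drive's worth of
             * capacity to parity, so (n - 1)/n of the raw space is usable;
             * a two-way mirror keeps only 1/2. */
            int main(void)
            {
                const double drive_gb = 300.0;  /* assumed drive size */
                printf("RAID 5, 3 drives: %.0f GB usable of %.0f GB raw\n",
                       2 * drive_gb, 3 * drive_gb);  /* 1/3 lost to parity */
                printf("Mirror, 2 drives: %.0f GB usable of %.0f GB raw\n",
                       1 * drive_gb, 2 * drive_gb);  /* 1/2 lost to the copy */
                return 0;
            }

        With three 300GB drives, RAID 5 keeps 600GB of 900GB raw, versus 300GB of 600GB for a plain mirror.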

      • If the Core/Core2 chip sets are targeted for machines that don't run RAID, it's not a big deal. If you are running RAID 5, it's likely in a server environment where you would probably have a RAID controller and an Opteron or Xeon based chip.

        If you'd read TFA you'd see that the problem has shown itself with a Woodcrest (the next Xeon) CPU using an IBM RAID controller.

    • Because it's often slower to do so. We ran tests on a good Adaptec U320 RAID controller about a year back, and though CPU usage was good, we got much better performance out of Linux softraid5. I would suspect this was because the host CPU was faster than the one on the controller.

      Not to mention there is a huge cost savings in going with a softraid solution.

      • It probably should be pointed out that many software RAID systems use a dedicated channel for every drive. RAID-5 on a SCSI hardware RAID adapter doesn't do this. The more drives you operate on the same channel, the bigger the issue can get. This could be an easy explanation as to why software RAID would be faster in your circumstance.
        • I dunno about that; with U160 and U320 you would have to use at least 5 drives or more to max out the bandwidth that U160 provides, and upwards of 8 or more for U320.

          No RAID systems use one channel per drive; maybe you are thinking of connectors, in which case that would be SATA, and possibly IDE, to maximize bandwidth.

          Most RAID cards have more than one channel per card, but multiple connectors per channel (7-15 on SCSI, up to 12 that I have seen on SATA, and 2 per channel for IDE).
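
          As a rough editorial check on those drive counts (the ~35MB/sec sustained transfer rate per drive of that era is just an assumption):

              #include <stdio.h>

              /* Drives needed to saturate a shared SCSI channel, assuming
               * each drive sustains roughly 35 MB/sec. */
              int main(void)
              {
                  const double drive_mbs = 35.0;  /* assumed per-drive rate */
                  const double channel_mbs[] = { 160.0, 320.0 };  /* U160, U320 */
                  for (int i = 0; i < 2; i++)
                      printf("U%.0f: ~%.1f drives to fill the channel\n",
                             channel_mbs[i], channel_mbs[i] / drive_mbs);
                  return 0;
              }

          That works out to roughly 5 drives for U160 and 9 or so for U320, in line with the figures above.
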
    • Seems odd to me that the Inquirer is the only one reporting this.
      Do you consider it equally odd when a news article is only reported in Pravda, The Sun, the Washington Times, or WorldNetDaily?
    • Not any more. With even consumer-level boards offering embedded RAID5 and RAID 1+0/0+1 support at the $100-$150 price level, and with hard drives being uber-cheap nowadays, there is absolutely no reason you won't see an explosion of growth in the use of RAID5 in at least higher-end home and SOHO machines.
    • If you're running raid5 it's probably in an enterprise setup.

      I have installed a software RAID5 at work for online backups of workstations. 250GB SATA disks cost nothing (~80?); it'd pain my anus to fork out a kilobuck or two to pilot them. Sorry if that's not enterprisy [thedailywtf.com] enough for you!
    • by temojen ( 678985 ) on Thursday July 06, 2006 @05:20PM (#15670865) Journal
      If so, why aren't you running a dedicated controller?

      Because if your dedicated controller goes you have to find the same make & model of controller. On no notice. Possibly a few years after that make and model has been discontinued.

      With software RAID-5, any controller that works with your host bus (PCI) and HDD bus (ATA, SATA, or SCSI) will do just fine.

    • Because hardware controllers add a failure point.
      If you're not going for ultimate disk performance, and your CPU isn't overly burdened, the use of software RAID is sensible.
    • If you're running raid5 it's probably in an enterprise setup. If so, why aren't you running a dedicated controller? The CPU should have little to no impact on the raid subsystem...

      The first of the two referenced articles talks about them using Woodcrest CPUs (the Conroe-based replacement for the Xeon server CPUs) on IBM systems that used IBM ServeRAID controllers. They obviously aren't talking about the Intel RAID controller integrated into the chipset. And while I agree that the CPU should have little
    • why aren't you running a dedicated controller?

      Because such controllers use a proprietary disk format. So when (not if) your controller breaks, you need to get a new one to be able to read your data. Whereas when using a free (libre) operating system and software raid, you'll always be able to get to your data.
    • RAID5 is extremely common in the home media appliance/server area. It's a small part of the market, but also a part of the market fairly likely to buy a core duo - they're lower-power than most competitors and they have enough horsepower to do full-HD video.
  • Problem (Score:4, Insightful)

    by laffer1 ( 701823 ) <luke@@@foolishgames...com> on Thursday July 06, 2006 @03:56PM (#15670031) Homepage Journal
    I don't get what the problem is. Are there specific instructions used often in raid 5 algorithms that are slow on the new chips? Is it bus contention?
    • My guess is it's speed throttling introducing delay into the occasional execution of these instructions, whereas the chip is running full out when running through an artificial benchmark. That's pure speculation on my part though.
    • Re:Problem (Score:3, Interesting)

      by TheRaven64 ( 641858 )
      Software RAID 5 does:

      Load byte 1.

      Load byte 2.

      XOR bytes 1 and 2.

      Store result.

      There are a few things that could be wrong here. The XOR performance could be bad. This seems a bit unlikely, but XOR is not an incredibly common operation, so it wouldn't slow down too much else.

      It could be that the pattern of data was bad for cache usage. This would be slightly odd, since it should be a series of 4K linear blocks.

      It could be low I/O performance between the chip and the on-board controller. This seems the
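
      For concreteness, a minimal sketch (editorial, not from the poster) of that load/XOR/store loop as it might look over a full stripe; the chunk size and number of data disks are assumptions:

          #include <stddef.h>
          #include <stdint.h>
          #include <string.h>

          #define CHUNK 4096  /* assumed stripe-unit size in bytes */
          #define NDATA 3     /* assumed number of data disks      */

          /* Parity for one stripe: load a byte from each data chunk, XOR
           * them together, store the result. The access pattern is a linear
           * streaming pass, so cache behaviour and bus/IO bandwidth matter
           * far more than the XOR unit itself. */
          void stripe_parity(const uint8_t data[NDATA][CHUNK],
                             uint8_t parity[CHUNK])
          {
              memset(parity, 0, CHUNK);
              for (int d = 0; d < NDATA; d++)
                  for (size_t i = 0; i < CHUNK; i++)
                      parity[i] ^= data[d][i];  /* load, XOR, store */
          }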

      • XOR is very common (Score:4, Informative)

        by HaeMaker ( 221642 ) on Thursday July 06, 2006 @04:24PM (#15670300) Homepage
        You use XOR to clear a register. XOR CX, CX sets the CX register to 0. It is faster than MOV CX, 0.
        • The actual opcodes are certainly also more compact, which means more room in the L1 instruction cache. I'm not sure how relevant the speed argument itself would be these days; I would have guessed that they would use 1 cycle on average in any case, but maybe with different characteristics regarding register file usage and whatnot. As XOR has been "The" way to do it on x86, I guess it should continue to work well. On the other hand, Netburst suddenly made shifts relatively expensive. As this (XOR y, y) is so
      • I'd be more likely to bet there are SMP locking issues in the driver. The performance of XOR is negligible in the equation here.

        --Joe
      • Re:Problem (Score:3, Interesting)

        by ivan256 ( 17499 )
        Seems more likely to be a scheduling issue to me...

        Core 0 loads byte 1, Core 1 loads byte 2, Core 1 or Core 2 has a cache miss on the XOR...(Do the cores share a cache?) Or it could be a locking problem. XOR is very common, and it would surprise me if it was slower than on previous intel chips.
  • by b00m3rang ( 682108 ) * on Thursday July 06, 2006 @04:04PM (#15670113)
    You should be using a controller with a dedicated processor, anyway.
    • I disagree. I've had a lot more success with open source software RAID than with hardware RAID. Usually the RAID overhead is relatively minimal anyway. With hardware RAID, if your raid card somehow dies (it happens more often than you'd think) you'd have to get the exact same one... which is usually hard when the company that created your raid card went out of business a few years ago. At least with open source software RAID, you don't have to worry about those kinds of problems.
      • which is usually hard when the company that created your raid card went out of business a few years ago

        Professional IT doesn't work like that. You have a maintenance contract on your machine, usually from the machine manufacturer itself (like IBM, HP, DELL, whatever floats your boat). You buy this maintenance contract depending on how long you will need the machine (they're usually available from 3-5 years).

        You renew the machine before the contract runs out. IBM, HP, DELL running out of business seems very

    • RTFA

      It happens with other hardware-based, non-Intel boards as well. IBM ServeRAID, as an example.

      Another poster mentioned that it could be DMA related or the way the new speculative branch prediction algorithms work. Perhaps a race condition exists that rarely happens with earlier chips but is triggered often with the newer ones, which would slow things down.

  • by Anonymous Coward
    From TFA:
    The reason was that there were severe problems when Woodcrest was paired with a 1E RAID field when using IBM ServeRAID controllers. The problems didn't occur just in benchmarking, it was the every-day usage model that produced unexpected errors.

    ServeRAID controllers aren't some cheapo CPU-based RAID; it looks like this might be a more serious problem.
  • Timing problem (Score:4, Insightful)

    by toybuilder ( 161045 ) on Thursday July 06, 2006 @04:26PM (#15670312)
    This sounds like a timing problem -- the processors are too fast, causing the system to slow down.

    There was a similar problem that I had to wrestle with on Linux when running 3Ware RAID controllers w/ RHEL3 on fast dual-processor servers. When battery-backed write caching was turned on, the fast acceptance of IO requests (by the CPUs and then by the hardware RAID controller) led to awesome sustained performance for short bursts, but under constant load it would suddenly hit a wall and then IO would practically hang. (https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=121434)

  • by DysenteryInTheRanks ( 902824 ) on Thursday July 06, 2006 @04:27PM (#15670318) Homepage
    Can Intel afford to make a misstep now, even with the small subset of users running RAID 5 systems?

    No. No, it cannot. Sell your stock. Rip the CPU out of your boxen. One hundred ten billion dollars in market capitalization has disappeared in a flash with the publication of this groundbreaking article in the Inquirer.

    Intel has signed its own death warrant. As goes RAID5, so goes the world.
  • The Inq (Score:2, Flamebait)

    by yem ( 170316 )
    Every Inquirer story I've clicked through to from Slashdot has been subsequently debunked.
    Anyone got independent verification of this startling discovery?
  • I tried RAID5 once with 5 hard drives, but one drive failed while I was swapping out another and the whole thing went kaput, so I had to do the image all over again.

    For safety's sake, I just used 6 hard drives with 2 pairs striped and then had those pairs mirror each other, with 2 extra that would stand in for a drive when one failed. So in theory I could lose 3 drives instead of two and still keep my data.

    And yes... This was on my personal setup for no good reason other than a big ego, but in reality Raid5 isn'
    • Score:0 more like it. Hey vertinox, if you run RAID5 and you lose two drives that have both copies of one chunk, you have lost that chunk.

      2 chunks minus 2 chunks equals zero chunks, get it?

      "...in reality Raid5 isn't that useful ore efficient unless you are using enteprise applications that requires 100% uptime and you have way more than 3 hard drives (just in case two of them fail on you at once for no particular reason) and then you should have that server mirrored by another one s
  • by jgarzik ( 11218 ) on Thursday July 06, 2006 @05:32PM (#15670952) Homepage
    This crap does not happen on Linux, on the same hardware. Most likely *BSD is not affected either, though I have not tested such.

    It's almost a certainty that this is a software problem of some sort. Driver bugs are the most common source of "hardware" instability, particularly on Windows. Drivers are often written by clueless intern-level engineers, and quickly forgotten once they initially pass basic Windows hardware quality tests.

    Jeff, the Linux SATA driver guy

    • Yeah, so if this is software RAID, what OS are they using? XP I guess?

      Why is Intel's hardware bad just because Windows performs poorly on it?

      I'm a little surprised that, all the way through BOTH articles and this thread, it took this long for someone to ask if it was a HARDWARE or SOFTWARE issue.
       
      As Carlos Mencia would say, "Dee Tuh Dee".
    • Indeed. A driver issue could easily be causing these problems, particularly if it's a new chipset and so forth as well.
  • Interesting post from: www.warp2search.net [warp2search.net]

    Intel hints that Conroe is going to be released at B-2 stepping as the Intel Core 2 Duo processor. As for the previous version, a problem was found that made the system fully loaded. It's only solved in the new stepping. We don't encourage anyone to buy engineering samples from the web. The retail version is going to be released at the end of this month, and it's much more stable.

  • I just built a new file server for home use [monkeysushi.net], using four 300GB SATA drives and the slowest socket-939 CPU I could get (1.8GHz Athlon 64) in a RAID5. I'm getting pretty close to the limits of the individual drives during testing, with just over 100MB/sec writes and just under 200MB/sec reads. CPU usage remains comfortably under 50% (that's the spikes; average is more like 20%) in that configuration, and dmesg reports my RAID6 checksumming speed at over 4GB/sec.

    With the drives on the PCI SATA controller, I'm bu
