Data Storage Technology

SATA vs ATA? (111 comments)

An anonymous reader asks: "I have a client that needs a server with quite a bit of storage, a reasonable level of reliability and redundancy, and all for as cheap as possible. In other words, they need a server with a RAID array using a number of large hard drives. Since SCSI is still more expensive than ATA (or SATA), I'm looking at using either an ATA or a SATA RAID controller from Promise Technology. While I had initially planned on using SATA drives, I have read some material recently that has made me rethink that decision and stick with ATA drives. What kind of experiences (good and bad) have people had with SATA drives as compared to ATA drives, especially in a server-type environment?"
This discussion has been archived. No new comments can be posted.

  • by b00m3rang ( 682108 ) on Friday June 18, 2004 @06:37PM (#9468141)
    Promise and Highpoint (and any other cheap RAID card) in my experience are little more than IDE cards with RAID software that eats up CPU cycles. Recovery options for a lost array member are usually limited and unreliable. If you want reasonable reliability, go with one of the drives that uses SCSI hardware adapted to a SATA interface (such as the WD Raptor). I would personally recommend Adaptec for your host controller needs, as they do the RAID in hardware.
    • by Cthefuture ( 665326 ) on Friday June 18, 2004 @06:46PM (#9468228)
      3ware is another (some say superior) hardware RAID controller.

      One thing about SATA is that it's easy to remotely mount the drives. You can easily put them outside the machine (in a rack or whatever) for enhanced cooling. They're kinda like really fast firewire drives.
      • I have to second the 3ware vote. The drivers for these cards have been in the Linux kernel since 2.2, and they come stock in Windows 2000 and on, so there's no having to load a driver disk to get things installed, and the performance is great to boot.
      • I personally chose a 3ware SATA controller running an 8-drive (7 + hot spare) RAID-5 array and I haven't regretted it since... I work in a big university bookstore running Red Hat ES3 (please don't flame, I know it's junk, but it's supported junk and my boss insisted). The RAID array was used to increase performance to a moderately acceptable speed in our database I/O... it does its job very well.

        • I've played with many a distribution over the last ten years, and when it came time to choose a web and file platform at work, RHEL was the best value in my opinion. I pay for one update subscription and feed all my other RHEL installations from that. I'm sincerely curious as to how the product has disappointed you. I find it to be rock solid and well tuned. And, no, I have no affiliation with RedHat other than my single subscription, for which we paid full freight.
          • He probably just said that to cover his ass. Happens all the time.

            Second your thoughts about RHEL being a decent system, though. Would our systems be running it if I were in charge of the budget? Probably not. Is there anything wrong with RHEL? Nah.

            • Yeah -- my thoughts, too. I just get tired of hearing people labeling the work of talented developers as "junk." Especially when the people being criticized have made many contributions to OSS, and the ones hurling the insults just have their panties in a wad because they couldn't get their ripped-off MP3s to play, or their porn viewer did not have the right codecs installed by default. I suspect most people who so flippantly disparage the work of others have never written a piece of software for public co
    • by UserChrisCanter4 ( 464072 ) * on Friday June 18, 2004 @06:55PM (#9468321)
      Don't confuse the fact that Promise produces on-mobo RAID "hardware" with the impression that all of their equipment is like that. Promise makes several truly hardware-based SATA and ATA cards, as well as a few enclosures that take numerous (4-16) IDE drives, do RAID in hardware, and interface to a server over U160 SCSI. They are perfectly capable of making hardware RAID solutions, provided you're willing to buy something other than a $60 "RAID" card.

      Their only major drawback, the last time I looked at their hardware, was that the Linux drivers tended to be binary-only and tied to specific distribution versions (works on Red Hat but not SuSE, etc.), which may or may not matter depending upon the OS you're choosing to run.

      I don't work for them, and I don't even use their equipment in any of my stuff (a buddy of mine runs an SX4000 card, though, so I have seen them in action), but I do get a bit peeved when someone dismisses a company's higher-end solutions because of (admittedly) bad experience with their low-end kit.
      • by afidel ( 530433 ) on Friday June 18, 2004 @08:15PM (#9468986)
        Their IDE RAID card, the SuperTrak SX6000, does REALLY poorly at some tasks. It eats CPU and, judging from mailing lists, has a lot of problems recovering from drive failures. For a good comparison to other ATA RAID cards, see this [storagereview.com] Storage Review writeup on it.

        • We've had trouble getting tech support for Promise equipment recently.

          This is uncertain, but it seems that there is some bug in Windows XP which causes RAID cards that don't have their own CPUs to malfunction. According to a HighPoint technical support rep, the RAID adapter card does not get enough CPU time, and writing to the drives times out, breaking the RAID array. This fits with our experience.
      • The latest 2.6 kernels support Promise SATA with the sata_promise module. I'm using the TX2plus with a 160GB SATA drive in a hotswap enclosure. The only drawback to the free driver is that it doesn't yet support the PATA port on the card.

        Promise has released the source to their SATA card drivers, but it's written for the 2.4 kernels and at least the build process would need to be updated for them to compile cleanly in 2.6. In any case, the only reason to do so would be to get the PATA port working, and
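
        For illustration, a quick way to check that the free sata_promise driver binds to the card - a minimal sketch, assuming a 2.6 kernel with the module built and available:

            modprobe sata_promise              # load the libata-based driver
            dmesg | grep -i -e promise -e ata  # kernel log shows the ports and attached drives
            lsmod | grep sata_promise          # confirm the module is loaded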
    • There are good P-ATA cards as well..
      the thing is just that they don't come cheap.

      Cheapo cards are just an easy way to add more (S/P)ATA ports.

    • Avoid Promise - seek 3ware [3ware.com].

      3ware has great support and superior benchmarks to the Promise, et al. equivalents.

      Do your own homework/research, but 3ware appears to be the clear performance leader.

    • Why is this post Insightful? The poster didn't even RTFA :/

      Click on the Promise link, you insensitive clod; ALL FastTrak SX4xxx series controllers are HARDWARE RAIDs :/. Plus:
      http://techreport.com/reviews/2002q4/ideraid/index.x?pg=1 [techreport.com]
      Promise SX4xxx is the best in the low budget area (beating Adaptec and 3ware).
  • using Promise "to give you headaches" controllers. If you're going to use (S)ATA, you really should give 3-Ware [3ware.com] a look.
  • It really depends what you're after. It's a tradeoff between performance+upgradeability and assurance of stability. ATA is more mature (though SATA is still very good), but if you are willing to take the tiny risk, your client will be glad you chose SATA when he starts putting some load on the server.
  • by nneul ( 8033 ) * <nneul@neulinger.org> on Friday June 18, 2004 @06:41PM (#9468181) Homepage
    Nice idea, but poor implementation: they have had a tendency to come loose easily on several servers we have.
  • by cymen ( 8178 )
    Obligatory 3ware post...

    www.3ware.com [3ware.com]

    does raid in hardware unlike most (all?) promise, yadda yadda, software raid faster than battery-backed hardware, yadda yadda yadda, do you really need hot swap? if not, software raid, yadda yadda
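
    For illustration, here's a minimal Linux software-RAID sketch along those lines (assuming mdadm and the md driver are available; the device names, filesystem, and mount point are hypothetical):

        # create a RAID-5 array from three disks plus one hot spare
        mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
            /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

        # put a journalled filesystem on it and mount it
        mke2fs -j /dev/md0
        mount /dev/md0 /srv/storage

        # check array health and rebuild progress
        cat /proc/mdstat
        mdadm --detail /dev/md0

    The trade-off, as noted above, is that the parity math runs on the host CPU, and there's no battery-backed cache or hot-swap enclosure unless you add them separately.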
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Friday June 18, 2004 @06:43PM (#9468206)
    Comment removed based on user account deletion
    • Warranty != MTBF. Nor does it guarantee uptime. The newest and cheapest cars in America always start with the longest warranties. That doesn't mean they're going to be more reliable than any other vehicle. They want to add value in your mind, but not affect the actual value of the product. Few car owners keep their cars for 10 years/100k miles, and few servers are still in production for 5 years, and I don't want to have my RAID degraded for the time it takes to get a factory replacement if at all possible.

      A
      • I'd be digging for 30-pin SIMMs and an ISA network card (I know I had one in the basement somewhere, damn it!).


        I have a few 486 motherboards with PCI slots and 72-pin memory slots I can sell you. Some even sport 5x86 CPU chips.
      • Warranty != MTBF. Nor does it guarantee uptime.

        You don't buy 5-year warranty drives because they won't fail (they will). You buy them because WD is REALLY good about sending advance replacements. Seagate is less good (they want the bad drive in their hands first).

        Get yourself 4 or more of these drives and set up a RAID5 array with a hot spare. If a drive fails, you get a new one from WD, swap it for the old one, send the old one back in the same mailer box, and continue, with no downtime - for five

    • I'm not aware of an ATA drive that even comes close to the 5-year warranty of WD's SATA drives.

      Or, if it strikes your fancy, no PATA drive matches the Raptor 10K RPM SATA drive. Imagine two of those striped... yummy...
  • Head on over to 3ware [3ware.com] and select the RAID controller you need... I've got a 7506-4LP in my server at home and it simply kicks ass.
    • Absolutely; we have a 3TB server I recently set up at work with a 3ware 9500-12 SATA RAID card. The card is expensive (~$700), but well worth it for the supported RAID levels, management software, drivers, and support that only 3ware currently offers in this market.
      • Is their SATA card any better with simultaneous reads/writes than their PATA cards? I've got 4-5 of them at work, and especially over NFS the performance blows. Here's what one of my colleagues wrote on the subject:

        Write performance:
        local writes are 27 MB/sec (~5MB files, unloaded CPU)
        If both RAIDs are written together, performance cut in half
        (not what we were hoping...)
        local writes to local disk (/scr0) are 40 MB/sec
        NFS writes to RAID are 200 times slower (1 minute per file)
        NFS writes to single disk are
          • No, the PATA cards suffer from being PATA: performance drops when two drives share a channel, and the channel is limited to a 100/133 MB/sec transfer rate. It shouldn't be a full half, though, unless the drives are fast enough to max out the interface speed (which they aren't; the fastest 15K RPM SCSI drives are about 109 MB/sec internal transfer).

          I haven't experienced any issues like that, and can confirm that the hot-swap and hot-spare capabilities work as expected on the 9500-12. I have not performed any benchmarks
        • Comment removed based on user account deletion
    • I'll second/third the vote for 3ware. Not only is their stuff fast (I'm using RAID 1+0 on a 7506-4LP card now) but it's also very reliable. On an older 2-drive card w/ a RAID-1 mirror, I had both drives fail (not at the same time, of course) over the course of about 3 years. Both were easily replaced w/o any data loss, and eventually the RAID was running on 2 drives that weren't originally part of the array... the 3ware card made the replacement a breeze.
  • You're going to be hard pressed to build a system that's as reliable and as inexpensive as this [apple.com]. The whole thing, drives, controllers, power supplies, everything, is available for about $3/GB, and it plugs into any host computer.
    • Re:Buy a RAID (Score:5, Informative)

      by Guspaz ( 556486 ) on Friday June 18, 2004 @09:13PM (#9469333)
      You may be right about building a system as reliable, and it'd certainly be hard to compete with it from a size standpoint, but you are totally wrong about it being inexpensive.

      Apple's 3.5TB system costs $10,999 US. If you were to build a system that comprised 9 Hitachi 7200RPM 400GB drives, you would achieve 100GB more storage space for $3,600 plus the cost of the server it was hosted in. Throw in $750 for a high-end RAID card and $1,000 for a server to enclose and handle it, and you're still priced at under HALF the price of Apple's solution.

      So, in conclusion, Apple's solution is many things, and is certainly VERY sexy and attractive. But inexpensive compared to a self-built solution it is NOT.
      • Hrmmm... the Xserve RAID uses ATA -> 2Gb/s Fibre Channel, has redundant power supplies, hot-swap drives and power, and better RAID management software than a single nice RAID card. I am by no means an Apple fanboy, but you are ignoring some of Apple's value-adds (or Sun's, or IBM's, or EMC's, etc.). While this may not be an ideal solution for the asker, it works well for any biz that values reliability over initial cost. After all, the actual hardware cost is only about 25% of TCO. Personally, I don't think th
      • If you were to build a system that comprised 9 Hitachi 7200RPM 400GB drives, you would achieve 100GB more storage space for $3,600 plus the cost of the server it was hosted in.

        Plus the power supplies (dual redundant) and cooling systems (dual redundant) and controllers (dual redundant) and the case to house it all!

        That's an awful lot of stuff to just hand-wave away. Not to mention the time and labor required to build and support the fucking thing.

        But inexpensive compared to a self-built solution it is
        • I never said it wasn't reliable. I just said it wasn't inexpensive.

          I had budgeted $1,000 for the rest of the server. A fast processor isn't required since we have all-hardware controllers. As long as the CPU isn't saturated under heavy load, all is good.

          You think you can't fit redundant power, cooling, and controllers into that $1,000? Fine, add another $1,000. $2,000 even. That's $3,000 total for the server plus the $3,600 for the drives. That's $6,600 in total, still a good $4,400 cheaper than Apple's solution.
    • It wasn't that hard; there's the Gateway 840 2TB Serial ATA RAID Enclosure [gateway.com]. It only holds 12 drives vs. the Xserve RAID's 14, but with 12 250GB drives it's only $7,528 for 3000GB, or roughly $2.50/GB. The 840's drives are SATA rather than PATA, and the external interface is Ultra320 SCSI vs. the more expensive Fibre Channel, which also makes it cheaper to own. You could probably get it to be even cheaper by buying fewer drives from Gateway because, unlike the Xserve RAID, the 840 comes with drive carriers for all
      • It only has one RAID controller. You don't have the option of adding a second, at any price. When the controller fails, you lose data and the RAID goes down.

        It does not have either redundant or hot-swap fans. They're not available at any price. When a fan fails, you have to shut the RAID down to replace it... assuming you catch it in time. It doesn't appear to come with any sort of monitoring system that informs you of the health of things like the controller, the power supply, and the fans.

        Finally, it doe
        • It only has one RAID controller. You don't have the option of adding a second, at any price. When the controller fails, you lose data and the RAID goes down.

          Frankly, I don't see the failure of chips to be a big issue. Moving parts, sure. And if you're that concerned, you shouldn't count on one storage device anyway. There's always *some* one thing that can fail inside one box.

          It doesn't appear to come with any sort of monitoring system that informs you of the health of things like the controller, the po

          • Frankly, I don't see the failure of chips to be a big issue.

            Spoken as only somebody who's never had a RAID controller fail on them can.

            And if you're that concerned, you shouldn't count on one storage device anyway. There's always *some* one thing that can fail inside one box.

            Well... actually no. In a well-built RAID--not just Apple's, but anybody's--there is no single point of failure. Two totally separate and redundant power supplies. Two totally separate and redundant controllers. Two totally separa
  • It's all in the name (Score:4, Informative)

    by linuxwrangler ( 582055 ) on Friday June 18, 2004 @06:53PM (#9468298)
    I bought a machine with a controller from Promise, and I think I know how they got the name. They kept promising me things.

    I was using SuSE 8.2 and they had no drivers, but they "promised" that they would be out by the end of the month. Of course I could compile them myself, but since that required installing the OS, which was impossible without the drivers, it meant finding another machine and dealing with other problems.

    After about 3 months of "promise" after "promise" (this month for sure), they told me the drivers would be out "in a couple months". The longer I waited, the further away the drivers were scheduled.

    It wasn't like I had grabbed 8.2 when it was released either. Promise's Linux "support" was way behind and they basically told me that Linux is their poor stepchild that gets leftover resources when Windows stuff is done.

    I contacted my vendor and had them swap the Promise card for a 3-ware. I tossed in the disk and loaded SuSE without any need for downloading or compiling drivers. I'm running RAID-5 on 4 120GB drives. I had a drive fail a couple months back but just hot-swapped/rebuilt it with no problem. The machine was up for about a year before I had to shut it down to replace a failed tape drive but I've had no trouble with the 3-ware.


    • Why doesn't Promise abstract their cross-platform code from the Linux and Windows device driver "glue" code? Then they could just port the Linux and Windows specific code once and all their device drivers' platform-independent code should "just work". (but keep your fingers crossed anyways) ;)

      I know Linus does not like cross-platform wrapper crap code in his kernel, but there is nothing preventing Promise from doing this outside the Linus tree or wrapping the Linux device driver API around the Windows devi
  • by caseih ( 160668 ) on Friday June 18, 2004 @06:59PM (#9468351)
    Apple's Xserve RAID seems to be one of the cheapest dollar-per-gigabyte solutions I've ever seen. They use fast ATA drives. Although ATA drives can have problems, Apple uses only the best drives from each lot (hence they are a bit more expensive than if you bought the disks from a jobber). The RAID is a true hardware RAID, allowing the creation of a hot spare, e-mail notification, etc. The configuration software is Java and runs on any platform. The RAID unit itself is Fibre Channel, so it can hook to servers running any OS and looks just like a big SCSI disk. We have our arrays set up such that we're mirroring 2 physically separate arrays together (each RAID 5 + hot spare), so we can lose up to 4 disks without any loss of data. Each array is about 2 or 3 raw terabytes.

    I would avoid the other controller cards you mentioned for the reasons the other posters mentioned. The Xserve RAID is all the benefits of a good SCSI backplane (RAID, monitoring, etc.) for a fraction of the cost.
  • I've had great luck with RAIDCore's SATA controllers -- very fast.

    -Bill
  • by T-Ranger ( 10520 ) <jeffw@cheMENCKENbucto.ns.ca minus author> on Friday June 18, 2004 @07:39PM (#9468704) Homepage
    .. And their marketing paper comes in a Tyvek envelope! (I don't work for them, nor am I even a customer.)

    StorCase Technology [storcase.com]

    RAID boxen with ATA on the inside, SCSI and/or FC on the outside. Seemingly incredible warranties of as long as 7 years.

  • by DocSponge ( 97833 ) on Friday June 18, 2004 @07:42PM (#9468733)
    You may want to read this whitepaper [sr5tech.com] and see what they have to say about using ATA or SATA drives in a RAID configuration. It is possible, due to the use of write-back caching, to lose the integrity of the RAID array and lose your data, eliminating any initial cost benefits. To quote the paper:
    Though performance enhancement is helpful, the use of write back caching in ATA RAID implementations presents at least two severe reliability drawbacks. The first involves the integrity of the data in the write back cache during a power failure event. When power is suddenly lost in the drive bays, the data located in the cache memories of the drives is also lost. In fact, in addition to data loss, the drive may also have reordered any pending writes in its write back cache. Because this data has already been committed as a write from the standpoint of the application, this may make it impossible for the application to perform consistent crash recovery. When this type of corruption occurs, it not only causes data loss to specific applications at specific places on the drive but can frequently corrupt filesystems and effectively cause the loss of all data on the "damaged" disk.
    Trying to remedy this by turning off write-back caching severely impacts the performance of the drives, and some vendors do not certify the recovery of drives that deactivate write-back caching, so this may increase failure rates.

    Losing data on an ATA RAID array happened to a friend of mine, and I wouldn't advise using something other than SCSI without understanding the ramifications.

    Best regards,

    Doc

    I made a new years resolution to give up sigs...so far so good!

    • by FueledByRamen ( 581784 ) <sabretooth@gmail.com> on Friday June 18, 2004 @08:45PM (#9469194)
      SCSI drives tend to have the same size (or larger) caches as [S]ATA drives. You can disable the write-behind caching on any drive fairly easily using hdparm. ( hdparm -W 0 /dev/... to disable, -W 1 to enable).

      Of course, if you are using a hardware RAID controller, you'll have to figure out how to tell it to disable the write-behind cache on the drives under its control. Perhaps it will be smart enough to figure it out if you use the hdparm command on the logical device it presents to the operating system, but I'd certainly want to read the manual and find out.

      I know from experience that Windows 2000 automatically disables write-behind caching on drives in software RAID arrays (and dumps some Informational messages in the system log to let you know what's going on).
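
      For illustration, a minimal sketch of doing that across the member drives of a software array (device names are hypothetical; this only reaches drives the OS sees directly, not ones hidden behind a hardware controller):

          # disable write-behind caching on each member drive, then query the setting
          for disk in /dev/hde /dev/hdf /dev/hdg /dev/hdh; do
              hdparm -W 0 "$disk"   # turn the write-back cache off
              hdparm -W "$disk"     # print the current write-caching setting
          done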
      • Actually, I just had a 120GB Maxtor drive that I used to replace a failed 60GB one give me kernel messages to the effect of "flush cache command failed", meaning the disk refused to obey when the kernel told it to flush the write-back cache (probably to make Windows benchmarks look better). Why should I trust this drive when I tell it to disable the write-back cache entirely?

        Furthermore, if I am using a hardware RAID, how do I use hdparm? And finally, ATA drives have write-back ON by default, SCSI d
      • hdparm -W doesn't work on SCSI drives anyway..
        In fact, most hdparm functions don't work on SCSI.. even though features like turning off the disk motor were supported on SCSI first.
    • by Guspaz ( 556486 ) on Friday June 18, 2004 @09:25PM (#9469400)
      This would be why professional ATA RAID solutions have battery backup. Somebody previously linked to Apple's Xserve RAID solution. It has enough battery backup power built in to keep the caches going for 24 hours. If you can't find a power source for your server within 24 hours of a power failure, your data obviously isn't that important.

      First off, I'd assume that if your data is so important, you're going to have a UPS and generators. If you don't have a generator and the power fails, great, you've got 24 hours to purchase one. A 1500W generator costs about $450 US, and should be more than powerful enough to run your server AND network connectivity. You'll not only keep your server happy during a power failure, you'll be able to keep using the server.

      Anyhow, this post started out about the battery backup. What you stated as a major problem isn't one, since serious ATA RAID solutions have battery backup.
      • I don't believe he is speaking about a general power failure, but a failure of a drive that has, say, 8MB of cache memory.
        From the standpoint of the OS, the data has been written to the HD, but it is actually "lost".
        Of course, if you were building a real server (i.e. not for hosting large amounts of MP3s, porn, movies, etc.) you'd probably disable the cache on the drives and let the RAID controller do its job (assuming you bought a RAID controller with cache).
        But a better solution, for a storage-only server, would
        • That's exactly what Apple's Xserve RAID does; it disables the onboard cache on the drives and uses the RAID controllers' cache. That cache is what has the battery backup, if I read it correctly.

          The drives also appear to be ATA.
        • I don't believe he is speaking about a general power failure, but a failure of a drive that has, say, 8MB of cache memory.
          From the standpoint of the OS, the data has been written to the HD, but it is actually "lost".


          If you've got any kind of error checking at all that shouldn't even be an issue. If the OS thinks the data was written to a drive then it should be recovered during rebuild, assuming it was only one drive that went. That's exactly the sort of failure that every RAID level other than 0 is designed to
            • How can it not be an issue? If the drive was given a series of write operations that it stored in its own memory and reported to the OS as completed, when in fact the drive is still writing the cache to the disk, and the disk then dies, what happens to that data? The OS (or even the RAID controller) assumes it has been written.
            • What do you think error checking is for? If you're using any RAID type other than 0 and you lose a drive the RAID will use the data on the other drives in the RAID to rebuild the information that was supposed to be on the dead drive. Whether that data was ever actually written to the platter of the now dead drive is irrelevant.

              I strongly suggest you educate yourself on how RAID works. Here [arstechnica.com] is a good place to start.
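
              To illustrate the point with Linux software RAID (not the hardware cards discussed above - just a sketch, with hypothetical device names on an existing /dev/md0 array): whatever was or wasn't flushed to the dead drive, the rebuild regenerates its contents from the surviving drives and parity.

                  mdadm /dev/md0 --fail /dev/sdc1     # mark the "dead" drive as failed
                  mdadm /dev/md0 --remove /dev/sdc1   # pull it from the array
                  cat /proc/mdstat                    # degraded, but data is still served via parity
                  mdadm /dev/md0 --add /dev/sdf1      # add the replacement drive
                  cat /proc/mdstat                    # rebuild reconstructs the missing member from parity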
      • If you can't find a power source for your server within 24 hours of a power failure, your data obviously isn't that important.

        You obviously weren't in the middle of the East Coast after Isabel hit last fall and wiped out power throughout our entire area for over a week. I don't remember the numbers, but in our area, after something like 5 days there was still only a 60% restored rate for the area. For days travel in some areas was impossible, due to trees on the roads, etc.

        I would have been willing to
        • Comment removed based on user account deletion
            • I'm just envisioning you standing there in the road with a tree down in front of you, wondering where to go...

            Actually, the closest wasn't in front of my house (but three fell into my yard from other yards), but there were something like 10 (I'm not exaggerating!) trees down across the road between me and the closest main road (as in not a neighborhood road).

            First, you could go around the debris

            Like I said in my first post, You obviously weren't in the middle of the East Coast after Isabel hit last fall.
            • Comment removed based on user account deletion
              • If you could get the brand name of the generator your friend has, I'd appreciate it. I checked on a number of generators with the salespeople or the maker (yes -- I could call the companies; in spite of the mess, our phones were only out a day or 2 -- I don't know why, but I'm guessing they may have had more slack in their lines than the power company), and was repeatedly told NOT to run computers off them -- so if you could post the brand, it would be a help. My business doesn't really have to worry abou
        • It doesn't matter HOW long the power is down. The post this is all about claims that power failures can cause data loss due to cache loss. If you have a 5-day power outage, obviously your UPS/generators won't last that long, but even if you only have a UPS, that's time enough to shut down the servers, avoiding data loss. The guy was saying you shouldn't use ATA/SATA because if the power fails your data will go poof; I'm saying that's not an issue.

          Besides, 5 days of down time is nothing compared to
      • Actually, despite it being quite a proprietary system overall, I kinda like the approach that my CLARiiON FC array takes to this problem.

        It has special set-aside space on a number of drives that it designates as the "cache vault". Then, it has its own special UPS connected to the box that holds the RAID controllers and the first 10 drives of the system. When the main power fails, the controllers flush the write cache to this "cache vault" location, and then tell the UPS that it's OK to "shut down now". Whe
    • Isn't that what journalling filesystems, especially ones with atomic writes, are for? And as somebody else pointed out, it's not like SCSI drives don't have write caches, and you can disable them if you wish.

    • This is speculation, but it seems to have some validity. In the case of a RAID adapter that has an on-board CPU, the card might be able to recover from a power failure. The capacitors on the RAID adapter should hold enough energy for a few milliseconds of operation. During that time the adapter could write to non-volatile memory on the adapter enough information to know what data is lost and how it can be recovered.

      Again, this is speculation, but RAID adapter cards that do not have on-board CPUs depend
      Trying to remedy this by turning off write-back caching severely impacts the performance of the drives, and some vendors do not certify the recovery of drives that deactivate write-back caching, so this may increase failure rates.

      I don't buy this argument one bit.

      I agree with you that write-back can break journalling FS guarantees.

      However, I don't know of any consumer drive vendor that guarantees that their write-back algorithms are in-order. This means that write-back can trash *any* filesystem, and whet
    • You may want to read this whitepaper...

      And, yeah! Guess what? Buy their software to fix it! Move along ...

      I wouldn't advise using something other than SCSI without understanding the ramifications.

      Uh, would you advise using anything without understanding the ramifications?
  • by soundsop ( 228890 ) on Friday June 18, 2004 @08:18PM (#9469015) Homepage

    I have a client that needs a server.

    On a related note, I was having dinner at a restaurant and my waiter asked me for a recommendation for a good email program. So I guess it turns out that I have a server that needs a client.

  • Whether it's just Maxtor in general or a few poorly constructed hard drives, I've had a few problems with the connectors - the plastic tabs at the back (i.e. the ones that hold the cable in place) had a bad habit of being extremely easy to break :(
  • and they support real RAID configurations, like RAID 1+0 or 5 etc...

    http://www.3ware.com/products/serial_ata.asp
  • Don't Confuse (Score:4, Insightful)

    by Crypt0pimP ( 172050 ) on Friday June 18, 2004 @09:26PM (#9469406) Homepage
    The connection technology with the drive / spindle quality.

    (P)ATA and SATA are connection technologies.
    They have their individual benefits and drawbacks
    (cost, reliability, speed)

    The real factors to consider are the details of the drives themselves - vibration dampening, bearing and motor quality, MTBF.

    It used to be rather simple to guess what quality of drive you were buying. If it was 146GB or less (73GB, 36GB), and rotational speed was 10K or 15K, it was either SCSI or FC, and an "enterprise" class drive, rated in Mean Time Between Failure.

    Good drive, high quality, expect it to last several years, spinning 24 hours a day, sustaining high read and write activity during production and backup hours.

    If the drive was larger (200GB+) and slower (7200 RPM), it was typically an ATA drive, maybe low-end SCSI.

    Then it was, at best, a workstation-class drive, rated in "Contact Start Stops", meaning how many spin-ups and shutdowns the drive should survive (the SMART sketch at the end of this comment shows how to read those counters off an actual drive). Not meant to run 24 hours a day, or to run under heavy load except for short periods.

    The lines are beginning to blur with 300 - 500 GB drives with FC drive attachment. Those drives are meant for archiving and reference data, not production databases and such.

    In my personal experience, the 3Ware products are worth the premium.

    Pick your attachment technology as appropriate.

    Best of Luck,
    Patrick (slineyp at hotmail dot com)
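
    (For illustration: you can compare those duty-cycle ratings against what a drive itself reports via SMART - a rough sketch, assuming smartmontools is installed; the device name is hypothetical.)

        smartctl -i /dev/hda   # identify info: model, firmware, capacity
        smartctl -A /dev/hda   # vendor attributes, e.g. Start_Stop_Count and Power_On_Hours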
  • Stability: 3ware #1, AMI MegaRAID (the 4-port ones) #2, naked drives with Linux software RAID #3... the rest are either crap or I did not use them.

    Performance: naked drives with Linux software RAID #1; MegaRAID/3ware - both slower.

    I don't know why, but Linux with naked drives using software RAID *always* comes out on top in performance. Maybe you guys can tell me.

    • Probably because it sucks down all your CPU speed when you run the test. The faster your CPU, the faster the software RAID.

      The hardware raid controllers have limited clock speeds and less RAM than your computer, so they're slower.
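
      A crude way to see that trade-off for yourself - just a sketch, assuming hdparm and an existing md array; device names are hypothetical, and hdparm -t gives only a rough sequential-read figure, not a real benchmark:

          hdparm -t /dev/sda   # volume exported by a hardware RAID card
          hdparm -t /dev/md0   # Linux software-RAID device
          vmstat 1             # watch CPU use while the tests run - software RAID's speed
                               # comes out of the host CPU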
  • Experience... (Score:4, Interesting)

    by poofmeisterp ( 650750 ) on Friday June 18, 2004 @10:29PM (#9469878) Journal
    The backplanes on server cases are horrid for SATA. They work, but you have to have special hookups for the LEDs (drive fail and activity) and often the controller cards or motherboards don't supply them. All I've managed to get is power LEDs on the front of the Super Micro cases I've worked with.

    SATA is not that much faster in practice than PATA, because the kinds of load that you put a drive under in a production environment are not like the speed/load tests used to generate benchmark numbers.

    You asked for opinions, and mine is that PATA (ATA-133) is more than fast enough, and the cost of SATA and the quirks that have yet to be ironed out are not worth it. It's the latest shiny object, and shiny objects are not always the most useful.

    I base my experience on the Western Digital SATA (mostly 36 gig) drives and the Western Digital 40 and 80 gig JB drives connected to multiple brands of motherboards and add-on controller cards.
  • by cgenman ( 325138 ) on Friday June 18, 2004 @11:57PM (#9470433) Homepage
    I have a client that needs a server with quite a bit of storage, reasonable level of reliability and redundancy and all for as cheap as possible.

    So what you need is this [gmail.com].

  • Price is a non-issue. Compared to all the rest, the investment in SCSI is not that much more when you look at the price of server-level CPU/memory/housing.

    You gain technology that is now so well known and tested that you can just count on it to work.

    SATA, on the other hand, still isn't finalized in its spec. A new one is coming out which adds some new features (or has recently).

    So for me I look at the following things (note: this mostly applies to webservers or servers in support of webservers):

    • Read write acc
  • RaidCore (Score:3, Interesting)

    by beernutz ( 16190 ) * on Saturday June 19, 2004 @04:10AM (#9471281) Homepage Journal
    This product will blow your socks off!
    Here are some of the highlights from their page [raidcore.net]:

    Online capacity expansion and online array level migration

    Split mirroring, array hiding, controller spanning, distributed sparing

    All RAID levels including RAID5/50, RAID1n/10n

    Serial ATA-based

    Choice of 4 or 8 channels and 2 functionality levels

    64-bit, 133 MHz PCI-X controller in a low-profile, 2U module

    And the HIGH-END board can be had for under $350!

    • This looks suspiciously like a pseudo-hardware software RAID card. Check out this quote from their FAQ:

      Q: How did you manage to achieve such high performance without cache and additional I/O processor?

      A: The RAID processing and caching is done on the host motherboard. Performance is so high because of today's high CPU speeds and patented RAIDCore RAID algorithms.

      I would have to doubt what benefits this gives you over software raid, as it appears to use your CPU anyway. I think you are far mo

    • Beware.. they are also just software RAID! (Read their FAQ; of course it's a bit hidden, but they say they use the CPU of the host for the XOR calculation...)

      Also, they write that their algorithm is patented, and as for the Linux drivers.. be careful of them!

      I still would recommend 3ware: fast, stable, and proven Linux drivers (in the kernel since at least 2.2, and fully open source).

  • by loony ( 37622 )
    We're running several servers with 3ware controllers and SATA drives where I work, and while the controllers are great, the SATA connectors suck. They are just too fragile. Everyone on my team who touched the setup - no matter how careful they were - ended up breaking a connector. If you have only one or two cables it's alright - but once you end up having 8 or more and try to route them nicely, you'll be in trouble.

    If you're going for more than just 2 or 3 drives and want to go SATA you should go with one of
  • 3ware are great and $$$$$.
    LSI are great and not too expensive. They offer hardware RAID support (not like Promise, Highpoint, etc.) for a good price and excellent Linux support. The same driver and software that is used in their SCSI line of MegaRAID controllers is used in their series of SATA controllers. This is my recommendation.
    The Promise controller has HORRIBLE Linux support. Having emailed with Promise many times about the SX6000, I can tell you to avoid it. If it is too late, you need to run it as a
