
SATA vs ATA?

Cliff posted more than 10 years ago | from the battle-of-the-drive-technologies dept.

Data Storage 111

An anonymous reader asks: "I have a client that needs a server with quite a bit of storage, a reasonable level of reliability and redundancy, and all for as cheap as possible. In other words, they need a server with a RAID array using a number of large hard drives. Since SCSI is still more expensive than ATA (or SATA), I'm looking at using either an ATA or a SATA RAID controller from Promise Technologies. While I had initially planned on using SATA drives, I have read some material recently that has made me rethink that decision and stick with ATA drives. What kind of experiences (good and bad) have people had with SATA drives as compared to ATA drives, especially in a server-type environment?"



Don't use Promise, for one thing (4, Insightful)

b00m3rang (682108) | more than 10 years ago | (#9468141)

Promise and HighPoint (and any other cheap RAID card) in my experience are no more than an IDE card with RAID software that eats up CPU cycles. Recovery options for a lost drive member are usually limited and unreliable. If you want reasonable reliability, go with one of the drives that uses SCSI hardware adapted to a SATA interface (such as the WD Raptor). I would personally recommend Adaptec for your host controller needs, as they do the RAID in hardware.

Re:Don't use Promise, for one thing (3, Informative)

Cthefuture (665326) | more than 10 years ago | (#9468228)

3ware is another (some say superior) hardware RAID controller.

One thing about SATA is that it's easy to remotely mount the drives. You can easily put them outside the machine (in a rack or whatever) for enhanced cooling. They're kinda like really fast firewire drives.

Re:Don't use Promise, for one thing (1)

xsecrets (560261) | more than 10 years ago | (#9468622)

I have to second the 3ware vote. The drivers for these cards have been in the Linux kernel since 2.2, and they come stock in Windows 2000 onward, so there's no having to load a driver disk to get things installed, and the performance is great to boot.

Re:Don't use Promise, for one thing (1)

cpgeek (236645) | more than 10 years ago | (#9472920)

I personally chose a 3ware SATA controller running an 8-drive (7 + hot spare) RAID 5 array and I haven't regretted it since... I work in a big university bookstore running Red Hat ES3 (please don't flame, I know it's junk, but it's supported junk and my boss insisted). The RAID array was used to increase performance to a moderately acceptable speed in our database I/O... it does its job very well.

Why is RHEL 3.0 "junk"??? (1)

LazloToth (623604) | more than 10 years ago | (#9475019)


I've played with many a distribution over the last ten years, and when it came time to choose a web and file platform at work, RHEL was the best value in my opinion. I pay for one update subscription and feed all my other RHEL installations from that. I'm sincerely curious as to how the product has disappointed you. I find it to be rock solid and well tuned. And, no, I have no affiliation with RedHat other than my single subscription, for which we paid full freight.

Re:Why is RHEL 3.0 "junk"??? (1)

lewp (95638) | more than 10 years ago | (#9481214)

He probably just said that to cover his ass. Happens all the time.

Second your thoughts about RHEL being a decent system, though. Would our systems be running it if I were in charge of the budget? Probably not. Is there anything wrong with RHEL? Nah.

Re:Why is RHEL 3.0 "junk"??? (1)

LazloToth (623604) | more than 10 years ago | (#9484818)


Yeah -- my thoughts, too. I just get tired of hearing people label the work of talented developers as "junk," especially when the people being criticized have made many contributions to OSS, and the ones hurling the insults just have their panties in a wad because they couldn't get their ripped-off MP3s to play, or their porn viewer did not have the right codecs installed by default. I suspect most people who so flippantly disparage the work of others have never written a piece of software for public consumption.

Re:Don't use Promise, for one thing (3, Insightful)

UserChrisCanter4 (464072) | more than 10 years ago | (#9468321)

Don't confuse the fact that Promise produces on-mobo RAID "hardware" with the impression that all of their equipment is like that. Promise makes several truly hardware-based SATA and ATA cards, as well as a few enclosures that take numerous (4-16) IDE drives, do RAID in hardware, and interface to a server over U160 SCSI. They are perfectly capable of making hardware RAID solutions, provided you're willing to buy something other than a $60 "RAID" card.

The only major drawback I saw last time I looked at their hardware was that their Linux drivers at the time tended to be binary and tied to specific distributions (works on Red Hat but not SuSE, etc.), which may or may not matter depending upon the OS you're choosing to run.

I don't work for them, and I don't even use their equipment in any of my stuff (a buddy of mine runs an SX4000 card, though, so I have seen them in action), but I do get a bit peeved when someone dismisses a company's higher-end solutions because of (admittedly) bad experience with their low-end kit.

Re:Don't use Promise, for one thing (4, Informative)

afidel (530433) | more than 10 years ago | (#9468986)

Their IDE RAID card, the SuperTrak SX6000, does REALLY poorly at some tasks. It eats CPU and, judging from mailing lists, has a lot of problems recovering from drive failures. For a good comparison to other ATA RAID cards, see this [storagereview.com] Storage Review writeup on it.

Bad experience with Promise. XP problem with RAID. (1)

Futurepower(R) (558542) | more than 10 years ago | (#9471113)


We've had trouble getting tech support for Promise equipment recently.

This is uncertain, but it seems that there is some bug in Windows XP which causes RAID cards that don't have their own CPUs to malfunction. According to a HighPoint technical support rep, the RAID adapter card does not get enough CPU time, and writing to the drives times out, breaking the RAID array. This fits with our experience.

Re:Bad experience with Promise. XP problem with RA (0)

Anonymous Coward | more than 10 years ago | (#9478898)

I don't think this is a problem with Windows XP. I think it's a problem with the hardware design. How do you expect that card to perform when your server is under heavy load, and other processes are (rightfully) taking CPU time away from your RAID card?

Like another poster said, try to get a card that does RAID in hardware.

Re:Don't use Promise, for one thing (1)

Quinn (4474) | more than 10 years ago | (#9473856)

The latest 2.6 kernels support Promise SATA with the sata_promise module. I'm using the TX2plus with a 160GB SATA drive in a hotswap enclosure. The only drawback to the free driver is that it doesn't yet support the PATA port on the card.

Promise has released the source to their SATA card drivers, but it's written for the 2.4 kernels and at least the build process would need to be updated for them to compile cleanly in 2.6. In any case, the only reason to do so would be to get the PATA port working, and that'll be done freely eventually.

Re:Don't use Promise, for one thing (1)

gl4ss (559668) | more than 10 years ago | (#9468353)

There are good P-ATA cards as well...
the thing is just that they don't come cheap.

Cheapo cards are just an easy way to add more (S/P)ATA ports.

Re:Don't use Promise, for one thing (1)

bergeron76 (176351) | more than 10 years ago | (#9469678)

Avoid Promise - seek 3ware [3ware.com] .

3ware has great support and superior benchmarks to the Promise, et al. equivalents.

Do your own homework/research, but 3ware appears to be the clear performance leader.

Re:Don't use Promise, for one thing (1)

rasz (788512) | more than 10 years ago | (#9472711)

Why is this post Insightful? The poster didn't even RTFA :/

Click on the Promise link, you insensitive clod; ALL FastTrak SX4xxx series controllers are HARDWARE RAID :/. Plus
http://techreport.com/reviews/2002q4/ideraid/index.x?pg=1 [techreport.com]
Promise's SX4xxx is the best in the low-budget area (beating Adaptec and 3ware).

Don't even think about.... (1)

Methlin (604355) | more than 10 years ago | (#9468154)

using Promise ("to give you headaches") controllers. If you're going to use (S)ATA you really should give 3ware [3ware.com] a look.

It's a tradeoff. (1)

mkavanagh2 (776662) | more than 10 years ago | (#9468163)

It really depends what you're after. It's a tradeoff between performance+upgradeability and assurance of stability. ATA is more mature (though SATA is still very good), but if you are willing to take the tiny risk, your client will be glad you chose SATA when he starts putting some load on the server.

Connectors are poor on SATA (2, Interesting)

nneul (8033) | more than 10 years ago | (#9468181)

Nice idea, but poor implementation; the connectors have had a tendency to come loose on several servers we have.

Re:Connectors are poor on SATA (2, Interesting)

Guspaz (556486) | more than 10 years ago | (#9469282)

IIRC, I read something about a certain drive that had some sort of retention clip system. So it seems that the falling-out problem has already been solved by at least some manufacturers.

Re:Connectors are poor on SATA (1)

Cyno01 (573917) | more than 10 years ago | (#9475783)

Western Digital has a thing, but it clips to like the whole back of the drive and blocks the SATA power connector so you have to use molex.

Re:Connectors are poor on SATA (0)

Anonymous Coward | more than 10 years ago | (#9479376)

Not on mine, I have two 120gig WD SATA and an IC7-G mobo. They fit snugly in both.

3ware (1)

cymen (8178) | more than 10 years ago | (#9468194)

Obligatory 3ware post...

www.3ware.com [3ware.com]

does raid in hardware unlike most (all?) promise, yadda yadda, software raid faster than battery-backed hardware, yadda yadda yadda, do you really need hot swap? if not, software raid, yadda yadda

Re:3ware (0)

Anonymous Coward | more than 10 years ago | (#9469825)

Last time I looked you could not add drives to an existing RAID set. If you ever need to expand, you need to be able to deal with new drives at the host level.

If you need RAID, investigate used RAID subsystems, not RAID cards. The difference in features (including administration features) is amazing. Just make sure you can re-license the unit.

no-brainer (3, Interesting)

gyratedotorg (545872) | more than 10 years ago | (#9468206)

If you're looking for reliability, this seems like a no-brainer to me. SATA all the way. I'm not aware of an ATA drive that even comes close to the 5-year warranty of WD's SATA drives.

Re:no-brainer (1)

Kalak (260968) | more than 10 years ago | (#9468345)

Warranty != MTBF. Nor does it guarantee uptime. The newest and cheapest cars in America always start with the longest warranties. That doesn't mean they're going to be more reliable than any other vehicle. They want to add value in your mind, but not affect the actual value of the product. Few car owners keep their cars for 10 years/10k miles, and few servers are still in production after 5 years, and I don't want to have my RAID degraded for the time it takes to get a factory replacement if at all possible.

And yes, I can get a 486 with a 120MB drive going as a server, but the OP didn't ask what someone on slashdot *can* do, but to hear suggestions based on experience. I would never recommend a 486 as a server, but if the local school/non-profit/cheap bastard has that, and can only go with that, then I wouldn't be asking about new hardware when I do it. I'd be digging for 30 pin SIMS and an ISA network card (I know I had one in the basement somewhere, damn it!).

The one SATA drive we have in our shop has already been replaced. That's a sample size of 1; take it for what you paid for it -- nothing (or pennies if you subscribe).

Re:no-brainer (1)

Halfbaked Plan (769830) | more than 10 years ago | (#9469482)

I'd be digging for 30 pin SIMS and an ISA network card (I know I had one in the basement somewhere, damn it!).


I have a few 486 motherboards with PCI slots and 72-pin memory slots I can sell you. Some even sport 5x86 cpu chips.

Re:no-brainer (0)

Anonymous Coward | more than 10 years ago | (#9478113)

Few car owners keep their cars for 10 years/10k miles

You DID miss a digit on the miles didn't you...? Please don't tell me Americans only keep their cars for 16000 km... (It would explain why modern cars aren't built to last...)

Re:no-brainer (1)

bersl2 (689221) | more than 10 years ago | (#9469643)

im not aware of an ata drive that even comes close to the 5 year warranty of wd's sata drives.

Or, if it's your fancy, no PATA drive matches the Raptor 10k RPM SATA drive. Imagine two of those striped... yummy...

Re:no-brainer (1)

Painting (1608) | more than 10 years ago | (#9470982)

it is very yummy, I highly recommend the raptors using adaptec host controllers.

Don't use Promise... (1)

pjl5602 (150416) | more than 10 years ago | (#9468219)

Head on over to 3ware [3ware.com] and select the RAID controller you need... I've got a 7506-4LP in my server at home and it simply kicks ass.

Re:Don't use Promise... (2, Informative)

innosent (618233) | more than 10 years ago | (#9468630)

Absolutely. We have a 3TB server I recently set up at work with a 3ware 9500-12 SATA RAID card. The card is expensive (~$700), but well worth it for the supported RAID levels, management software, drivers, and support that only 3ware currently offers in this market.

Re:Don't use Promise... (1)

srhuston (161786) | more than 10 years ago | (#9472935)

Is their SATA card any better with simultaneous reads/writes than their PATA cards? I've got 4-5 of them at work, and especially over NFS the performance blows. Here's what one of my colleagues wrote on the subject:

Write performance:
local writes are 27 MB/sec (~5MB files, unloaded CPU)
If both RAIDs are written together, performance cut in half
(not what we were hoping...)
local writes to local disk (/scr0) are 40 MB/sec
NFS writes to RAID are 200 times slower (1 minute per file)
NFS writes to single disk are 20MB/sec over gigabit ethernet
(a bit slower than expected, but not bad)

READ:
local reads are fast (100MB/sec) off the RAID, but cut in half when
both are read.
NFS reads off single disk are limited only by ethernet and the disk
NFS reads off the RAID are 8MB/sec, 4MB/sec when you read both.

Is this "normal" for 3Ware hardware? I've got other RAIDs, some all in software, that put this performance to shame, though you now don't have any kind of hot-swap abilities. Not too big a deal when one considers the last few times the 3Ware card lost a disk, it took the entire partition out with it and required a reboot to fix anyway, so much for hot-swap.

Re:Don't use Promise... (2, Informative)

innosent (618233) | more than 10 years ago | (#9475549)

No, the PATA cards suffer from being PATA: performance is cut in half when two drives share the same channel, and the channel is limited to a 100/133MB/sec transfer rate. It shouldn't be half, though, unless the drives are fast enough to max out the interface speed (which they aren't; the fastest 15K RPM SCSI drives are about 109MB/sec internal transfer).

I haven't experienced any issues like that, and can confirm that the hot-swap and hot-spare capabilities work as expected on the 9500-12. I have not performed any benchmarks on the system, but have not experienced any read/write delays, which is probably helped by the cache of the controller and drives. I have 12 250GB drives, a mirrored pair for the OS, and a 9 drive level 5 array, with one hot spare. Pull any drive, and the hot spare works correctly, and the missing drive is rebuilt when plugged back in. Works exactly like the Adaptec 7902/2010ZCR solution in our SCSI servers, except the drives are about half the speed.

The few complaints I've seen about the 3ware cards are very similar to what you've said, though, but seem to be limited to a few specific OS/software combinations, which leads me to believe that: a) Few people realize what impact specific RAID levels have on performance (e.g. RAID 5 requires reads from all disks during a write to calculate the parity information.), and how certain hardware may reduce that impact (or in the case of PATA, significantly increase the impact of that specific problem), and b) There is some sort of issue with a specific type of read being performed by NFS, either a filesystem driver problem, or a hardware driver problem, possibly both.

As a quick summary, the major problem with PATA cards is that they use the PATA interface, command set, and drives. For speed, each drive should have its own channel. The benchmarks you gave suggest other problems as well, and the older (PCI/33) controllers may have bottlenecks at the bus, but the newer cards (PCI-X/133) eliminate that issue for PATA/SATA drives (since you shouldn't use either if speed is your primary concern). If you want CHEAP, LARGE, and RELIABLE storage, the 9500 works well. If you want FAST and RELIABLE, a dual-channel U320 Adaptec RAID card works wonders, and it's nice to hit 640MB/sec instantaneous transfer speeds, with actual continuous reads at almost 600MB/sec, and writes at about 280MB/sec (RAID 1+0, 6 15Krpm drives, mirrored across channels, striped within channels).
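[Ed.: the usable-capacity arithmetic for the 12-drive layout described in this comment can be sanity-checked with a quick sketch. Drive counts and sizes are taken from the post; the helper names are ours.]

```python
# Capacity math for the 9500-12 layout above: 12 x 250 GB drives,
# a mirrored pair (RAID 1) for the OS, a 9-drive RAID 5 set, 1 hot spare.
DRIVE_GB = 250

def raid1_capacity(n_drives: int, size_gb: int) -> int:
    # RAID 1: every drive holds a full copy, so usable space is one drive.
    return size_gb

def raid5_capacity(n_drives: int, size_gb: int) -> int:
    # RAID 5: one drive's worth of space is consumed by distributed parity.
    return (n_drives - 1) * size_gb

os_gb = raid1_capacity(2, DRIVE_GB)    # 250 GB usable for the OS
data_gb = raid5_capacity(9, DRIVE_GB)  # 8 * 250 = 2000 GB usable data space
print(os_gb, data_gb)  # 250 2000
```

The hot spare contributes no capacity at all; it only shortens the window during which the RAID 5 set runs degraded after a failure.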

Re:Don't use Promise... (1)

kormoc (122955) | more than 10 years ago | (#9484053)

Uhh, I'm relatively sure that your NFS speeds are not due to the RAID being slow, as your local speeds are a lot better. I'd look into trying, say, SMB (Samba) and seeing if your speeds improve.

NFS has always been slow in my experience; maybe it needs tuning? I've never really needed much, and Samba has always worked well for me.

My 3ware card works great: 88 megs a sec read, 40-some-odd write. Local of course, and that's 2x120 gig Western Digital Special Editions RAID 0ed.

Re:Don't use Promise... (1)

cdipierr (4045) | more than 10 years ago | (#9487641)

I'll second/third the vote for 3ware. Not only is their stuff fast (I'm using RAID 1+0 on a 7506-4LP card now) but it's also very reliable. On an older 2-drive card with a RAID 1 mirror, I had both drives fail (not at the same time, of course) over the course of about 3 years. Both were easily replaced without any data loss, and eventually the RAID was running on 2 drives that weren't originally part of the array... the 3ware card made the replacement a breeze.

Buy a RAID (1)

Twirlip of the Mists (615030) | more than 10 years ago | (#9468281)

You're going to be hard pressed to build a system that's as reliable and as inexpensive as this [apple.com] . The whole thing, drives, controllers, power supplies, everything, is available for about $3/GB, and it plugs into any host computer.

Re:Buy a RAID (4, Informative)

Guspaz (556486) | more than 10 years ago | (#9469333)

You may be right about building a system as reliable, and it'd certainly be hard to compete with it from a size standpoint, but you are totally wrong about it being inexpensive.

Apple's 3.5TB system costs $10,999 US. If you were to build a system that comprised 9 Hitachi 7200RPM 400GB drives, you would achieve 100GB more storage space for 3,600$ plus the cost of the server it was hosted in. Throw in 750$ for a high-end RAID card and 1000$ for a server to enclose and handle it, and you're still priced at under HALF the price of Apple's solution.

So, in conclusion, Apple's solution is many things, and is certainly VERY sexy and attractive. But inexpensive compared to a self-built solution it is NOT.
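[Ed.: the cost comparison in this comment works out as claimed; here is the arithmetic spelled out, using only the figures the poster gives.]

```python
# Cost-per-GB comparison from the post: Apple's 3.5TB Xserve RAID
# versus a self-built 9 x 400 GB box.
apple_price, apple_gb = 10_999, 3_500

diy_gb = 9 * 400                   # 3600 GB raw from nine 400 GB drives
diy_price = 3_600 + 750 + 1_000    # drives + high-end RAID card + host server

print(round(apple_price / apple_gb, 2))  # ~3.14 $/GB
print(round(diy_price / diy_gb, 2))      # ~1.49 $/GB
print(diy_price < apple_price / 2)       # True: under half Apple's price
```

Note this compares raw capacity only; RAID overhead, redundant power, and labor (points raised in the replies below this comment) are outside these figures.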

Re:Buy a RAID (1)

Elias Serge (657630) | more than 10 years ago | (#9470852)

Hrmmm... the Xserve RAID uses ATA -> 2Gb/s Fibre Channel, has redundant power supplies, hot-swap drives and power, and better RAID management software than a single nice RAID card. I am by no means an Apple fanboy, but you are ignoring some of Apple's value-adds (or Sun's, or IBM's, or EMC's, etc.). While this may not be an ideal solution for the asker, it works well for any biz that values reliability over initial cost. After all, the actual hardware cost is only about 25% of TCO. Personally, I don't think one of the requirements for RAID should be "as cheap as possible" -- I cut corners on hardware I build for myself all the time, but if I was going to the trouble of using RAID, I'd want to make sure that it is very reliable.

Re:Buy a RAID (1, Flamebait)

Twirlip of the Mists (615030) | more than 10 years ago | (#9470920)

If you were to build a system that comprised 9 Hitachi 7200RPM 400GB drives, you would acheive 100GB more storage space for 3,600$ plus the cost of the server it was hosted in.

Plus the power supplies (dual redundant) and cooling systems (dual redundant) and controllers (dual redundant) and the case to house it all!

That's an awful lot of stuff to just hand-wave away. Not to mention the time and labor required to build and support the fucking thing.

But inexpensive compared to a self-built solution it is NOT.

The point here is that a "self-built solution" (what is that, one that builds itself?) will be cheaper, but not by nearly as much as you estimated. The only way to get it down to a price that approaches the figures you made up (let's be honest here) is to slash key features and capabilities until you get to the target price point.

Sorry, but in a large storage system redundant power supplies are not an option. Redundant fans are not an option. Redundant controllers are certainly not an option.

Re:Buy a RAID (1)

Guspaz (556486) | more than 10 years ago | (#9472474)

I never said it wasn't reliable. I just said it wasn't inexpensive.

I had budgeted 1000$ for the rest of the server. A fast processor isn't required since we have all-hardware controllers. As long as the CPU isn't saturated under heavy load, all is good.

You think you can't fit redundant power, cooling, and controllers into that 1000$? Fine, add another 1000$. 2000$ even. That's 3000$ total for the server plus the 3600$ for the drives. That's 6600$ in total, still a good 5500$ cheaper than Apple's solution. Fine, it's not less than half the price, it's only 60% of the price.

The reason Apple's machines are so expensive is that they overcharge horrendously for the drives. They charge 5999$ for a model with 4x250GB drives, and 10999$ for a model with 14x250GB drives. That's 500$ per drive, or twice as expensive as the 400GB Hitachi drives (per GB) I used for my calculations. The drives Apple is using go for about 225$ US around where I live, so their markup is where the cost is coming from.

Some quick math: From the difference between their 4 drive and 14 drive model, we've determined that Apple charges 500$ per drive. This also holds true for their 7 drive model. Now, if Apple were to sell a model with NO DRIVES (So we could fill it ourselves), the XServe RAID would cost 3999$ US. Not so unreasonable any more for what you're getting.

Then, a 3.6TB XServe RAID with our own drives would cost 7599$, a good 3400$ less than Apple. Or if we wanted, we could fill up the XServe with 14 drives, for a 5.6TB XServe RAID at 9599$ US, still 1400$ less than Apple, with over 2TB more storage capacity.

Heck, even if you buy the low-end XServe (the 4x250GB model), and THROW AWAY the drives (Or ebay them?), and buy 9x400GB drives, your total cost is 5999$ + 3600$ = 9599$.

So it's cheaper to buy an xserve, sell the drives, and use your own drives. Apple's pricing is a bit odd.
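[Ed.: the per-drive price this comment derives from Apple's lineup checks out; here is the "quick math" made explicit, using only the prices quoted in the post.]

```python
# Implied per-drive and chassis prices from Apple's 4-drive and 14-drive
# Xserve RAID models, as derived in the comment above.
price_4_drive, price_14_drive = 5_999, 10_999   # 4x250 GB and 14x250 GB models

per_drive = (price_14_drive - price_4_drive) / (14 - 4)
chassis = price_4_drive - 4 * per_drive   # implied price of a driveless unit
hitachi_400gb = 400                       # street price the comment assumes

print(per_drive)                          # 500.0 per Apple drive module
print(chassis)                            # 3999.0 implied empty chassis
print(chassis + 9 * hitachi_400gb)        # 7599.0 for a 3.6 TB fill
print(chassis + 14 * hitachi_400gb)       # 9599.0 for a 5.6 TB fill
```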

Re:Buy a RAID (0, Flamebait)

Twirlip of the Mists (615030) | more than 10 years ago | (#9472575)

The reason Apple's machines are so expensive...

Xserve RAID is, by far, the cheapest RAID system in its class. By "in its class" I mean "of comparable capacity and reliability." And reliability is simply not an optional thing in this context. You have to have it. You can't get it by scrounging parts from your dad's junk drawer.

They charge 5999$ for a model with 4x250GB drives, and 10999$...

The dollar sign goes in front of the figure, not behind it. Basic literacy isn't too much to ask, I hope?

Now, if Apple were to sell a model with NO DRIVES (So we could fill it ourselves)

Heh. That's good. That's funny. Run along and play now, you fucking amateur. Leave the discussion of business-class RAID systems to the grown-ups.

Re:Buy a RAID (1)

Guspaz (556486) | more than 10 years ago | (#9472778)

Do you intentionally try to come across as haughty and insolent?

The dollar sign goes in front of the figure, not behind it. Basic literacy isn't too much to ask, I hope?

I happen to like writing currency how it's read. Is that too much of a problem for you? Does your brain shut down when you see it written that way, and you can no longer understand? Didn't think so.

Heh. That's good. That's funny. Run along and play now, you fucking amateur. Leave the discussion of business-class RAID systems to the grown-ups.

What a nice elitist-bastard attitude you've got there. Pray tell, what do you find so fucking amateurish about populating the drive bays after-market? You're not sacrificing reliability in the slightest, as I'm sure Apple uses off-the-shelf drives. You're simply using superior components, which seems to be what you are insisting is required. Fact is it remains cheaper (and better, through faster, higher capacity drives) to purchase Apple's 4x250 model and populate the device ourselves with 400GB drives. The extra 250GB drives can be used elsewhere, or sold off to cut costs.

Now, if you're done with your "I'm going to pretend I can't argue effectively as an excuse to hurl insults!" riff, I'd like to know what's so taboo about this.

Re:Buy a RAID (-1, Flamebait)

Twirlip of the Mists (615030) | more than 10 years ago | (#9472871)

I happen to like writing currency how it's read.

I don't recall asking what you like. There's a right and a wrong, and you're wrong.

What a nice elitist-bastard attitude you've got there.

Again: there's a right and a wrong. Which side do you think you're on?

Pray tell, what do you find so fucking amateurish about populating the drive bays after-market?

Firmware. If you require more explanation than this, then you're so far out of your depth that I can't even see you from here.

You're not sacrificing reliability in the slightest, as I'm sure Apple uses off-the-shelf drives.

Wrong twice.

I'll say it again: scurry on back to your sandbox, little man. Leave the grown-up talk to the grown-ups.

Re:Buy a RAID (0, Flamebait)

Guspaz (556486) | more than 10 years ago | (#9472996)

I don't recall asking what you like. There's a right and a wrong, and you're wrong.

And I don't recall telling you that I care. You're free to whine about it, of course, but I'm not quite sure what point that serves.

Firmware. If you require more explanation than this, then you're so far out of your depth that I can't even see you from here.

Try not to be quite so patronizing, it doesn't fit you well. I'm going to combine this with your next comment and ask you to provide proof. Your "Because I said so" response isn't sufficient. If you are going to insist that I'm incompetent, by all means, I may very well be. So PROVE it. Link to official Apple content showing that the drives use custom firmware, and that you can't place any other drive inside of an Apple Drive Module. You do that and I'll freely admit that I'm ill informed. I will, of course, be able to take solace in the fact that you have proven yourself to have the maturity of a young child, despite your superior knowledge.

Re:Buy a RAID (0, Troll)

Twirlip of the Mists (615030) | more than 10 years ago | (#9473519)

I'm going to combine this with your next comment and ask you to provide proof.

"Proof?" You're funny. You're like a joke. You're funny.

This isn't a school. I'm not your teacher. If you're ignorant, it's your responsibility to do something about it.

Off you go, little boy. Go play.

Re:Buy a RAID (1)

haluness (219661) | more than 10 years ago | (#9476558)

what a bunch of fucking kids! Why don't the both of you go and piss on each other?

Re:Buy a RAID (0)

Anonymous Coward | more than 10 years ago | (#9479282)

Awww... isn't that cute? You work in IT and think that makes you something important.

I'm sure the guy who cleans up vomit at Disneyland thinks he's important too.

Re:Buy a RAID (1)

Twirlip of the Mists (615030) | more than 10 years ago | (#9479982)

Where've you been? I don't work in anything like IT. This has been well known on Slashdot for nearly two years now.

Re:Buy a RAID (2, Informative)

Anonymous Coward | more than 10 years ago | (#9474280)

Apple uses our stock firmware. They tweak some constants, but usually everyone who knows how to code a specific drive firmware works for the drive manufacturer. Apple is no exception.

Re:Buy a RAID (0)

Anonymous Coward | more than 10 years ago | (#9481029)

Wow. A dickwaving contest over computer knowledge.

If you're going to start a dickwaving contest, make sure you've got something to wave first.

Re:Buy a RAID (0)

Anonymous Coward | more than 10 years ago | (#9474295)

"Do you intentionally try to come across as haughty and insolent?"

I see this is your first experience with Apple users...

Re:Buy a RAID (0)

Anonymous Coward | more than 10 years ago | (#9481101)

Apple's 3.5TB system costs $10,999 US. If you were to build a system that comprised 9 Hitachi 7200RPM 400GB drives,

Erm, you're seriously going to suggest comparing pricepoints between an enterprise-class, integrated, burn-tested, well supported product... and some box of disks you slapped together over beers some Friday night?

Hell, I assemble MY OWN PC's... but at work, I don't want anything to do with that kind of support. I'd rather spend a couple of hundred extra bucks, get Dell, and not worry about licensing and system rebuilding.

BTW... have you tried comparing the Apple servers against Dell servers, or storage boxes from EMC? Bzzt! The Apple boxes come out WAYYY more power per $$$.

It's all in the name (3, Informative)

linuxwrangler (582055) | more than 10 years ago | (#9468298)

I bought a machine with a controller from Promise, and I think I know how they got the name: they kept promising me things.

I was using SuSE 8.2 and they had no drivers, but they "promised" that they would be out by the end of the month. Of course, I could compile them myself, but since that required an installed OS, which was impossible without the drivers, it meant finding another machine and dealing with other problems.

After about 3 months of "promise" after "promise" (this month for sure), they told me the drivers would be out "in a couple months". The longer I waited, the further away the drivers were scheduled.

It wasn't like I had grabbed 8.2 when it was released either. Promise's Linux "support" was way behind and they basically told me that Linux is their poor stepchild that gets leftover resources when Windows stuff is done.

I contacted my vendor and had them swap the Promise card for a 3-ware. I tossed in the disk and loaded SuSE without any need for downloading or compiling drivers. I'm running RAID-5 on 4 120GB drives. I had a drive fail a couple months back but just hot-swapped/rebuilt it with no problem. The machine was up for about a year before I had to shut it down to replace a failed tape drive but I've had no trouble with the 3-ware.

Re:It's all in the name (2, Interesting)

cpeterso (19082) | more than 10 years ago | (#9468848)


Why doesn't Promise abstract their cross-platform code from the Linux and Windows device driver "glue" code? Then they could just port the Linux and Windows specific code once and all their device drivers' platform-independent code should "just work". (but keep your fingers crossed anyways) ;)

I know Linus does not like cross-platform wrapper crap code in his kernel, but there is nothing preventing Promise from doing this outside the Linus tree or wrapping the Linux device driver API around the Windows device driver model.

Re:It's all in the name (1)

kormoc (122955) | more than 10 years ago | (#9484078)

you see, that would be smart...

Look at Apple's Xserve RAID (2, Insightful)

caseih (160668) | more than 10 years ago | (#9468351)

Apple's Xserve RAID solution seems to be one of the cheapest dollar-per-gigabyte solutions that I've ever seen. They use fast ATA drives. Although ATA drives can have problems, Apple uses only the best drives from each lot (hence they are a bit more expensive than if you bought the disks from a jobber). The RAID is a true hardware RAID, allowing the creation of a hot spare, e-mail notification, etc. The configuration software is Java and runs on any platform. The RAID unit itself is Fibre Channel, so it can hook to servers running any OS and looks just like a big SCSI disk. We have our arrays set up such that we're mirroring 2 physically separate arrays together (each RAID 5 + hot spare), so we can lose up to 4 disks without any loss of data. Each array is about 2 or 3 raw terabytes.

I would avoid the other controller cards you mentioned for the reasons the other posters gave. The Xserve RAID offers all the benefits of a good SCSI backplane (RAID, monitoring, etc.) for a fraction of the cost.

Re:Look at Apple's Xserve RAID (1)

Hungus (585181) | more than 10 years ago | (#9468445)

Can you back this statement up
"Apple uses only the best drives from each lot"
Because I would love to know what your source is for this. I have been working with Apple and on Macs for decades now (checks... yes, I can go plural!) and have never heard anything like this before.

Re:Look at Apple's Xserve RAID (1)

Halfbaked Plan (769830) | more than 10 years ago | (#9469515)

It also raises the question: "which customers do they stick with the other drives?"

RAIDCore SATA (1)

StormForge (596170) | more than 10 years ago | (#9468375)

I've had great luck with RAIDCore's SATA controllers -- very fast.

-Bill

These things look pretty (2, Informative)

T-Ranger (10520) | more than 10 years ago | (#9468704)

.. And their marketing paper comes in a Tyvek envelope! (I don't work for them, nor am I even a customer)

StorCase Technologies [storcase.com]

RAID boxen with ATA on the inside, SCSI and/or FC on the outside. Seemingly incredible warranties of as long as 7 years.

Dangers of using ATA or SATA for Raid (5, Interesting)

DocSponge (97833) | more than 10 years ago | (#9468733)

You may want to read this whitepaper [sr5tech.com] and see what they have to say about using ATA or SATA drives in a RAID configuration. It is possible, due to the use of write-back caching, to lose the integrity of the RAID array and lose your data, eliminating any initial cost benefits. To quote the paper:
Though performance enhancement is helpful, the use of write back caching in ATA RAID implementations presents at least two severe reliability drawbacks. The first involves the integrity of the data in the write back cache during a power failure event. When power is suddenly lost in the drive bays, the data located in the cache memories of the drives is also lost. In fact, in addition to data loss, the drive may also have reordered any pending writes in its write back cache. Because this data has already been committed as a write from the standpoint of the application, this may make it impossible for the application to perform consistent crash recovery. When this type of corruption occurs, it not only causes data loss to specific applications at specific places on the drive but can frequently corrupt filesystems and effectively cause the loss of all data on the "damaged" disk.
Trying to remedy this by turning off write-back caching severely impacts the performance of the drives, and some vendors do not certify the recovery of drives that deactivate write-back caching, so this may increase failure rates.

Losing data on an ATA RAID array happened to a friend of mine, and I wouldn't advise using anything other than SCSI without understanding the ramifications.
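The failure mode the paper describes, acknowledged writes sitting in a volatile cache that the drive may also reorder, is easy to model. A toy sketch in Python (purely illustrative; no real drive firmware works this simply):

```python
import random

class ToyDrive:
    """Toy model of a disk with a volatile write-back cache.
    Purely illustrative -- real drive firmware is far more complex."""
    def __init__(self):
        self.platter = {}  # durable storage: block -> value
        self.cache = []    # pending writes, held in volatile RAM

    def write(self, block, value):
        # The write is acknowledged immediately; the application now
        # believes it is durable, but it only lives in the cache.
        self.cache.append((block, value))

    def flush(self):
        # The drive is free to reorder pending writes before committing.
        random.shuffle(self.cache)
        for block, value in self.cache:
            self.platter[block] = value
        self.cache = []

    def power_fail(self):
        # Power loss: everything still in the cache simply vanishes.
        self.cache = []

drive = ToyDrive()
drive.write("journal", "commit txn 1")  # app writes the journal entry...
drive.write("data", "txn 1 payload")    # ...then the data it describes
drive.power_fail()                      # power lost before any flush

# Both acknowledged writes are gone; the platter never saw them.
print(drive.platter)  # -> {}
```

With RAID the problem compounds: different members may have flushed different subsets of a stripe, so the parity no longer matches the data.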

Best regards,

Doc

I made a new years resolution to give up sigs...so far so good!

Re:Dangers of using ATA or SATA for Raid (3, Informative)

FueledByRamen (581784) | more than 10 years ago | (#9469194)

SCSI drives tend to have the same size (or larger) caches as [S]ATA drives. You can disable the write-behind caching on any drive fairly easily using hdparm. ( hdparm -W 0 /dev/... to disable, -W 1 to enable).

Of course, if you are using a hardware RAID controller, you'll have to figure out how to tell it to disable the write-behind cache on the drives under its control. Perhaps it will be smart enough to figure it out if you use the hdparm command on the logical device it presents to the operating system, but I'd certainly want to read the manual and find out.

I know from experience that Windows 2000 automatically disables write-behind caching on drives in software RAID arrays (and dumps some Informational messages in the system log to let you know what's going on).

Re:Dangers of using ATA or SATA for Raid (2, Interesting)

hamanu (23005) | more than 10 years ago | (#9469354)

Actually, I just had a 120GB Maxtor drive that I used to replace a failed 60GB one give me kernel messages to the effect of "flush cache command failed", meaning the disk refused to obey when the kernel told it to flush the write-back cache (probably to make Windows benchmarks look better). Why should I trust this drive when I tell it to disable the write-back cache entirely?

Furthermore, if I am using hardware RAID, how do I use hdparm? And finally, ATA drives have write-back ON by default; SCSI drives have it OFF by default.

Re:Dangers of using ATA or SATA for Raid (1)

Bert64 (520050) | more than 10 years ago | (#9483239)

hdparm -W doesn't work on SCSI drives anyway..
In fact, most hdparm functions don't work on SCSI, even though features like turning off the disk motor were supported on SCSI first.

Re:Dangers of using ATA or SATA for Raid (3, Insightful)

Guspaz (556486) | more than 10 years ago | (#9469400)

This would be why professional ATA RAID solutions have battery backup. Somebody previously linked to Apple's XServe solution. It has enough battery backup power built in to keep the caches going for 24 hours. If you can't find a power source for your server within 24 hours of a power failure, your data obviously isn't that important.

First off, I'd assume if your data is so important you're going to have a UPS and generators. If you don't have a generator and the power fails, great, you've got 24 hours to purchase one. A 1500W generator costs about $450 US, and should be more than powerful enough to run your server AND network connectivity. You'll not only keep your server happy during a power failure, you'll be able to keep using the server.

Anyhow, this post started out about the battery backup. What you stated as a major problem isn't one, since serious ATA RAID solutions have battery backup.

Re:Dangers of using ATA or SATA for Raid (1)

smcavoy (114157) | more than 10 years ago | (#9474609)

I don't believe he is speaking about a general power failure, but a failure of a drive that has, say, 8MB of cache memory.
From the standpoint of the OS, the data has been written to the HD but it is actually "lost".
Of course if you were building a real server (i.e. not for hosting large amounts of mp3s, porn, movies, etc.) you'd probably disable the cache on the drives and let the RAID controller do its job (assuming you bought a RAID controller with cache).
But a better solution, for a storage only server, would be to use regular ATA drives (or SATA if you really feel the need for some reason) attached to regular controllers and use software RAID.

Re:Dangers of using ATA or SATA for Raid (1)

Guspaz (556486) | more than 10 years ago | (#9475658)

That's exactly what Apple's XServe RAID does; it disables the onboard cache on the drives and uses the RAID controller's cache. That cache is what is battery backed, if I read it correctly.

The drives also appear to be ATA.

Re:Dangers of using ATA or SATA for Raid (1)

TheWanderingHermit (513872) | more than 10 years ago | (#9483060)

If you can't find a power source for your server within 24 hours of a power failure, your data obviously isn't that important.

You obviously weren't in the middle of the East Coast after Isabel hit last fall and wiped out power throughout our entire area for over a week. I don't remember the numbers, but in our area, after something like 5 days there was still only a 60% restored rate for the area. For days travel in some areas was impossible, due to trees on the roads, etc.

I would have been willing to bike it to get to a power source, but that was virtually impossible, as well, due to the debris in the road.

And this is not in the boondocks, I live 10 minutes from the state capital.

Re:Dangers of using ATA or SATA for Raid (1)

kormoc (122955) | more than 10 years ago | (#9484137)

I would have been willing to bike it to get to a power source, but that was virtually impossible, as well, due to the debris in the road.

I'm just envisioning you standing there in the road with a tree down in front of you wondering where to go...

First, you could go around the debris, and second, if your data is really important, you pay to have large banks of batteries and a backup generator...

Re:Dangers of using ATA or SATA for Raid (1)

TheWanderingHermit (513872) | more than 10 years ago | (#9484450)

I'm just envisioning you standing there in the road with a tree down in front of you wondering where to go...

Actually, the closest wasn't in front of my house (but three fell into my yard from other yards), but there were something like 10 (I'm not exaggerating!) trees down across the road between me and the closest main road (as in not a neighborhood road).

First, you could go around the debris

Like I said in my first post, You obviously weren't in the middle of the East Coast after Isabel hit last fall. It was unlike anything I ever imagined I'd see in America. I felt like I was in a Mad Max movie. Trees weren't just bent, they were down, and by down, I mean down across the road. You couldn't drive around, you couldn't get under them on a bike, you had to climb over or climb under. You could expect to have to go through several trees a block. I live in the suburbs. We have a lot of tree-lined streets downtown, and it was worse there (the few trees near the streets didn't have the root systems of trees in large groups reinforcing them). Ice became a "hot" commodity. When transportation was possible, people waited in line for hours to get any amount of ice to keep what food they had from spoiling.

A simple statement like "you could go around the debris" is what I would have said, thinking it couldn't have been that bad. It was. Almost any store east of the Mississippi had no generators because they were all shipped to the disaster area. If you didn't have a generator ahead of time, then there was almost no chance of getting one later. (And, if you didn't know, a generator is not enough -- they do not produce power acceptable for computers without more equipment.)

if your data is really important, you pay to have large banks of batterys and a backup powered generator

I was fortunate, since everyone in my business was ready. All we did was shut down for the duration, since our data sources and Internet throughout the city were down, but there was someone in my neighborhood who was in the situation I mentioned. He had just started his business (from his house), and was still making less than $10,000 per year. He was lucky to have a simple UPS to keep his boxen from going down during power flickers.

While his equipment was secure, and fsck fixed his problems, Isabel set him back months at a time when he couldn't afford it. He had to work 18 hour days from late September until May to catch up.

My main point is that it isn't as simple as you make it sound. Even with generators, how much fuel do you feel safe keeping on the premises? A disaster like Isabel can make it almost impossible to keep up and running for all but the largest of companies.

Re:Dangers of using ATA or SATA for Raid (1)

kormoc (122955) | more than 10 years ago | (#9485378)

You obviously weren't in the middle of the East Coast after Isabel hit last fall

Actually, I am quite close, but you're right, it didn't hit as hard up here; I'm north of DC.

(And, if you didn't know, a generator is not enough -- they do not produce power acceptable for computers with out more equipment.)

Hrm. That's news to me. My friend has one and he's used it when the power goes out. He runs a business out of his home as well, and it runs everything just fine. He keeps about two days' worth of fuel in a tank in the back, and within two days I'm sure he could get more if needed to keep those servers running. /me shrugs.

I'm just saying that it seems like it could have been done fine. Hard work, but isn't that the best kind?

Re:Dangers of using ATA or SATA for Raid (1)

TheWanderingHermit (513872) | more than 10 years ago | (#9485673)

If you could get the brand name of the generator your friend has, I'd appreciate it. I checked on a number of generators with the sales people or the maker (yes -- I could call the companies; in spite of the mess, our phones were only out a day or two -- I don't know why, but I'm guessing they may have had more slack in their lines than the power company), and was repeatedly told NOT to run computers off them -- so if you could post the brand, it would be a help. My business doesn't really have to worry about it (with no power, many of our data services go offline anyway), but it may help my neighbor, and it'd be nice if I could work on my system at home in another blackout.

I'm not trying to be obnoxious about how bad it was. If you didn't live through it, you wouldn't believe it. Since Isabel, I've seen disaster reports on the news, and there's no way a few shots on the news can convey what it's like when block after block has not just one but several trees down over the road and on the power lines. My one link to the rest of the world was my battery-powered radio. Listening to the reports was like living in the movie "The Day After" or (as I said earlier) the first Mad Max movie. The news was focused exclusively on where you could get medical supplies, fresh food, or ice. If a restaurant was open, it was on the news. If you didn't get in touch with someone to repair your house or remove the trees on it quickly, you could wait for weeks until they could fit you in.

For most of us, no matter how important our data was, it was one of the last things on our minds -- after making sure we had medical attention, food, and shelter. It was an amazing and humbling experience -- one I hope to never repeat.

Re:Dangers of using ATA or SATA for Raid (1)

kormoc (122955) | more than 10 years ago | (#9487540)

I'm quite sure it's a Honda, but I'll try to remember to check when my friend gets back from vacation. Here's a page about them:
http://www.hondapowerequipment.com/genhan.htm
IIRC he has two EU2000s in parallel, but for a while he only used one.

Well, as you said, I won't have any idea unless I live through it; I'm hoping I won't.

Well, take a look at the Honda generators and see if one of them will meet your needs. I can vouch that if that is the right model, it's not loud at all.

Re:Dangers of using ATA or SATA for Raid (1)

Guspaz (556486) | more than 9 years ago | (#9490733)

It doesn't matter HOW long the power is down. The post this is all about is claiming that power failures can cause data loss due to cache loss. But if you have a 5 day power outage, obviously your UPS/generators won't last that long. But even if you only have a UPS, that's time enough to shut down the servers, avoiding data loss. The guy was saying you shouldn't use ATA/SATA because if the power fails your data will go poof, I'm saying that's not an issue.

Besides, 5 days of down time is nothing compared to the 2 weeks plus that Quebec and parts of Ontario were without power due to the Ice Storm in, I think it was, 1998. Some places were without power for over a month. In the middle of the Winter. In Canada.

Re:Dangers of using ATA or SATA for Raid (1)

Octorian (14086) | more than 10 years ago | (#9487757)

Actually, despite being quite a proprietary system overall, I kinda like the approach that my CLARiiON FC array takes to this problem.

It has special set-aside space on a number of drives that it designates as the "cache vault". Then, it has its own special UPS connected to the box that holds the RAID controllers and the first 10 drives of the system. When the main power fails, the controllers flush the write cache to this "cache vault" location, and then tell the UPS that it's OK to "shut down now". When main power returns, it reads this data back in and continues about its business. The thing is VERY picky about everything being just right to enable write cache, however, with good reason.

I did an extensive writeup on my experiences with these boxes here [hecomputing.org] , and even included info on how to "fake" the existence of one of those special UPS units if you didn't happen to have one. (with the intention of being scripted into something like "nut", of course)

Reliability when cache writes lost- journalling (0)

jensend (71114) | more than 10 years ago | (#9470581)

Isn't that what journalling filesystems, especially ones with atomic writes, are for? And as somebody else pointed out, it's not like SCSI drives don't have write caches, and you can disable them if you wish.
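Right -- journalling works precisely because it pins down write ordering: the intent record must be durable before the in-place update starts. A minimal write-ahead sketch in Python (illustrative application-level code, not how any real filesystem is implemented):

```python
import json
import os
import tempfile

def journaled_update(journal_path, data_path, new_contents):
    """Write-ahead pattern: journal the intent, sync, apply, sync, clear."""
    # 1. Record the intended change and force it to stable storage.
    with open(journal_path, "w") as j:
        json.dump({"path": data_path, "contents": new_contents}, j)
        j.flush()
        os.fsync(j.fileno())  # meaningless if the drive lies about flushes
    # 2. Apply the change to the real file and force that out too.
    with open(data_path, "w") as f:
        f.write(new_contents)
        f.flush()
        os.fsync(f.fileno())
    # 3. The change is durable; the journal entry is no longer needed.
    os.remove(journal_path)

def recover(journal_path):
    """After a crash, replay any journal entry that survived."""
    if os.path.exists(journal_path):
        with open(journal_path) as j:
            entry = json.load(j)
        with open(entry["path"], "w") as f:
            f.write(entry["contents"])
        os.remove(journal_path)

tmp = tempfile.mkdtemp()
journal = os.path.join(tmp, "journal")
data = os.path.join(tmp, "data")
journaled_update(journal, data, "hello")
print(open(data).read())  # -> hello
```

If the machine dies between steps 1 and 2, recover() replays the journal at the next start. But if the drive's write-back cache reorders or drops the fsync'd writes, the ordering assumption collapses and the journalling guarantee with it -- which is the parent thread's point.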

RAID cards with CPUs could, theoretically, recover (1)

Futurepower(R) (558542) | more than 10 years ago | (#9471171)


This is speculation, but it seems to have some validity. In the case of a RAID adapter that has an on-board CPU, the card might be able to recover from a power failure. The capacitors on the RAID adapter should hold enough energy for a few milliseconds of operation. During that time the adapter could write to non-volatile memory on the adapter enough information to know what data is lost and how it can be recovered.

Again, this is speculation, but RAID adapter cards that do not have on-board CPUs depend on the main CPU to assure data integrity. If they don't get attention from the main CPU when they need it, the data is corrupt.

It seems to me that power failures are somewhat rare. The real failure rate of RAID adapter cards with no on-board CPUs could be much higher if there are instances where the OS does not service the RAID adapter card in a timely fashion.

(non) Dangers of using ATA or SATA for Raid (2, Interesting)

0x0d0a (568518) | more than 10 years ago | (#9471834)

Trying to remedy this by turning off write-back caching severely impacts the performance of the drives and some vendors do not certify the recovery of drives that deactivate write-back caching so this may increase failure rates.

I don't buy this argument one bit.

I agree with you that write-back can break journalling FS guarantees.

However, I don't know of any consumer drive vendor that guarantees that their write-back algorithms are in-order. This means that write-back can trash *any* filesystem, RAID or not.

Write-back should *never* be enabled on drives using modern filesystems.

As for an impact on performance, I call foul again. The write-back cache is beneficial only in the presence of an OS that does poor disk caching. Take a nice Linux box -- it'll use all available free memory as a big fat writeback cache. There is only a single advantage to using a drive's native writeback controller -- the drive knows the true geometry of the disk (not whatever fantasy geometry it hands off to the host), and furthermore knows the performance characteristics (settle time, seek times, etc.) of the drive. That's useful, but it's not comparable to having ten times or more the amount of memory for buffering.

Hard drive vendors would be *much* better off from a performance standpoint exporting a profile of their drive's performance characteristics to the host -- "settle time on the drive can be determined by this function, seek time can be determined by this function, this is the real geometry", etc. Then the much more powerful (in memory, CPU, and code size) host could do whatever scheduling it wanted.

Re:Dangers of using ATA or SATA for Raid (2, Insightful)

bruthasj (175228) | more than 10 years ago | (#9471952)

You may want to read this whitepaper...

And, yeah! Guess what? Buy their software to fix it! Move along ...

I wouldn't advise using something other than SCSI without understanding the ramifications.

Uh, would you advise to use anything without understanding the ramifications?

Client/Server (5, Funny)

soundsop (228890) | more than 10 years ago | (#9469015)

I have a client that needs a server.

On a related note, I was having dinner at a restaurant and my waiter asked me for a recommendation for a good email program. So I guess it turns out that I have a server that needs a client.

Non-Performance Related Problems (2, Interesting)

Devalia (581422) | more than 10 years ago | (#9469115)

Whether it's just Maxtor in general or a few poorly constructed hard drives, I've had a few problems with the connectors -- the plastic tabs at the back (which hold the cable in place) had a bad habit of being extremely easy to break :(

Re:Non-Performance Related Problems (1)

crackshoe (751995) | more than 10 years ago | (#9469845)

I had the same problem with my 120 gig WD. It just seems like a remarkably flimsy way to attach a cable.

3ware escalades are quite good. (1)

Sxooter (29722) | more than 10 years ago | (#9469229)

and they support real RAID configurations, like RAID 1+0 or 5 etc...

http://www.3ware.com/products/serial_ata.asp

Don't Confuse (4, Insightful)

Crypt0pimP (172050) | more than 10 years ago | (#9469406)

The connection technology with the drive / spindle quality.

(P)ATA and SATA are connection technologies.
They have their individual benefits and drawbacks
(cost, reliability, speed)

The real factors to consider are the details of the drives themselves - vibration dampening, bearing and motor quality, MTBF.

It used to be rather simple to guess what quality of drive you were buying. If it was 146GB or less (73GB, 36GB), and rotational speed was 10K or 15K, it was either SCSI or FC, and an "enterprise" class drive, rated in Mean Time Between Failure.

Good drive, high quality, expect it to last several years, spinning 24 hours a day, sustaining high read and write activity during production and backup hours.

If the drive was larger (200GB+) and slower (7200 RPM), typically an ATA drive, maybe low end SCSI.

Then it was, at best, a workstation-class drive, rated in "Contact Start Stops", meaning how many spin-ups and shutdowns the drive should survive. Not meant to run 24 hours a day, or under heavy load except for short periods.

The lines are beginning to blur with 300 - 500 GB drives with FC drive attachment. Those drives are meant for archiving and reference data. Not production databases and such.

In my personal experience, the 3Ware products are worth the premium.

Pick your attachment technology as appropriate.

Best of Luck,
Patrick (slineyp at hotmail dot com)

My experience with RAID cards (2, Interesting)

SlashingComments (702709) | more than 10 years ago | (#9469868)

Stability:
1. 3ware
2. AMI MegaRAID (the 4-port ones)
3. Naked drives with Linux software RAID
... the rest are either crap or I did not use them.

Performance:
1. Naked drives with Linux software RAID
2. MegaRAID/3ware - both slower

I don't know why, but Linux with naked drives using software RAID *always* comes out on top in performance. Maybe you guys can tell me.

Re:My experience with RAID cards (2, Insightful)

FlameSnyper (31312) | more than 10 years ago | (#9470229)

Probably because it sucks down all your CPU speed when you run the test. The faster your CPU, the faster the software RAID.

The hardware raid controllers have limited clock speeds and less RAM than your computer, so they're slower.

Experience... (3, Interesting)

poofmeisterp (650750) | more than 10 years ago | (#9469878)

The backplanes on server cases are horrid for SATA. They work, but you have to have special hookups for the LEDs (drive fail and activity) and often the controller cards or motherboards don't supply them. All I've managed to get is power LEDs on the front of the Super Micro cases I've worked with.

SATA is not that much faster in practice than PATA, because the kinds of load that you put a drive under in a production environment are not like the speed/load tests used to generate benchmark numbers.

You asked for opinions, and mine is that PATA (ATA-133) is more than fast enough, and the cost of SATA and the quirks that have yet to be ironed out are not worth it. It's the latest shiny object, and shiny objects are not always the most useful.

I base my experience on the Western Digital SATA (mostly 36 gig) drives and the Western Digital 40 and 80 gig JB drives connected to multiple brands of motherboards and add-on controller cards.

about your controller (0, Redundant)

bendsley (217788) | more than 10 years ago | (#9469883)

Instead of using a card from Promise, think about using one from 3WARE. I speak from experience.

All you need (2, Funny)

cgenman (325138) | more than 10 years ago | (#9470433)

I have a client that needs a server with quite a bit of storage, reasonable level of reliability and redundancy and all for as cheap as possible.

So what you need is this [gmail.com] .

Experience with 3ware (0)

Anonymous Coward | more than 10 years ago | (#9470823)

We have a couple of 3ware cards. With the first ones we had (7 series) we were pretty disappointed since they kept failing. Maybe it was partly the motherboard, but the cards were also not perfect. Actually, one of the early cards had a capacitor soldered onto the board by hand! We were pretty pissed at it. We realized it must have been a beta board. However, when we had finally had it with the card, 3ware replaced it without a single question (actually one question: the serial number) despite it being well beyond the warranty. At the time we agreed that this was very good customer service, and we still buy their cards and are happy with them.

Vilmos

Mmmm new tech still developing or old reliable (1)

SmallFurryCreature (593017) | more than 10 years ago | (#9470897)

Price is a non-issue. Compared to all the rest, the investment in SCSI is not that much more when you look at the price of server-level CPU/memory/housing.

You gain technology that is now so well known and tested that you can just count on it to work.

SATA, on the other hand, still isn't finalized in its spec. A new revision which adds some new features is coming out (or has recently).

So for me I look at the following things. (note this mostly applies to webserver or servers in support of webservers)

  • Read/write access level. A website with, say, 1 gig of content and a small database doesn't really need a disk except when booting. Put 2 gigs of memory in there and the drive will barely be used except for logging (trim the logging down to useful info only). The content will be in memory.
  • Price. If there is going to be a lot of access to the disk, then the question becomes: is there the money to go to SCSI? If it is possible, then SCSI it is. Else I like the Raptor disks. Good solid drives that are almost at the level of SCSI. Ordinary IDE drives are a no-no. They just ain't designed for 24/7 operation.
  • Space. This is an important one, just not in my field. IDE drives are a lot cheaper per MB. If you are talking servers, you might not even be able to fit enough SCSI drives to make up for 1 single IDE drive. Well, at least not without selling your firstborn.

So if you can afford it, use SCSI; if you need massive amounts of space and can't afford SCSI, go for IDE; if your HD is barely going to be used, you can settle for IDE, just don't be surprised when the disk dies. (Then again, that hardly matters since you do do backups, right? RIGHT???)

RaidCore (2, Interesting)

beernutz (16190) | more than 10 years ago | (#9471281)

This product will blow your socks off!
Here are some of the highlights from their page [raidcore.net] :

Online capacity expansion and online array level migration

Split mirroring, array hiding, controller spanning, distributed sparing

All RAID levels including RAID5/50, RAID1n/10n

Serial ATA-based

Choice of 4 or 8 channels and 2 functionality levels

64-bit, 133 MHz PCI-X controller in a low-profile, 2U module

And the HIGH-END board can be had for under $350!

Re:RaidCore (1)

Lexicon (21437) | more than 10 years ago | (#9472003)

This looks suspiciously like a pseudo-hardware software raid card. Check out this quote from their faq:

Q: How did you manage to achieve such high performance without cache and additional I/O processor?

A: The RAID processing and caching is done on the host motherboard. Performance is so high because of today's high CPU speeds and patented RAIDCore RAID algorithms.


I would have to doubt what benefits this gives you over software raid, as it appears to use your CPU anyway. I think you are far more likely to have maximum reliability and successfully recover your data from a more widely used and open software raid implementation than from this closed system.
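The "XOR calculation on the host CPU" the FAQ admits to is the core of RAID-5: parity is just the byte-wise XOR of the data blocks in a stripe, so any one lost block can be rebuilt from the survivors. A minimal sketch (illustrative only, not RAIDCore's patented algorithm):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across three data "drives" (real RAID-5 also rotates the
# parity block across drives; a single stripe is enough to show the idea).
stripe = [b"disk0---", b"disk1---", b"disk2---"]
parity = xor_blocks(stripe)

# Drive 1 dies; rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt)  # -> b'disk1---'
```

Whether this XOR runs on a dedicated I/O processor (3ware) or on the host CPU (RAIDCore, Linux md) doesn't change the math, only who pays the CPU cost -- which is the parent's point about it being software RAID in disguise.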

Re:RaidCore (0)

Anonymous Coward | more than 10 years ago | (#9472050)

And if you check their page you find out their cards are soft raid with some high end features.

Re:RaidCore (1)

beernutz (16190) | more than 10 years ago | (#9476458)

Actually, they are NOT softRAID. The reason I say this is that they do not REQUIRE drivers to work. They operate with just the card itself and drives.

The speed they achieve is astounding, and the setup can ALL be done with NO OS loaded.

This does not SEEM like softRAID to me, but I am NOT an expert on these things. Simply a VERY satisfied customer.

Re:RaidCore (1)

mbyte (65875) | more than 10 years ago | (#9477963)

Beware... they are also just software RAID! (Read their FAQ; it's a bit hidden, of course, but they say they use the host CPU for the XOR calculation...)

Also, they write that their algorithm is patented, and as for Linux drivers... be careful of them!

I still would recommend 3ware: fast, stable, and proven Linux drivers (since at least kernel 2.2, and fully open source).

3ware + ATA hotswap trays (2, Informative)

loony (37622) | more than 10 years ago | (#9475206)

We're running several servers with 3ware controllers and SATA drives where I work, and while the controllers are great, the SATA connectors suck. They are just too fragile. Everyone on my team who touched the setup -- no matter how careful they were -- ended up breaking a connector. If you have only one or two cables, it's alright, but once you end up having 8 or more and try to route them nicely, you'll be in trouble.

If you're going for more than just 2 or 3 drives and want to go SATA you should go with one of the newer multilane connectors. One connector carries 4 SATA channels and for an array with 12 drives you only have to worry about 3 cables. That makes the cable layout much neater and the connectors are fairly solid.

3ware (1)

time4tea (193125) | more than 10 years ago | (#9482811)

I have two 3ware setups at home: the 8506-4LP, and the new 9000 range (the 85xx range is no longer available, but you might find some old stock).

Each one is configured identically: 4x160GB Seagate Barracuda SATA drives, three in RAID 5 and one as a "hot spare", which is automatically brought into play should a drive fail.

I had multiple failures using Maxtor drives; so far, the Seagates have been very reliable.

The 3ware stuff can be accessed from the boot-up screen, or they have a little shell program ("tw_cli") where you can see the status of the drives.

To be honest I'm 100% satisfied. The CPU load is low and the product has been rock solid. The drivers came as part of the kernel (for the 8506, which I have on Linux). For the Windows box, again it installed with no problem.

The sata cabling makes life much easier.

I also use hot-swap caddies, using the 2-to-3 converters. I have two different ones, one from 3ware and one from Chenbro(?), which I prefer, as it sits flush with the case. They both work in the same way. The hot spare I mount internally, as in theory it should never be used, and when it is, only until I hot-swap the failed drive out.

LSI, Promise, 3Ware (1)

Avrice (237283) | more than 10 years ago | (#9484929)

3ware are great and $$$$$.
LSI are great and not too expensive. They offer true hardware RAID (unlike Promise, Highpoint, etc.) for a good price and excellent Linux support. The same driver and software used in their MegaRAID SCSI line is used in their SATA controllers. This is my recommendation.
The Promise controller has HORRIBLE Linux support. Having emailed with Promise many times about the SX6000, I can tell you to avoid it. If it is too late, you need to run it as an I2O device and use 2.4.19-ac4 (that is, 2.4.19 with the Alan Cox 4 patch).

Go with the LSI for performance, reliability, and price.