Slashdot: News for Nerds



Hardware For Bulk IDE Hard Drive Burn-In?

Cliff posted more than 11 years ago | from the making-sure-it-works-correctly dept.

Hardware 51

r0gue_ asks: "I work for a mid-size OEM hardware manufacturer. We ship approximately 300 to 500 IDE HDs every month across all our units. Currently we experience about a 4% failure rate (Maxtor and WDs), though in recent months it has been a couple percent higher. The problem is our systems are dedicated boxes with a non end-user friendly form factor. Virtually every physical HD failure results in an RMA. What we are looking for is a hardware based IDE HD burn-in platform. Something that we could drop a dozen or so drives in at once, stress test them for a day or two, then put them into inventory for builds. I know the HD manufacturers and larger OEMs use them but I have not been able to track down anywhere we could purchase one. Right now moving to SCSI or a form factor that supports externally removable drives is not an option. I was hoping that the Slashdot community could point me in the right direction."



Hmmm... (0, Flamebait)

Anonymous Coward | more than 11 years ago | (#5818596)

"We sell expertise. The only catch: we are not experts ourselves." Worry not! There's always "ask Slashdot".


What's with the lame Ask Slashdot questions lately? I'm fine with people asking stuff, all sorts of stuff, but lately there has been this trend of "I'm too lazy to do my own job, could you please do it for me? And make that before 5, I'd like to go home." For goodness' sake! If you are an OEM, call the fscking provider! They've got to know this stuff. "Look, Joe, we are having this problem with the drives we are shipping, can you please tell me where to find stress testing hardware? And if it's not much of a problem, we'd like to avoid killing half of the drives' usable life while we are at it".

Here's a solution: (2, Interesting)

Swift Guru (168704) | more than 11 years ago | (#5818597)

For the love of God, don't use Western Digital or Maxtor drives. It's like you're asking for that 4%.

Re:Here's a solution: (1)

wpc4 (169892) | more than 11 years ago | (#5819052)

Who's recommended today? I used to like IBM, but they started sucking, got sued, and sold their HD division. I've had 2 of their drives die this month. I have a Western Digital 540MB HD from '94 that still runs fine. I've had no problems with Maxtor either.

Re:Here's a solution: (2, Informative)

slaker (53818) | more than 11 years ago | (#5821966)

I'd suggest Samsung. Yes, I'm being serious. Even the best of their drives is slow, but "slow" just means that the 7200rpm 80GB Samsung brings up the tail of the pack of _current_ ATA drives, performing better than current 5400rpm entries from WD and Seagate and just a hair slower than current Seagate and Maxtor drives. Before someone jumps on me about performance, do TRY to keep in mind that any current ATA drive is going to be substantially faster than any two-year-old ATA drive, mainly due to the benefits of increased platter density.

My main reason for suggesting Samsung, aside from the joys of a real 3-year warranty and the fact that Samsung drives really are value-priced, is that my return rate, and the return rates of several other resellers I know, have been exceedingly low.

Re:Here's a solution: (0)

Anonymous Coward | more than 11 years ago | (#5819287)

You dumbass. Western Digital drives are the best.
Top notch outfit. No joke.

Re:Here's a solution: (2, Funny)

MarkusQ (450076) | more than 11 years ago | (#5819673)

For the love of God, don't use Western Digital or Maxtor drives. It's like you're asking for that 4%.

Yep. Use only Maxtor. That way you should be able to get 8%.

-- MarkusQ

Re:Here's a solution: (1)

zarqman (64555) | more than 11 years ago | (#5819771)

that's not a solution; it's a trite comment (not worthy of its present 'interesting' moderation, either). a solution would include another brand or two to actually use.

i'm still using maxtors myself at this point. why? i'm not sure what else to use that's better and the ones that have failed at least made noises and gave warning--something past western digitals did not.

so again, what do you recommend that would be better?

Re:Here's a solution: (3, Interesting)

tomoe27 (315555) | more than 11 years ago | (#5820146)

Western Digital: I've owned several western digital drives over the past decade, and none of them have ever failed me. At my workplace, I've found old WD drives in Pentium I PC's that have been in service for 6+ years without a single problem.

Maxtor: I've been plagued with problems from maxtor drives over the years. From one original Maxtor i've bought (and its RMA replacements), i had 2 whose spindle motors became abnormally loud, one that failed catastrophically (IDE auto-detect had problems even detecting what the drive was), and then the last one i had started failing (with a SMART warning, at least) about 5 days after my warranty period ended.

Seagate: I've never used any of the newer seagate offerings, but my older seagate drives lasted for years before i replaced them with higher capacity drives.

Quantum: Most of the quantum drives (standard 3.5" form factor) i have encountered at my workplace have performed reliably over the years. Recently, however, we've started seeing a bunch of them failing, but considering most of them are 4+ years old, it's not a bad track record. I had a quantum bigfoot die last fall, but even that was close to 4 years old too. (Just what exactly prompted quantum to make that strange hybrid form factor bigfoot anyways?)

IBM: The older drives were great, and I used to love their service; i would contact their support and they would do an advance RMA with no problem. However, on my most recent experience with them in Fall '02, getting an RMA on a drive less than a year old, they told me they couldn't advance-RMA me a drive; i would have to use standard RMA, which, including shipping times, would take almost a month. That was quite unacceptable. Not to mention their newer drives' problem with failures.

Re:Here's a solution: (3, Informative)

mmontour (2208) | more than 11 years ago | (#5821084)

You can't go by brand alone - at some point every manufacturer has had a line of bad drives.

StorageReview [] has a Drive Reliability Survey that lists statistics for many drive families. For example, WD 205Bx drives are near the top of the rankings (99th percentile) while the 600Ax is near the bottom (10th percentile).

Re:Here's a solution: (3, Interesting)

afidel (530433) | more than 11 years ago | (#5821748)

That 99th percentile is based on barely enough drives for it to be rated (just over 60), so it doesn't mean much; most of the drives have hundreds of units in the database. Besides, I think the database is in many ways flawed, as most people who list their drives will do so because they have had a failed drive. The best way, from my perspective, is to look at what companies with hundreds or thousands of drives are doing. Rackspace switched out all of their IBM IDEs for Maxtors, Google uses Maxtors, and the recent IDE backup unit featured here used Maxtors. But maybe I'm just biased, because as an OEM I had tons of drives from almost every manufacturer die on me except for Maxtors.

Re:Here's a solution: (1)

croddy (659025) | more than 11 years ago | (#5823826)

I have 3 20MB Seagate ST225s ... 20 years old ... still work like the day they were made. that being said, my two current HDs are a Western Digital, and a ... uh, Quantum Fireball. the name inspires such confidence!

Re:Here's a solution: (1)

tomoe27 (315555) | more than 11 years ago | (#5843444)

Same here, i don't think anybody ever made a more reliable drive than the old ST-225; so many people i know never had a single failure with that drive. About a decade ago i used to use an ST-225 and an ST-277R (65MB RLL drive).

I've had pretty good luck in the past with the seagate IDE drives as well.

How-to (3, Informative)

m0rph3us0 (549631) | more than 11 years ago | (#5818613)

Put 8 IDE controllers into a box (more than this maxes out the PCI bus bandwidth).
Write a bash script that checks dmesg for how many drives are in the system and invokes the following perl script for each drive.

Write a perl script that does this:
formats and partitions the drive to max size,
copies a kernel or some other large file onto the disk until it is full,
monitors syslog for IDE errors,
md5sums the files to make sure they all match,
reports an error if an MD5 doesn't match.

Unless you get hot-swap controllers you will have to reboot every time you want to test another batch of drives.

If you don't wish to write this perl script, I can be hired to do it for you.
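A rough Python sketch of that fill-and-verify loop (the filenames and the `mountpoint` argument are hypothetical; point it at the freshly formatted test drive's mount point). Syslog monitoring is left out:

```python
import hashlib
import os
import shutil

def md5_of(path, bufsize=1 << 20):
    """Hex MD5 of a file, read in chunks so huge files don't eat RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def fill_and_verify(mountpoint, source_file, max_copies=None):
    """Copy source_file onto the drive until it is full (or until
    max_copies is reached), then re-read every copy and compare
    checksums. Returns the list of copies whose MD5 did not match
    the source (an empty list means the drive passed)."""
    reference = md5_of(source_file)
    copies = []
    i = 0
    while max_copies is None or i < max_copies:
        dest = os.path.join(mountpoint, f"burnin_{i:06d}.bin")
        try:
            shutil.copyfile(source_file, dest)
        except OSError:        # disk full (ENOSPC): stop writing
            break
        copies.append(dest)
        i += 1
    return [p for p in copies if md5_of(p) != reference]
```

In a real burn-in rig you would run one of these per drive and leave `max_copies` unset so the drive actually fills; the cap exists so the loop can also be exercised on a directory that never fills up.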

Re:How-to (2, Informative)

m0rph3us0 (549631) | more than 11 years ago | (#5818617)

One more thing: don't put more than 1 drive on each channel, as it massively slows down the operation.

Re:How-to (0)

Anonymous Coward | more than 11 years ago | (#5820833)

The fastest sustained drive transfer I've seen is around 40MB/sec. Two drives on a channel will get you 80MB/sec. We'll throw in an extra 20MB/sec for bursts. That's 100MB/sec out of a possible 133MB/sec (assuming current controllers). The only slowing I see happening is latency, which for testing/burning-in is pretty useless.

Re:How-to (0)

Anonymous Coward | more than 11 years ago | (#5823513)

Not possible for IDE. With the master/slave configuration, only one drive can talk at a time, so the effective speed for each of two drives sharing a channel is half the single-drive maximum.

Re:How-to (3, Interesting)

torpor (458) | more than 11 years ago | (#5818691)

It's kinda stupid to only do *8* disks at a time, when you can easily do 64 ... using Firewire.

My advice would be to look into as many Firewire->IDE converters as your company can afford, and then use a Firewire-friendly OS to do the burn-in. Something like OSX or Linux would work very well in this case - actually, a cheap Apple machine would be perfect for this application.

There's no need to start things up in batches with Firewire, either. You can plug in a disk, and your 'stresser' program can be written in a way that it just picks up that disk, stresses it, and reports failures along the way.

Would be a very simple project. If you want specific help, feel free to contact me ...

Re:How-to (3, Informative)

vadim_t (324782) | more than 11 years ago | (#5818922)

Yeah, that'd be real fast. The bandwidth of Firewire is less than PCI. But okay, suppose you get several cards. In this case, the bandwidth is still 133MB/s. Assuming that you have all that for your disks, which doesn't include the network card, sound card, overhead, and whatever else you have, that gives 2MB/s per disk. Real fast.

Now, my motherboard supports PCI 64 at 66Mhz, with a bandwidth of 532MB/s, this would give 8.3MB/s per disk. Still not a lot, and you'd have to find a PCI 64 Firewire card with a lot of connectors, because at least my motherboard has only two slots.

My 80GB disk can do 40MB/s quite easily according to hdparm, so with my available bandwidth I could support about 13 drives, let's say 10 to compensate for overhead and other things on the bus. I think that 40MB/s is quite near the limit of Firewire, so I might need ATA instead. With two serial ATA cards with at least 5 connectors on each I suppose it'd be possible. Parallel ATA would also work, I guess, but the wiring would be really complicated with so many drives, especially because you want to have a drive per cable for maximum performance.
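As a quick check of the per-disk arithmetic in this comment (the bus figures are the poster's nominal numbers, not measurements):

```python
def per_disk_mb_s(bus_mb_s, disks):
    """Naive even split of total bus bandwidth across the disks."""
    return bus_mb_s / disks

print(round(per_disk_mb_s(133, 64), 1))  # plain 32-bit/33MHz PCI, 64 disks -> 2.1
print(round(per_disk_mb_s(532, 64), 1))  # 64-bit/66MHz PCI, 64 disks -> 8.3
print(int(532 / 40))                     # 40MB/s drives that fit on PCI-64 -> 13
```

These match the figures in the comment: roughly 2MB/s and 8.3MB/s per disk, and about 13 drives at full speed before the bus saturates.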

Re:How-to (2, Informative)

torpor (458) | more than 11 years ago | (#5823173)

How do you - at a consumer'ish level - fix the ugliness of the IDE cable, and non-hotswappable capabilities? I would think being able to load disks on and off without having to do a full system re-boot would be pretty advantageous... more important than speed concerns, anyway.

Can your PC really do sustained writes to >8 drives without getting into performance issues?

Actually, speed is a good point. I guess I hadn't thought about that so much as part of the setup, but I guess if you're exercising disks to make them fail, you want to do it as fast as possible... even though, in your end-system where the disks will be used, a Firewire stress-test may be more realistic in terms of disk behaviour.

Either way, there's tradeoffs.

Re:How-to (1)

shepd (155729) | more than 11 years ago | (#5823526)

>How do you - at a consumer'ish level - fix the ugliness of the IDE cable, and non-hotswappable capabilities?

You want the dangerous answer?

I used to do this, and never did blow my IDE interface, as some say I should have (try at your own risk).

Buy some IDE removable drive bays (one per drive) -- $20 each. Put the drives into the sleeves, and hook up a bay to your computer. Simply remove the drive sleeve and replace it with another when you want to. Obviously this drive can't have any active data on it, and it can't contain your operating system.

You'll have to check how your OS reacts. Windows 2000 & XP should lock solid for a while, then detect the new drive. With Linux, I think you'd have to unload and reload the IDE module (if it's modularized) if the drive doesn't have the same parameters as the old one. Yes, this sorta necessitates booting from a SCSI drive, as I think you'd have trouble removing the IDE module if there are IDE drives mounted.

Like I said, I've done it without ruining a motherboard (it still works today, even after many swaps), but it could be very dangerous to your hardware, so no warranties, no support, tough noogies.

Re:How-to (1)

vadim_t (324782) | more than 11 years ago | (#5823989)

At the consumer level I'd get serial ATA, which has a thin cable and is hot-swappable, although I think the drive has to support hot swap.

About speed, I'm not really sure, I only have 2 drives at the moment, and nothing in the PCI 64 slots, but at least the available bandwidth wouldn't be a bottleneck. Of course, it also depends on how fast those drives are. I'm pretty sure there are drives that are noticeably faster than mine.

Re:How-to (5, Informative)

dubl-u (51156) | more than 11 years ago | (#5819229)

That's a good approach. At 8 drives a day that's 250 a month for a station that you can build for well under $1000. I'd only add that you may need to tune this based on the failure modes that you are seeing.

For example, if it's just bad spots, then you'll want to do as many reads and writes as possible. For that, the fastest thing would be a little C program that reads and writes different patterns to the raw device linearly.

On the other hand, if the failures are tied to seeks, you'll want to write to semi-random locations on the device, to force maximum seeks. Or if you see a mix of both, then your best bet might be to follow m0rph3us0's plan, perhaps tweaking it a bit to better simulate normal filesystem efficiency (and you can just do bit compares rather than md5sums if CPU is an issue).

You should also keep an eye on heat issues. The burn-in should happen at temperatures that are like what they will be in the end systems. If you pack 8 seeking drives into some cases, they'll cook. If you leave them in the open air, they might not trigger the failures you are seeing in the field. Try to match measured operating case temperature.

Oh, and don't forget to measure whether this burn-in is really helping. Take stats now, and keep tracking causes of return. It could be that the drives are sensitive to noisy power or vibration or something else that your burn-in won't catch.
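The two failure-hunting passes described above (linear pattern writes for bad spots, random reads to force seeks) can be sketched roughly like this; the path argument is whatever your test drive shows up as (e.g. a /dev/hdX-style raw device), and the write pass destroys any data on it:

```python
import random

PATTERNS = [b"\x00", b"\xaa", b"\x55", b"\xff"]
BLOCK = 1 << 20  # write in 1 MiB chunks

def sequential_pattern_pass(path, length, pattern):
    """Write `pattern` linearly over the first `length` bytes,
    read it back, and return the number of mismatched blocks."""
    block = pattern * BLOCK
    bad = 0
    with open(path, "r+b") as f:
        for off in range(0, length, BLOCK):
            n = min(BLOCK, length - off)
            f.seek(off)
            f.write(block[:n])
        for off in range(0, length, BLOCK):
            n = min(BLOCK, length - off)
            f.seek(off)
            if f.read(n) != block[:n]:
                bad += 1
    return bad

def random_seek_pass(path, length, seeks=1000, seed=0):
    """Force head movement with single-sector reads at random offsets."""
    rng = random.Random(seed)
    with open(path, "rb") as f:
        for _ in range(seeks):
            f.seek(rng.randrange(0, max(1, length - 512)))
            f.read(512)
```

This is only a sketch under the assumption that the OS lets you open the raw device like a file (Linux does); a production harness would also use O_DIRECT-style unbuffered I/O so the page cache doesn't hide drive errors.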

Re:How-to (1)

SirTwitchALot (576315) | more than 11 years ago | (#5823837)

or maybe 8 hardware raid IDE controllers? Set them to mirror and you could possibly double the number of drives?

I used to repair disk drives... (5, Informative)

dfinster (65564) | more than 11 years ago | (#5818633)

In the late '80s I did hard drive repair and we used Wilson and Flexstar equipment for testing and burn-in. I can't find any links to Wilson equipment right now. Flexstar [] had a more extensible architecture and sounds like what you need. I've used the 2550 series RLL and ESDI Flexstar modules (this was the late '80s, we all thought that IDE was a passing fad at the time) and I can verify that the programming language for this equipment was very straightforward. The Flexstar equipment was very reliable. The only trouble we ever had was the cable ends that would naturally wear out from constant plugging and unplugging. We just replaced all the cable ends every two or three months.

Re:I used to repair disk drives... (1)

GnarlyNome (660878) | more than 11 years ago | (#5822613)

This problem sounds made for a BASIC Stamp (I think they're $35.00 each). Write the PBASIC program to exercise the drive, and use a whole bunch of them.

Western Digital and Maxtor are equal? (4, Informative)

Futurepower(R) (558542) | more than 11 years ago | (#5818658)

Slashdotters! If you don't find a story interesting, please don't complain and call Slashdot lame. Just ignore the story. Do you complain to your local newspaper that they should not publish recipes because you don't cook?

Comment about the Slashdot question: The wording of the question seems to imply that you believe that Maxtor and Western Digital hard drives have an equal failure rate. That has not been my experience. My experience has been that Western Digital are the most reliable hard drives. I'm very interested to know the experience of other readers.

Western Digital went through a bad stretch in which they experienced a problem that caused high failure rates several years ago, but that was cured.

It's shocking that you are in the computer business and knowingly shipping products with a 4% failure rate. That's very expensive and annoys the customers.

However, you are on the right track. Electronic products have what is called "infant failure". Most failures occur in the first week. During the first week (168 hours), the failure rate typically falls by a factor of 100 or even 10,000. At the end of one week most failures have already happened.

It's very easy to write a program that exercises a hard drive. Just copy files back and forth from folder to folder. It is easy to write a program that fills a hard drive with files, then erases them and starts again.

The Promise Ultra133 TX2 [] supports adding four more hard drives to the 4 already supported by modern motherboards. Eight is enough for one test computer, usually, because the power supply won't support more. Be careful to use delayed start. Maybe you will need more powerful power supplies than you normally use.

Make SURE that you are not having troubles with heat. Are your drives cool when they are installed in your product? High heat will cause high failure rate.

Re:Western Digital and Maxtor are equal? (2, Interesting)

GigsVT (208848) | more than 11 years ago | (#5819595)

Addendums to your message:
With a true 400 watt power supply, you can easily power 16 drives reliably. For reference, 8 drives pull a total of about 5-6 amps on 12v spin up, for about 1 second, then together use less than an amp on 12v, and very little 5v. This is based on testing with Maxtor 5400rpm drives, 7200 probably use a little more, and other brands may vary.

Power specs given in hard disk spec sheets are mostly boilerplate and do not reflect actual power consumption, the actual consumption is usually much lower than the spec.

ATA doesn't support delayed start, so your power supply has to be able to take the full startup. 3ware makes controllers that support up to 12 drives, and hot swap when you use their hot-swap bays. A setup like that isn't cheap compared to the Promise card, but it may be worth it if you are testing hundreds of disks.

Intel motherboards have "Hard Disk Pre-Delay". (1)

Futurepower(R) (558542) | more than 11 years ago | (#5820053)


Intel motherboards have a BIOS setting called "Hard Disk Pre-Delay". The system waits for the hard drives to spin before it tries to detect them.

Re:Intel motherboards have "Hard Disk Pre-Delay". (1)

damien_kane (519267) | more than 11 years ago | (#5820472)

What your parents are talking about, however, is not pre-delay detection.

They are talking about actually delaying the spin-up sequentially to save your system from the initial power draw of the drives all spinning up at once.
First one drive starts, drawing about an amp. Then, once it is spun-up, the next one starts. This continues for each of your hard drives.
In this way you do not have a 5-10 amp draw when you turn on your system, as that is a very good way to cook your power supply.

All power supplies have overcurrent protection. (1)

Futurepower(R) (558542) | more than 11 years ago | (#5821264)

Yes, exactly. However, cooking the power supply is not a problem, since all power supplies have overcurrent protection. The problem is that the BIOS begins its detection process before the power supply has stabilized enough to provide the correct voltage, due to the unusual load. When the detection fails, there is an error message. So the BIOS pre-delay can be helpful.

Re:All power supplies have overcurrent protection. (1)

GigsVT (208848) | more than 11 years ago | (#5821511)

In theory. :) As a recent hardware site review showed, many power supplies burned up at or before their rated power. They didn't review any of the good brands though, so I think you would be OK with one of the big three (enermax, antec, PC power+cooling)

In any case, it is more stressful on the components to surge at startup. It's not really much of an issue for servers that stay on all the time, since they probably go through the spin-up stress less than once a year, if that much.

Antec is a case maker. (1)

Futurepower(R) (558542) | more than 11 years ago | (#5822038)

Antec is a case maker. I have not been impressed with their power supplies. They are adequate, not wonderful, in my experience.

Re:Western Digital and Maxtor are equal? (1)

robhancock (136922) | more than 11 years ago | (#5820807)

I believe there have been some ATA drives which have some support for delayed start - i.e. there's a jumper setting to delay spinup until they're initialized by the BIOS.

Re:Western Digital and Maxtor are equal? (1)

xsbellx (94649) | more than 11 years ago | (#5821649)

Sorry if this sounds really dumb but....

Why can't you use multiple power supplies for the drives themselves? As far as I know, there is no requirement for one power supply to feed both the disks and the system. This would eliminate the need for a "spin-up" option in the BIOS, as the drives to be tested would already be powered up.

For example, you could easily have two additional, external power supplies and plug four drives into each. Simply power the drives up first (count to ten or whatever), then the system.

Re:Western Digital and Maxtor are equal? (2, Informative)

GigsVT (208848) | more than 11 years ago | (#5821729)

You can do that, just have to make sure the grounds don't float and are tied together.

As far as powering them up in a sequence, there is no need to do that really; you can just turn on all the power supplies with the same switch. That's a little trickier to do with ATX, but cyberguys sells an adapter to make an ATX power supply act like an AT one, with an external switch and an AT-style motherboard connector. Or, since you aren't using the motherboard connector, you could just send the power supply the on signal the same way the motherboard does (read the ATX spec).

Most redundant power supply systems are really parallel power supply systems, so if you really need something like 800 watts, just look for a case that can take redundant supplies and put two 400s in, or two 480s, or whatever you can afford. :)

4%? (1)

Fweeky (41046) | more than 11 years ago | (#5818780)

Over what period? If that's over anything less than five years, I'd perhaps be looking towards the conditions the drives are in; are they well ventilated, or near any hot components? Keeping a drive cool can reduce failure rate by ~30% (based on a study IBM did on their SCSI drives); keeping them too hot can drastically increase it. Don't underestimate the effects a bit of active cooling on a drive could have on reducing early failures too.

After that, I'd look at maybe trying some different manufacturers. Seagate, for instance, have a very good reputation for low failure rates.

Re:4%? (1)

0x0d0a (568518) | more than 11 years ago | (#5819666)

Over what period? If that's over anything less than five years, I'd perhaps be looking towards the conditions the drives are in; are they well ventilated, or near any hot components?

You haven't bought a consumer IDE hard drive in the last few years, have you? Quality has gone to the dogs.

OCTET Machine (1)

SyFryer (173279) | more than 11 years ago | (#5818835)

I used to be involved with a manufacturer, and we used something called an Octet machine for mastering IDE drives for desktops and laptop computers.

IIRC, there was a feature to test the disks as they were being mastered, but we never ran the machine in this mode due to the time it took to do it.

You could do 8 disks at a time, hence the name. I did a Google search, but couldn't find you a manufacturer.

It looks like an elongated cash register, with a padded area to seat the drives. It can be connected to a PC where other programs can control it, rather than using the limited software built into the machine.

Around 1995, this machine cost about 800 pounds (sterling).

Re:OCTET Machine (0)

Anonymous Coward | more than 11 years ago | (#5828552)

Octet []

Is 576 drives at once enough for you? (0, Flamebait)

Rick the Red (307103) | more than 11 years ago | (#5818953)

Why not do what this guy [] did?

Sheesh -- an Ask Slashdot that's already been answered on Slashdot! Not exactly a duplicate post, but apparently the Editors aren't the only ones who don't read /.

Case solution (3, Informative)

lusername (248616) | more than 11 years ago | (#5819517)

We built some disk arrays using a front-loading IDE case with drive trays. This one is pretty pricey but it's _nice_ hardware: -IDE

That, plus a couple RAID cards (like 3ware's new 12-port cards) in a 64/66 PCI slot and bonnie++ would do a pretty good job of burning in your drives. You could flip drives in and out in a few seconds.

Re:Case solution (0)

Anonymous Coward | more than 11 years ago | (#5820250)

3WARE makes excellent products!

Why not (2, Interesting)

Froze (398171) | more than 11 years ago | (#5820246)

buy a few IDE RAID cards and set them up as RAID 1? This implements a full mirror of data on the raided devices. Then perform burn-in on the RAID device.

Note: I have never implemented RAID and am not an expert, so this idea would need to be independently verified.

use iometer and an adapter (1)

Dirttorpedo (153764) | more than 11 years ago | (#5820760)

Get several IDE adapters and run the cables out the back of the box. Use another power supply to spin the drives. I do not have any ideas about hot swapping. You could do a cheap environmental chamber with a cardboard box and no fans, to see how the drives do without any ventilation. Then get iometer and write some tests. Be sure to do several passes with different byte patterns (00, AA, 55, FF) over the whole media. Also throw in a large block of random accesses, varying both length and location. You should also do a butterfly-pattern write/read: FIRST LBA, LAST LBA, FIRST+1, LAST-1. Loop and let it run as long as necessary to make you happy. The SCSI guys will do this kind of thing to drives for weeks to months non-stop to figure MTBF and find other problems. They have specialized software solutions, but IOmeter should do unless you want to learn how to code direct disk accesses rather than going through the filesystem.
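The butterfly seek pattern mentioned here (first LBA, last LBA, first+1, last-1, ...) is simple to generate; a small sketch:

```python
def butterfly(first_lba, last_lba):
    """Yield LBAs alternating between the two ends of the range and
    walking inward: first, last, first+1, last-1, ... This forces
    near-full-stroke seeks on every other access."""
    lo, hi = first_lba, last_lba
    while lo <= hi:
        yield lo
        if lo != hi:        # avoid yielding the middle LBA twice
            yield hi
        lo += 1
        hi -= 1

# e.g. list(butterfly(0, 5)) -> [0, 5, 1, 4, 2, 3]
```

Feeding these offsets (times the sector size) to seek-and-read calls gives the worst-case head travel the poster describes.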

3Ware card + Removable Hard Drive Bays (2, Interesting)

MetricT (128876) | more than 11 years ago | (#5820841)

We have several IDE fileservers at work. Each box is equipped with two 3Ware 8-port controllers, and 16 removable drive bays. Stick a 17th drive in there as an OS drive, install Linux, and run benchmark of your choice. Once you're happy with the drives, just pull the bays and swap in new drives.

Re:3Ware card + Removable Hard Drive Bays (2, Informative)

afidel (530433) | more than 11 years ago | (#5821790)

Yep and use software from Extreme Protocol Solutions or someone like them. Yes you can put together your own testing software, but why bother when there are others out there who have already gone through all the variables and problems. They explicitly support 3Ware cards for IDE testing see This [] link.

Ask one of the big boys... (1)

OnyxRaven (9906) | more than 11 years ago | (#5821927)

A friend of mine interned at the Seagate R&D plant in Longmont, Colorado last year, doing testing runs for hard drive series, all IDE. They had BIG refrigerator-looking things that did automatic testing (or actually any other low-level function) based on commands from a terminal.

If I'm not mistaken, they just upgraded their cabinets, so it is likely that either there are surplus cabinets around from the various manufacturers, or there's someone still making them. They might be a bit expensive, but if you're looking at throughput this might be the key.

Give one of em a ring, find out what they can recommend. I'm sure you can find someone if you just look.

Use old AT power supplies (1)

unitron (5733) | more than 11 years ago | (#5822905)

You can power the drives with old AT power supplies which can be had a lot cheaper than ATX supplies these days.

Here is a few ideas for you (1)

infonography (566403) | more than 11 years ago | (#5823137)

FireWire external cases; many, many disks can hang off them. FireWire allows you to daisy-chain up to 63 external devices.

USB: also a usable plan.

Sadly, you may need to use Windows, as Solaris just isn't right and Linux has horrible spaghetti code for this stuff. Windows, for all its oh-so-many faults, will let you get this up quickest.

Hitachi has an OEM tool to do this... (1)

lewiscr (3314) | more than 11 years ago | (#5824836)

I was looking for some support tools for my Deskstar yesterday, and ran across this tool for OEMs from Hitachi.

Hitachi DDD-SI []

Looking at the user's guide, it looks like you could use its basic features on non-IBM/Hitachi drives. You might also want to check out the other manufacturers' sites and see if they've got something similar.

Great way... (0)

rmezzari (245108) | more than 11 years ago | (#5830767)

... of testing HDs is to dd if=/dev/zero of=/dev/hdX bs=1. This is one of the best stress tests anyone can easily deploy. I learned it from an old friend, Mr. Nesc.