
Four X25-E Extreme SSDs Combined In Hardware RAID

kdawson posted more than 5 years ago | from the that's-no-hard-drive dept.

Data Storage

theraindog writes "Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes. That, combined with a rackmount-friendly 2.5" form factor and low power consumption, makes the drive particularly appealing for enterprise RAID. So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results."


228 comments

Oh good (-1, Troll)

LordKaT (619540) | more than 5 years ago | (#26628201)

I'll be sure to do that, and replace them every 5 years when they run out of write operations.

Re:Oh good (4, Insightful)

spazdor (902907) | more than 5 years ago | (#26628265)

'cause regular hard drives usually survive 5 years in an enterprise environment, yep yep.

Re:Oh good (4, Insightful)

LordKaT (619540) | more than 5 years ago | (#26628289)

'cause SSD's don't cost $300-$500 more than their spindle counterparts, yep yep.

Re:Oh good (4, Informative)

FauxPasIII (75900) | more than 5 years ago | (#26628467)

> 'cause SSD's don't cost $300-$500 more than their spindle counterparts, yep yep.

Hint: Enterprise storage purchasing often looks at dollars/IOPS rather than dollars/GB.

Re:Oh good (2, Funny)

Anonymous Coward | more than 5 years ago | (#26628605)

"Enterprise storage purchasing often looks at dollars/IOPS rather than dollars/GB."

Which is good news, as this Intel Slashdot advert says it's "compelling value if you're looking at performance per dollar rather than gigabytes."

Re:Oh good (0)

Anonymous Coward | more than 5 years ago | (#26628485)

In 5 years they won't.

Re:Oh good (3, Interesting)

CMonk (20789) | more than 5 years ago | (#26628901)

I think you're comparing against SATA drives. People who worry about IOPS are normally using FC drives, which are much more closely aligned in price with SSDs. (Granted, it's been a while since I was in the market for FC drives.)

Re:Oh good (1)

CyprusBlue113 (1294000) | more than 5 years ago | (#26628295)

Umm.... yes they do. You need to stop shopping for drives on Ebay.

Re:Oh good (0)

Anonymous Coward | more than 5 years ago | (#26628317)

woosh!

Re:Oh good (0)

Anonymous Coward | more than 5 years ago | (#26628945)

Double woosh!

Re:Oh good (4, Informative)

spazdor (902907) | more than 5 years ago | (#26628447)

Your enterprise environment must not be hitting its drives very hard.

Where SSDs shine is in disk operations that are usually bottlenecked by seek times; a big, unwieldy database that gets a lot of writes and no downtime, for instance, is happiest when it lives on a striped SSD array.

Coincidentally, this is exactly the type of workload which is most likely to shorten a magnetic drive's life.

Re:Oh good (4, Insightful)

poot_rootbeer (188613) | more than 5 years ago | (#26628319)

I'll be sure to do that, and replace them every 5 years when they run out of write operations.

Winchester drives, on the other hand, use a time-honored complex system of delicate moving parts, and last virtually forever. They certainly do not start experiencing sudden failures if kept in continuous service for more than 5 years.

Re:Oh good (2, Interesting)

bluefoxlucid (723572) | more than 5 years ago | (#26628747)

All modern hard drives are Winchester drives; Winchester drives are just the first iteration, made by IBM, who figured they'd ship two 30MB platters and name the hard drive after the Winchester 30-30 rifle. Who the hell modded you insightful, especially for claiming a system of delicate moving parts lasts virtually forever...

Re:Oh good (1)

JCSoRocks (1142053) | more than 5 years ago | (#26628797)

Who the hell modded you insightful, especially for claiming a system of delicate moving parts lasts virtually forever...

What about watches? /sarcasm

Re:Oh good (1)

bluefoxlucid (723572) | more than 5 years ago | (#26628971)

They break down eventually... watchmakers fix them. A coworker just had her watch (about 10 years old) gutted and re-geared. Very nice hand-crafted piece.

Re:Oh good (0)

Anonymous Coward | more than 5 years ago | (#26629147)

Which require repair / reconditioning from time to time - no pun intended... okay - pun intended...

Re:Oh good (5, Informative)

ChienAndalu (1293930) | more than 5 years ago | (#26628399)

Make that 228 years [intel.com].

Life expectancy: 2 million hours Mean Time Between Failures (MTBF)

Hint: learn about "wear leveling"

Re:Oh good (1)

msu320 (1084789) | more than 5 years ago | (#26628897)

Make that 228 years.

Life expectancy: 2 million hours Mean Time Between Failures (MTBF)

Hint: learn about "wear leveling"

Hint: you should learn what is meant by "Mean Time".
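To make the "mean time" hint concrete: an MTBF of 2 million hours does not mean any individual drive is expected to run for 228 years; under the usual exponential failure model it is a population-level failure rate. A minimal sketch in Python (assuming that exponential model, which the Wikipedia excerpt quoted below also criticizes):

    import math

    MTBF_HOURS = 2_000_000           # Intel's quoted figure
    HOURS_PER_YEAR = 24 * 365        # continuous operation

    # The naive reading: 2M hours / 8760 hours/year ~= 228 years
    print(MTBF_HOURS / HOURS_PER_YEAR)           # ~228.3

    # Exponential model: probability one drive fails within t hours
    def p_fail(t_hours, mtbf=MTBF_HOURS):
        return 1 - math.exp(-t_hours / mtbf)

    five_years = 5 * HOURS_PER_YEAR
    print(p_fail(five_years))                    # ~0.022, i.e. ~2.2% in 5 years

    # So a 100-drive array still expects roughly 2 failures over
    # five years, despite the 228-year "life expectancy".
    print(100 * p_fail(five_years))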

Re:Oh good (1)

Piranhaa (672441) | more than 5 years ago | (#26628943)

MTBF is a highly inaccurate way to estimate how long you should expect a drive to live. The whole Seagate fiasco is a prime example of why NOT to take it at face value.

It can be a good ballpark figure for differentiating enterprise-class drives from consumer drives, but it should NOT be treated as an expected lifespan.

There are too many variables to take into account: the temperature surrounding the drive, how many hours a day it's powered on, how many writes it sees, how clean its supply voltage is, etc.

More about it here: http://en.wikipedia.org/wiki/Mean_time_between_failures [wikipedia.org]

Problems with MTBF

As of 1995, the use of MTBF in the aeronautical industry (and others) has been called into question due to the inaccuracy of its application to real systems and the nature of the culture which it engenders. Many component MTBFs are given in databases, and often these values are very inaccurate. This has led to the negative exponential distribution being used much more than it should have been. Some estimates say that only 40% of components have failure rates described by this. It has also been corrupted into the notion of an "acceptable" level of failures, which removes the desire to get to the root cause of a problem and take measures to delete it. The British Royal Air Force is looking at other methods to describe reliability, such as maintenance-free operating period (MFOP). Similarly, the National Aeronautics and Space Administration (NASA) is pursuing time-to-failure research using scenario- and condition-based methods derived from the field of prognostics.

Re:Oh good (1, Interesting)

Anonymous Coward | more than 5 years ago | (#26629071)

Does this take into account things like half the drive being filled with stuff that doesn't change, or does it assume nothing is on the drive and you just keep writing to it for no reason? I.e., would that drop those MTBF figures by half? I've tried to wrap my brain around wear leveling and can't seem to grasp it.

Re:Oh good (1)

djtachyon (975314) | more than 5 years ago | (#26629331)

Let me know when MRAM [wikipedia.org] gets to gigabyte sizes. With unlimited writes and running at SRAM speeds, it could be the future.

If I keep my current 15K drives that long (3, Interesting)

Shivetya (243324) | more than 5 years ago | (#26628419)

I will be surprised.

See, in the enterprise environment I work in, the majority of our big hardware is leased. I am quite willing to use what I can to maintain performance and reliability. That being said, my system is built entirely on 15K drives of various sizes. I am not worried about the five years or so of read/write life that SSDs have; all I want to see is a track record. I expect to replace most of the drives I have now within five years, so this "five year limit" many like to toss out is immaterial to me. Reliability over that lifetime is of more importance.

Besides, the nice benefit of SSD drives is I don't need special enclosures (read: ones that can handle the torque these puppies can put out)

Re:Oh good (1, Informative)

Anonymous Coward | more than 5 years ago | (#26628573)

So are SSDs ready for prime time?
Last year I attended IBM's Sydney Technical Conference and listened to a presentation by ?????, one of the lead designers of IBM's 3950 chipset.

Part of the presentation was on SSD technology. While viewing the graphs showing SSDs closing the gap in cents/gigabyte, someone asked the pointed question: "Is there any future in spinning-platter technology?" The presenter actually stopped for well over half a minute (an age in public speaking) before replying carefully, "I do not speak officially for IBM now, but I can see no future at all for spinning-platter technology. Not even for bulk storage."

As others have noted, once SSDs are available in a price-competitive form at capacities of around 150GB, traditional drives will immediately and permanently exit the notebook market.

And I can't wait for that to happen.

Re:Oh good (1)

Piranhaa (672441) | more than 5 years ago | (#26628741)

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=4 [anandtech.com]

"Given the 100GB per day x 5 year lifespan of Intel's MLC SSDs, there's no cause for concern from a data reliability perspective for the desktop/notebook usage case. High load transactional database servers could easily outlast the lifespan of MLC flash and that's where SLC is really aimed at. These days the MLC vs. SLC debate is more about performance, but as you'll soon see - Intel has redefined what to expect from an MLC drive."

So: 100GB/day/drive × 4 drives = 400GB/day; over 5 years (5 × 365 = 1825 days), that's 400GB × 1825 = 730TB of data written. If you seriously go through THAT much data, you are either a pirate or you actually own your own movie studio.

Obviously these numbers can change based on case temperature, wear leveling, etc. HOWEVER, they are what Intel states the drives can handle...
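A quick sanity check of that arithmetic (a throwaway sketch; the 100GB/day endurance figure is Intel's quoted MLC rating from the AnandTech excerpt above):

    # Intel's quoted endurance: 100GB of writes per day for 5 years, per drive
    GB_PER_DAY_PER_DRIVE = 100
    DRIVES = 4
    DAYS = 5 * 365                   # 1825 days

    total_gb = GB_PER_DAY_PER_DRIVE * DRIVES * DAYS
    print(total_gb)                  # 730000 GB
    print(total_gb / 1000, "TB")     # 730.0 TB written across the array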

Re:Oh good (0)

Anonymous Coward | more than 5 years ago | (#26629613)

The article you point to covers the MLC NAND, which is what you've quoted. The NAND used in the X25-E series is SLC, which has anywhere from 10x to 100x the write endurance.

Solid State Slashdot Drive. (2, Funny)

Ostracus (1354233) | more than 5 years ago | (#26628213)

"So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results.""

So in other words, I'll get First Post much faster once Slashdot switches over.

Actually, that RAID card seems more interesting (4, Interesting)

the_humeister (922869) | more than 5 years ago | (#26628279)

A 1.2 GHz processor with 256MB of DDR2 memory? Holy crap! That's faster than my new Celeron 220! And the perennial question: can this thing run Linux?

Re:Actually, that RAID card seems more interesting (2, Informative)

Wonko the Sane (25252) | more than 5 years ago | (#26628473)

That RAID card was the bottleneck. It can't support 4x the raw transfer rate of a single drive.

Exactly (1)

Ritz_Just_Ritz (883997) | more than 5 years ago | (#26628941)

I suspect the performance would have been a LOT better if they'd used something like the 3Ware 9690SA. 3Ware is also a LOT more Linux friendly.

Cheers,

Re:Exactly (1)

MBCook (132727) | more than 5 years ago | (#26629741)

Of course, they ran all their tests in Windows. I wonder how much of the results in some of the tests (like program installation) are really due to how fast NTFS can handle lots of little files and not due to the drives they were testing.

It would have been nice to see some quick tests under Linux with ext3 / XFS / reiser / ext4 / btrfs / flavor_of_the_month just to see if that was really the drives or a vastly sub-optimal access pattern.

Re:Actually, that RAID card seems more interesting (4, Insightful)

default luser (529332) | more than 5 years ago | (#26629681)

Actually, I felt that the limiting factor was probably the craptastic single-core Pentium 4 EE [techreport.com] they used to run all these benchmarks.

What, you shove thousands of dollars worth of I/O into a system and run it through its paces with a CPU that sucked in 2005? I'm not surprised at all that most tests showed very little improvement with the RAID.

What I want to see (5, Interesting)

XanC (644172) | more than 5 years ago | (#26628281)

Is 4 of these in a RAID-1, running a seek-heavy database. Nobody does this benchmark, unfortunately.

Re:What I want to see (2, Interesting)

Aqualung812 (959532) | more than 5 years ago | (#26628767)

Why not run RAID-5 (or 50 or 15) if it's seek-heavy? I thought RAID-1 was only used when you had to deal with a lot of writes. Writes are slower on 5 than on 1, but 5 is much faster for reads.

Re:What I want to see (3, Informative)

Anonymous Coward | more than 5 years ago | (#26629235)

RAID5 has terrible random write performance, because every small write forces a parity update on top of the data write; it's VERY easy to saturate traditional disks' random write capabilities with RAID5/6. So it's rightly avoided like the plague for heavily hit databases.

I'm not certain how much of the performance hit is due to disk latencies, so I feel it would be an interesting test to also see RAID5 database performance.

Also, RAID1 (or 10, to be fairer when comparing with RAID5) in a highly saturated environment: reading data should do marginally better than RAID5, since you don't lose a disk to parity, and any RAID controller worth its salt will send independent reads (or round-robin your reads) to both disks.

Then there is also that whole disk-failure thing. It's a huge performance hit to lose a disk in RAID5. For that reason alone, in a heavily hit environment it would probably be best to avoid it.

Disk failure is not an IF but a WHEN. I don't care what manufacturers say about MTBFs.

Re:What I want to see (3, Informative)

XanC (644172) | more than 5 years ago | (#26629365)

RAID5's write performance is so awful because it requires so much reading to do a write.

A small write means first reading back the old data and parity blocks (or the rest of the stripe) so the parity can be recalculated. Note that it's not the calculation that's slow, it's fetching the data for it. So that's multiple disk operations to service a single write.

A write on RAID1 requires writing to all the drives, but only writing. It's a single pass.

RAID1 is definitely faster (or as fast) for seek-heavy, high-concurrency loads, because each drive can be pulling up a different piece of data simultaneously.
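A back-of-the-envelope sketch of the physical I/O cost per logical random write being described (assuming the classic read-modify-write path for a small RAID5 write; stripe-wide reconstruct writes and caching ignored):

    def raid5_small_write_ios():
        # Read-modify-write: read old data block + read old parity,
        # then write new data block + write new parity.
        reads, writes = 2, 2
        return reads + writes        # 4 physical I/Os, reads must finish first

    def raid1_write_ios(mirrors=2):
        # RAID1 just writes the block to every mirror; no reads needed.
        return mirrors               # 2 physical I/Os, issued in parallel

    print(raid5_small_write_ios())   # 4
    print(raid1_write_ios())         # 2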

Re:What I want to see (1)

jgtg32a (1173373) | more than 5 years ago | (#26628881)

A seek-heavy DB would be a great benchmark, but why RAID 1? It doesn't give any performance boost; it would just be reading off the primary the entire time.

Do you mean RAID 0?

Re:What I want to see (3, Informative)

afidel (530433) | more than 5 years ago | (#26629011)

Good controllers do read interleaving where every other batch of reads is dispatched to a separate drive.

Re:What I want to see (0)

Anonymous Coward | more than 5 years ago | (#26629025)

This person is asking for information about a real world usage scenario, for a heavily hit database.

Only a fool would use RAID0 in a production environment--even with clustering or log shipping.

Re:What I want to see (1)

XanC (644172) | more than 5 years ago | (#26629399)

It's not simply a matter of interleaving; independent requests can be executed simultaneously. Read performance, especially seeking, can scale linearly with the number of drives in a RAID1.
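A toy model of that scaling argument (a hypothetical dispatcher, not any particular controller's implementation): with N mirrors and enough independent requests in flight, every drive can be seeking a different block at once.

    def mirror_read_iops(n_mirrors, per_drive_iops, requests_in_flight):
        # Every mirror holds a full copy, so any read can go to any drive;
        # throughput scales with however many drives can be kept busy.
        busy_drives = min(n_mirrors, requests_in_flight)
        return busy_drives * per_drive_iops

    print(mirror_read_iops(1, 180, 32))   # 180 IOPS: single drive
    print(mirror_read_iops(4, 180, 32))   # 720 IOPS: ~linear with 4 mirrors
    print(mirror_read_iops(4, 180, 1))    # 180 IOPS: one request at a time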

paging benefits? (1)

Neotrantor (597070) | more than 5 years ago | (#26628339)

I wonder what sort of benefits we could see if an SSD had a paging file on it?

Re:paging benefits? (3, Insightful)

tom17 (659054) | more than 5 years ago | (#26628433)

I really don't get this obsession with page files these days. Say you have 4GB of RAM and a 4GB page file. Memory is cheap these days, so rather than using 4GB of (relatively slow) SSD, why not just get another 4GB of RAM?

Re:paging benefits? (1)

xenolion (1371363) | more than 5 years ago | (#26628523)

I think this all comes down to the program or database you're running; a pagefile may be more useful than RAM alone. It all depends on the app you're using and how it works. Some things run better with swap; others just want RAM.

Re:paging benefits? (3, Informative)

guruevi (827432) | more than 5 years ago | (#26628629)

SSDs shouldn't be used for paging. That would get very expensive (even with wear leveling) if you have a minimal amount of RAM (say 256MB) to run large (say 16GB) operations. It would also be slow, since you have the overhead of whatever bus your hard drive/SSD is connected to.

Technically, hard drives aren't supposed to be used for paging either; it's just a cheap and simple trick so people don't have to pay a lot for (expensive) RAM, and so their programs don't crash when they occasionally run out of it. If your system is paging heavily, it's better and faster to add more RAM.

Anecdote: I worked at a place once where cheap ($500) hardware was sold as dedicated SQL/IIS servers (you could fit 10 of them in 5U), and a lot of customers thought they could run whatever they wanted on them (Microsoft ran MSN for a whole country on one for a while), but they only supported a maximum of 2GB RAM (4GB according to the BIOS, but the modules back then were too expensive). Of course the PHB just said: let them swap. And besides the heavy slowdowns, they ran fairly fine. Well, those heavy users all crashed their software RAIDs in less than a year (the heavy load made Windows get the RAID out of sync, and then the first hard drive would fail). The temperature was fine, but the constant swapping was simply too much for the cheap hard drives (Maxtor and Seagate), and they all failed.

Re:paging benefits? (1, Informative)

bluefoxlucid (723572) | more than 5 years ago | (#26628905)

SSDs shouldn't be used for paging. That would get very expensive (even with wear leveling) if you have a minimal amount of RAM (say 256MB) to run large (say 16GB) operations. It would also be slow, since you have the overhead of whatever bus your hard drive/SSD is connected to.

You talk like you know what you're talking about, but then the reader realizes you don't understand what happens when the CPU spends 99% of its life in a wait state waiting for paging operations. Swap is not meant to be a high-intensity workload; swap workload increases six orders of magnitude faster than CPU workload, meaning that once you start swapping, you spend most of your time swapping.

Because the disk is external, this cost grows with CPU speed: a swap operation taking 1ms costs 1,000,000 cycles on a 1GHz CPU but 10,000,000 cycles on a 10GHz CPU. Triggering a 4-9ms seek on a 2.0GHz CPU (modern AMD) is a disaster; trigger these continuously, every 10ms, and you halve your CPU performance while completing only 50 operations a second. Writes cost substantially more than reads, more like 20-30ms, at which point you notice even swapping 2-3 times a second. AND ALL THAT SEEKING WILL KILL HARD DRIVES.
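The cycle arithmetic above, spelled out (a rough model using the parent's figures, not measurements):

    def cycles_lost(stall_seconds, clock_hz):
        # A CPU blocked on I/O burns clock_hz * seconds doing nothing useful.
        return stall_seconds * clock_hz

    print(f"{cycles_lost(0.001, 1e9):,.0f}")    # 1ms stall at 1GHz: 1,000,000 cycles
    print(f"{cycles_lost(0.001, 10e9):,.0f}")   # same 1ms at 10GHz: 10,000,000 cycles

    # One ~10ms seek for every ~10ms of useful work = half your throughput,
    # and only ~50 seeks actually completed per second.
    work, seek = 0.010, 0.010
    print(work / (work + seek))                 # 0.5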

Re:paging benefits? (1)

petermgreen (876956) | more than 5 years ago | (#26628739)

Desktop memory is indeed cheap.

However, I don't think I've seen a desktop board that can go over 8 gigabytes, and most top out at four. Server boards can go higher, BUT:

* Intel Xeon boards require FB-DIMMs, which are expensive.
* AMD Opteron boards can use ordinary DDR2, but the CPU performance sucks compared to the aforementioned Xeons.

Re:paging benefits? (2, Interesting)

Wonko the Sane (25252) | more than 5 years ago | (#26628769)

* AMD Opteron boards can use ordinary DDR2, but the CPU performance sucks compared to the aforementioned Xeons.

If you go multiprocessor (not multicore) then you get much higher memory bandwidth (NUMA). Sometimes that matters more than CPU power.

Re:paging benefits? (2, Insightful)

Cthefuture (665326) | more than 5 years ago | (#26629205)

I don't think anyone should be using a page file at all if they have 4GB or more of RAM. Maybe even 2GB. It just doesn't make sense. With that much memory, what good is a 512MB page file really going to do? And if you're swapping more than 512MB of RAM to disk, your machine is going to be thrashing like mad and unusable anyway.

It's stupid that many OSes allocate 2 times your RAM as a page file. Are you really going to swap 8GB of RAM to disk? I mean seriously, that would be unusable.

Even when I had 2GB of RAM I never used a swap file, and now with low-end machines running 8+ GB (only about $100 of RAM), page files just don't make sense any more.

Re:paging benefits? (2, Interesting)

Vectronic (1221470) | more than 5 years ago | (#26628479)

I'm no expert, but wouldn't that be a redundant statistic? If it handles normal reads/writes faster than a disk drive, couldn't you presume paging would be faster as well?

Although it would be interesting to see a RAM-less PC try to run on SSDs only... somehow using normal data read/write and memory read/write on the same SSD (if that's possible). Guess that's what we'll end up with eventually anyway, where your amount of memory is the amount of free space you have on your SSD, no longer separate components.

Re:paging benefits? (1)

bluefoxlucid (723572) | more than 5 years ago | (#26628939)

Most people have this mistaken belief that SWAP is interacted with as often as RAM (hundreds or thousands of times a second, at least; RAM is interacted with sometimes hundreds of thousands of times per second). They think swap is an actual extension of RAM, not a long-term slow storage shelf.

Fusion-io's iodrive is faster!!! (-1, Troll)

Anonymous Coward | more than 5 years ago | (#26628369)

Fusion-io's iodrive blows this thing away... this is junk.

Re:Fusion-io's iodrive is faster!!! (1)

denis-The-menace (471988) | more than 5 years ago | (#26628729)

[citation needed]

Re:Fusion-io's iodrive is faster!!! (0)

Anonymous Coward | more than 5 years ago | (#26629715)

[citation needed]

I was not the original poster, and I can't cite anything, but I can quote some numbers from our testing:

I can't remember the hardware, but memory or CPU was never the bottleneck (easily seen with iostat -x 1).

Repair of a large MySQL table:
14 SAS disks in RAID-10: ~45 minutes.
2 X25-E in RAID-1: ~30 minutes.
1 IODrive: ~2 minutes.

QPS (70% read, 30% write) for a database consisting mostly of MyISAM tables:
14 SAS disks in RAID-10: ~4,500.
2 X25-E in RAID-1: ~6,000.
1 IODrive: ~15,000.

This is not very scientific, but our numbers confirm it.

I have a feeling that Intel's Extreme disks suffer the same problem as MTron's higher-end disks, namely very bad random write speeds.

Redundant Array of what? (4, Funny)

telchine (719345) | more than 5 years ago | (#26628441)

This is a very expensive solution. What part of Redundant Array of Inexpensive Disks don't they understand?

Re:Redundant Array of what? (1)

evilbessie (873633) | more than 5 years ago | (#26628495)

RAID 0 is not redundant, they are not really 'disks' any more, and they could be independent disks rather than inexpensive. Sorry, I know you were trying to be funny, but I felt you could have more fully reduced the issue.

Re:Redundant Array of what? (1)

wastedlife (1319259) | more than 5 years ago | (#26629601)

I've never understood why they call it RAID 0. Striped Array would suffice. Why is it a Redundant Array of In(expensive|dependent) Disks 0(as in NOT)?

Re:Redundant Array of what? (0)

Anonymous Coward | more than 5 years ago | (#26628513)


lurn 2 akcronym: The "I" in RAID can also stand for "Independent"

Re:Redundant Array of what? (3, Funny)

Ioldanach (88584) | more than 5 years ago | (#26628989)

That's right. Marketing switched "Inexpensive" for "Independent" years after the term was coined, because they couldn't convince people to buy their non-Inexpensive disks for RAID use as easily under the old meaning.

Re:Redundant Array of what? (1)

Artraze (600366) | more than 5 years ago | (#26629583)

Yes, well, that, or maybe it's just that the notion of "expensive" disks is gone. These days you pay a tiny amount per GB, which usually goes down with increasing size. Oh sure, you may pay a bit more at the top end, but it's not much.

It used to be that you could get huge drives. I'm not just talking about the fact that they would store like 20+MB; they were also physically huge. I used to have one that was 5 platters and two 3.5" slots high (though my memory is fuzzy). These suckers were EXPENSIVE, multiple times what the same capacity would cost across standard disks; thus the creation of RAID.

Since RAID became popular, the expensive disk died out, leaving us with only 'inexpensive' ones. Yes, WD Raptors are pricey, but they've got the performance to back that up (usually). And yes, "RAID" versions are usually slightly (10%) more expensive, but they usually come with a longer warranty and are modified to work better in a RAID (which is a smaller market).

In short, your cynicism is misplaced. Expensive disks don't exist anymore, so it only makes sense to update the acronym.

Re:Redundant Array of what? (1, Insightful)

Anonymous Coward | more than 5 years ago | (#26629973)

If you think Raptors are pricey, then learn about SAS disks before posting your layman comment about what is expensive and what isn't.

Re:Redundant Array of what? (2, Informative)

grub (11606) | more than 5 years ago | (#26628589)


Independent disks. And remember that some high-end SCSI or Fibre Channel RAIDs have never fit the antiquated "Inexpensive" bit.

Re:Redundant Array of what? (4, Informative)

afidel (530433) | more than 5 years ago | (#26629087)

Dude, 4 of these drives can keep up with my 110-spindle FC SAN segment for IOPS. Here's a hint: 110 drives plus SAN controllers is about two orders of magnitude more expensive than 4 SSDs and a RAID card. If you need IOPS (say, for the log directory on a DB server), these drives are hard to beat. The applications may be niche, but they certainly DO exist.
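Rough numbers behind that claim (assuming ~180 random IOPS per 15K FC spindle, a common rule of thumb, and the >35,000 random-read spec for the X25-E quoted elsewhere in this thread):

    FC_SPINDLES = 110
    IOPS_PER_15K_SPINDLE = 180          # rule-of-thumb figure, not a measurement

    X25E_DRIVES = 4
    X25E_RANDOM_READ_IOPS = 35_000      # Intel's spec-sheet minimum

    print(FC_SPINDLES * IOPS_PER_15K_SPINDLE)   # ~19,800 IOPS from the SAN segment
    print(X25E_DRIVES * X25E_RANDOM_READ_IOPS)  # ~140,000 IOPS raw from 4 SSDs

    # Even at the much lower random-write spec (>3,300 IOPS per drive),
    # four drives land around 13,200 IOPS -- the same ballpark as 110 spindles.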

Re:Redundant Array of what? (1)

NSIM (953498) | more than 5 years ago | (#26629803)

It's nice to see someone who actually gets it. Yes, SSD is expensive, but not when you compare it to the price you'd pay for the number of hard drives it takes to match its IOPS performance.

Comparisons a little unfair in places (4, Insightful)

heffrey (229704) | more than 5 years ago | (#26628477)

It seemed a little unfair that they only used the nice hardware RAID controller with the Intel SSDs. I would have liked to see them use it with all the other disks to get a more level playing field.

Re:Comparisons a little unfair in places (0)

Anonymous Coward | more than 5 years ago | (#26628963)

The "nice little hardware RAID controller" actually increases seek time and reduces performances by adding delays.

In several enthusiasts home-made comparison tests, the raid card usually gives worse results than the ICH10r controller.

Apparently, not a single engineer at Areca or Adaptec figured this out ages ago and so now there are no real options for "good" SSD-oriented raid cards.

Re:Comparisons a little unfair in places (1)

afidel (530433) | more than 5 years ago | (#26629679)

Or LSI: we get half the IOPS from our LSI-based HP P400 that we do from the ICH in our HP workstations when using the X25-Es. Reports on the web lead me to believe that there are NO hardware RAID cards that can keep up with these beasts, which is a shame, because I can't see using them without a battery-backed write cache. I'm going to look into the big-boy P800 and possibly the new P410, but right now I'm kind of underwhelmed by the fact that $500-600 RAID cards get beaten badly by the 'free' ICH controllers on motherboards =(

Re:Comparisons a little unfair in places (1)

Nick Ives (317) | more than 5 years ago | (#26629117)

Indeed, telling us to ignore the extra minute in the X25-E RAID 0 boot times compared to the other setups is highly disingenuous. RAID setups are slower to boot because you have to load the RAID BIOS first; if you really care about fast booting, it's something you need to be aware of. There were also CPU-bound cases where the RAID 0 setup performed slightly worse than the single disk, an obvious sign of a performance hit from the RAID card.

What was the target for this test??? (1)

LWATCDR (28044) | more than 5 years ago | (#26628529)

Doom levels????
Office tasks???
Okay folks, I can only see a few groups using this kind of setup.
Not one database test?
I mean a real database like Postgres, DB2, Oracle, or even MySQL. Doom 3... yeah, those are some benchmarks.

Doom 3 (0)

Anonymous Coward | more than 5 years ago | (#26629035)

Well to be fair Doom 3 still runs like shit on my rig

Re:What was the target for this test??? (1)

JohnnyBigodes (609498) | more than 5 years ago | (#26629311)

Look in the later pages of the review; there's a bit of everything. There are IOMeter benches there, with very enlightening results.

Terrible review (0)

Anonymous Coward | more than 5 years ago | (#26628541)

Cheap RAID controller... check
CPU-bound benchmarks... check
Couldn't be bothered reading any more... check

What's on TV?

Re:Terrible review (0)

Kushieda Minorin (1453751) | more than 5 years ago | (#26628775)

What's on TV?

An all new Fringe is on FOX tonight at 9pm Eastern. This week, the team deals with a computer virus that is able to infect humans and turn their brains into liquid. I figure this episode should be of interest to many Slashdotters.

Test it on a better system than an old P4 CPU (1)

Joe The Dragon (967727) | more than 5 years ago | (#26628545)

Test it on a better system than an old P4 CPU.

Re:Test it on a better system than an old P4 CPU (1)

DMalic (1118167) | more than 5 years ago | (#26629161)

Yeaaaaaah. Gotta love doom3 level loads taking as long as Crysis would with that configuration on a modern PC.

Fusion-io's iodrive is faster (1, Troll)

dragonbladex (1462881) | more than 5 years ago | (#26628623)

Fusion-io's SSD, the iodrive, is faster than this by far; I would suggest looking at their speeds on fusionio.com.

Re:Fusion-io's iodrive is faster (0)

Anonymous Coward | more than 5 years ago | (#26628691)

Astroturf much?

Re:Fusion-io's iodrive is faster (1)

afidel (530433) | more than 5 years ago | (#26629121)

Fusion-IO is about $30/GB, these are about $20/GB, and they get about the same IOPS/$, so unless you don't have the physical space, the Intel SSDs win.

Re:Fusion-io's iodrive is faster (1)

beaviz (314065) | more than 5 years ago | (#26629919)

Fusion-IO is about $30/GB, these are about $20/GB, and they get about the same IOPS/$, so unless you don't have the physical space, the Intel SSDs win.

I would say that the Intel Extremes win on the fact that they can be hot-swapped, not because they perform anywhere near as well as the IODrives.

If we take their own specs:
IODrive 160GB: 102,000 IOPS random read, 101,000 IOPS random write.
Intel X25-E: >35,000 IOPS random read, >3,300 IOPS random write.

That is in no way the same performance.

We have both in our servers. I like the Intels because they are cheap, you can buy them at the local grocery store (well, almost), and they hot-swap. But I simply love the IODrives for their performance: there is no practical difference between running a database completely in memory and running it from IODrives.

But hey! Buy some and judge for yourself :)

New acronym: RAVED? (1)

RTHilton (1343643) | more than 5 years ago | (#26628669)

Redundant Array of Very Expensive Disks?

Re:New acronym: RAVED? (4, Funny)

CompMD (522020) | more than 5 years ago | (#26628965)

No, its got a cooler acronym, RAVEN: Redundant Array of Very Expensive Not-disks-but-some-silly-stack-of-flash-memory-chips.

Re:New acronym: RAVED? (1)

RTHilton (1343643) | more than 5 years ago | (#26629089)

I like it, but it seems a bit long. Maybe we can just abbreviate Expensive

Re:New acronym: RAVED? (0)

Anonymous Coward | more than 5 years ago | (#26629851)

Extremely
eXpensive
Portable
Erasable
Non-volatile
Storage
Intel
inVigorates with
Electricity?

Re:New acronym: RAVED? (1)

MBCook (132727) | more than 5 years ago | (#26629875)

That was me, I accidentally checked "post anonymously".

I want my karma! *sobs*

Software RAID and a modern rig (0)

Anonymous Coward | more than 5 years ago | (#26629249)

Here is a test with 4-5 drives and Linux software RAID 0:
http://21stcenturystorage.cebis.net

Re:Software RAID and a modern rig (1)

keeboo (724305) | more than 5 years ago | (#26629713)

Interesting.
But from what I saw there, you didn't make a real RAID 0; you benchmarked several mountpoints simultaneously instead.
An md0-style benchmark would be nice to see.

performance per dollar rather than gigabytes (0)

Anonymous Coward | more than 5 years ago | (#26629265)

No one looks at just IOPS/$ without looking at size. It's probably something like a weighted combination of IOPS/$ and GB/$, with the weights depending on the circumstances. No business is going to pay the same price for a 3x-faster drive if it has 1/10,000th of the capacity, unless money is no object; then you're just concerned with IOPS, not IOPS/$.
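One way to make that composite metric concrete (an illustrative scoring function with made-up weights and prices, not an industry standard):

    def drive_score(iops, gb, dollars, w_iops=0.7, w_gb=0.3):
        # Weighted geometric mean of IOPS-per-dollar and GB-per-dollar;
        # shift the weights toward w_iops for OLTP, toward w_gb for archives.
        return (iops / dollars) ** w_iops * (gb / dollars) ** w_gb

    # Hypothetical numbers, for illustration only
    print(drive_score(iops=35_000, gb=32, dollars=700))   # SLC-SSD-like profile
    print(drive_score(iops=180, gb=300, dollars=250))     # 15K-spindle-like profile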

Fantastic Slashvertising (2, Insightful)

damn_registrars (1103043) | more than 5 years ago | (#26629655)

Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes

I hope someone got a healthy commission from Intel for writing that...

SATA/Flash for RAM? (1)

Doc Ruby (173196) | more than 5 years ago | (#26629659)

Other than just using one of these Flash RAIDs as a swap volume, is there a way for a machine running Linux to use them as RAM? There are lots of embedded devices that don't have expandable RAM, or for which large RAM banks are very expensive, but which have SATA. Rotating disks were too slow to simulate RAM, individual Flash drives probably too slow, but a Flash RAID could be just fast enough to substitute for real RAM. So how to configure Linux to use it that way?

seriously?? (1)

poached (1123673) | more than 5 years ago | (#26629827)

We'll be focusing our attention on RAID 0 today, but the card supports a whole host of other array configurations, including RAID 1, 1E, 5, 5EE, 6, 10, 50, 60, and 36DD. Ok, so maybe not the last one.

I like Tech Report but, seriously? I found that joke to be very juvenile and I have a hard time reading the rest of the article without a prejudicial eye.
