Wear Leveling, RAID Can Wipe Out SSD Advantage

Soulskill posted more than 4 years ago | from the not-so-solid dept.

Data Storage

storagedude writes "This article discusses using solid state disks in enterprise storage networks. A couple of problems noted by the author: wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive, and using SSDs with RAID controllers brings up its own set of problems. 'Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives. In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray. Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost.'"



Little Flawed study. (4, Insightful)

OS24Ever (245667) | more than 4 years ago | (#31381688)

This assumes that RAID controller manufacturers won't be making any changes though.

RAID has relied on millisecond access times for years, so why spend a lot of money on an ASIC and subsystem that can go faster? Take a RAID card designed for (relatively) slow spinning disks and attach it to SSDs, and of course the RAID card is going to be a bottleneck.

However subsystems are going to be designed to work with SSD that has much higher access times. When that happens, this so called 'bottleneck' is gone. You know every major disk subsystem vendor is working on these. Sounds like a disk vendor is sponsoring 'studies' to convince people not to invest in SSD technologies now knowing that a lot of companies are looking at big purchases this year because of the age of equipment after the downturn.

Re:Little Flawed study. (2, Insightful)

MartinSchou (1360093) | more than 4 years ago | (#31381742)

The article is talking about stuff that's available today. They aren't saying "SSDs will never be suitable", they're saying they aren't suitable today. Why? Because none of the hardware infrastructure available is fast enough.

Re:Little Flawed study. (5, Interesting)

vadim_t (324782) | more than 4 years ago | (#31381898)

Sure, but why do you put 60 drives in a RAID?

Because hard disks, even the high-end ones, have quite low IOPS. You can attain the same performance level with far fewer SSDs. If what you need is IOPS rather than lots of storage, that's even a good thing: you reach the required level with far fewer drives, so you need less power, less space and less cooling.

Re:Little Flawed study. (4, Interesting)

Anpheus (908711) | more than 4 years ago | (#31382088)

I agree. 60 drives in RAID0 are going to see between 150 and 200 IOPS/drive, maybe more for 2.5" drives right? So that's 12,000 IOPS.

The X25-E, the new Sandforce controller, and I believe some of the newer Indilinx controllers can all do that with one SSD.

$/GB is crap, $/IOPS is amazing.
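
(A quick back-of-the-envelope sketch of that comparison; the per-drive IOPS follow the figures in this thread, while the prices are assumed purely for illustration:)

    # Back-of-the-envelope $/IOPS comparison (prices are assumptions, not quotes).
    hdd_iops, hdd_price = 200, 300.0      # fast spinning disk
    ssd_iops, ssd_price = 12000, 700.0    # X25-E class SSD
    array_iops = 60 * hdd_iops            # 60-drive RAID0 -> ~12,000 IOPS

    print("HDD array:", array_iops, "IOPS,", round(60 * hdd_price / array_iops, 2), "$/IOPS")
    print("One SSD:  ", ssd_iops, "IOPS,", round(ssd_price / ssd_iops, 2), "$/IOPS")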

Re:Little Flawed study. (1)

postbigbang (761081) | more than 4 years ago | (#31382552)

The next problem is that large RAID 0 arrays will suffer from word-width. The reason that ATA and SCSI buses were parallel buses had to do with chip fanout and cable length (slew rate). When SATA and SAS arrived, they used single fast clocks to frame out data, which conveniently allowed longer cables again.

When you get a bunch of drives that can now be accessed faster than the interface IO rate, the interface IO rate has to change and that starts putting RAID controller technology into the same realm of difficulty (and cost, and dearth of chipsets) as 10GBE.

So, when you're looking at delicious 4-core-and-up CPUs, with several of them in a server, all hungry for data, the disk interface is going to have to climb in clock to feed those hungry CPUs (and likely the virtual machines on top of them).

The SSD advantage is a nice problem to have, and their data rates will continue to tax host-bus-adapter technology for a while. Then FC switches and other key delivery components will have to catch up, too.

just use the edge of the disk (2, Interesting)

petes_PoV (912422) | more than 4 years ago | (#31382554)

Disks are cheap. There's no reason to use the full GB (or TB) capacity, especially if you want fast response. If you just use the outside 20% of a disk, the random I/O performance increases hugely. ISTM the best mix is some sort of journalling system, where the SSDs are used for read operations and updates get written to the spinning storage (or NV RAM/cache). Then at predetermined times perform bulk updates back to the SSD. If some storage array manufacturer came up with something like that, I'd expect most performance problems to simply go away.
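
(A toy model of the short-stroking effect described above; the seek and rotational figures are assumed round numbers, and the linear scaling of seek time with seek distance is a deliberate simplification:)

    # Toy short-stroking model: confine I/O to the outer 20% of the LBA range.
    # All numbers are assumptions for illustration, not measurements.
    full_seek_ms = 8.0        # assumed average full-range seek
    rotation_ms = 4.2         # roughly half a revolution at 7200 rpm
    used_fraction = 0.20      # only the outer 20% of the disk is used

    for label, seek in (("full disk", full_seek_ms),
                        ("outer 20%", full_seek_ms * used_fraction)):
        iops = 1000.0 / (seek + rotation_ms)
        print(label, round(iops), "random IOPS")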

Re:just use the edge of the disk (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31383708)

You're thinking of Sun/Oracle "Open Storage," which works precisely as you describe. Volatile SSDs, or "readzillas," are used as L2 read caches, and non-volatile SSDs, or "logzillas," are used to store the filesystem intent logs. The intent logs and, to a certain extent, the nature of the filesystem itself ensure that nearly all disk writes are of the sequential type, so you can go with 7200rpm SATA disks -- which are actually usually faster than 15k SAS disks, for sequential I/O, due to the higher data density on the platters.

Something sort of similar is also used in Oracle's new Exadata platform, though the implementation is completely different.

Re:Little Flawed study. (1)

evilbessie (873633) | more than 4 years ago | (#31382540)

Maybe you need 120TB of space; I don't see any SSDs yet where you can have that much, but it's doable with current HDD tech. I can see local SSDs on servers but not in SANs at the moment. We will probably get there sooner or later, at which time the various bottlenecks will have appeared and been solved.

Re:Little Flawed study. (1)

LordLimecat (1103839) | more than 4 years ago | (#31383402)

If you need 120TB of space, you won't be doing it with only 60x 2TB drives if you have any regard for the integrity of your data (i.e., enjoy your massive data loss).

Re:Little Flawed study. (1)

TheLink (130905) | more than 4 years ago | (#31382074)

> The article is talking about stuff that's available today. They aren't saying "SSDs will never be suitable", they're saying they aren't suitable today.

They are suitable today. You just don't RAID them using $50-100K RAID controllers.

Anyway, the "Enterprise Storage" bunch will probably stick both SSDs and TB SATA drives in their systems for the speed and capacity (and charge $$$$$$). I think some are doing it already.

Or you could stick a few SSDs in a decent x86 server with 10 Gbps NICs, and now you can have the same amount of IOPS as you would after spending > $100K on RAID controllers and drives.

Not so sure about hotswapping SSDs though - so far I don't see much info on that ;).

Re:Little Flawed study. (4, Interesting)

itzdandy (183397) | more than 4 years ago | (#31382230)

You missed half the point. SSDs use wear leveling and other techniques that are very effective on the desktop, but in a high-I/O environment the current wear leveling techniques reduce SSD performance to well below what you get on the desktop.

I really think that this is just a result of the current trend to put high-performance SSDs on the desktop. When the market refocuses, these problems will dissolve.

This also goes for RAID controllers. If you have 8 ports with SAS 3Gb/s links, then you need to process 24Gb/s and the IOPS of current 15k SAS drives. Let's just assume for easy math that this requires a 500MHz RAID processor. What would be the point of putting in a 2GHz processor? Now increase the IOPS by 100x and double the bandwidth: you need to handle 48Gb/s of throughput and 100x the I/O, and that requires two 3GHz processors.
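
(A rough restatement of that arithmetic; the per-device figures below are assumptions for illustration only, not controller specs:)

    # Load on a hypothetical 8-port, 3 Gb/s SAS controller before and after SSDs.
    # Per-device IOPS are assumed; the point is the ~100x jump in I/O completions.
    ports, link_gbps = 8, 3.0
    hdd_iops, ssd_iops = 180, 18000       # assumed per-device random IOPS

    print("aggregate link bandwidth:", ports * link_gbps, "Gb/s")
    print("HDD array:", ports * hdd_iops, "IOPS to schedule")
    print("SSD array:", ports * ssd_iops, "IOPS to schedule")
    # Bandwidth only roughly doubles with SSDs, but the controller firmware has
    # to process about 100x as many I/O completions per second.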

It just takes time for the market players to react to each technology increase. New RAID controllers will come out that can handle these things. Maybe the current RAID CPUs have been commodity chips (PowerPC, often enough) because they were fast enough to handle these things, and the new technologies are going to require more specific processors. Maybe you need to get Cell chips or Nvidia GPUs in there, whatever it takes.

I admit it would be pretty interesting to see the new Dell/LSI 100Gb SAS powered by Nvidia logo in Gen12 Dell servers.

Re:Little Flawed study. (3, Informative)

sirsnork (530512) | more than 4 years ago | (#31382602)

He may have half missed the point, but so did you.

I clicked on this thinking this guy has done some testing... somewhere. Nope, nothing, no mention of benchmarks or what hardware he used. I'm sure some of what he said is true. But I'd really like to see the data that he gets the

I have seen almost 4 to 1. That means that the write performance might drop to 60 MB/sec and the wear leveling could take 240 MB/sec.

from. I'd also really like to know what controllers he's tested with, whether or not they have TRIM support (perhaps none do yet), what drives he used, if he had a BBU and write-back enabled, etc., etc.

Until he gives us the sources and the facts, this is nothing but a FUD piece. Yes, wear levelling will eat up some bandwidth, but that's hardly news... show us the data about how much, and which drives are best.

Re:Little Flawed study. (2, Interesting)

itzdandy (183397) | more than 4 years ago | (#31382964)

I don't think I missed the point; I am just a little more patient than most, I guess. I don't think SSDs are ready from a cost/performance standpoint vs. enterprise 15k SAS drives, due to the market's focus.

The OP may not have listed the hardware and disks but each controller has info published on max throughput.

This is very comparable to running U320 SCSI disks on a U160 card. The performance bottleneck is often NOT the U160 interface but rather that the controller was not over-engineered for its time. The difference is that the interface bandwidth today is fast enough for the throughput of SSDs, but the controllers aren't fast enough to take advantage of the very low access times, especially when many drives are used.

I suspect that the next generation of RAID controllers will be capable of handling a larger array of SSD drives. Until then, you can run MORE raid controllers and smaller arrays but that will increase costs significantly.

SSD drives are a disruptive technology so the infrastructure needs a disruptive adaptation in controller design and/or CPU speed.

Re:Little Flawed study. (1)

TheLink (130905) | more than 4 years ago | (#31382662)

A fair number of the desktop drives can take sustained writes for quite a long while - e.g. the entire disk or more.

http://benchmarkreviews.com/index.php?option=com_content&task=view&id=454&Itemid=60&limit=1&limitstart=10 [benchmarkreviews.com]

If that's not enough, some of the desktop benchmarks/tests involve writing to the entire disk first and then seeing how far the performance drops.

e.g.
http://www.anandtech.com/printarticle.aspx?i=3702 [anandtech.com]

See: "New vs. Used Performance - Hardly an Issue"

They're not cheap, but they sure are cheaper than USD 100K.

Re:Little Flawed study. (1)

itzdandy (183397) | more than 4 years ago | (#31383016)

But that isn't indicative of enterprise loads. Enterprise loads such as databases do many, many seeks and tend to have long queues as many clients request data. Size and throughput are less important for these loads than seek time (though still critical).

A desktop system can only (realistically) see a similar load in synthetic benchmarks.

The server vs. desktop loads on disks are so different that they can't be directly compared. A great desktop drive can be a terrible server drive and vice versa.

Re:Little Flawed study. (2, Interesting)

gfody (514448) | more than 4 years ago | (#31383464)

This is why software-based RAID is the way to go for ultimate performance. The big SAN providers ought to be shaking in their boots when they look at what's possible using software like StarWind or Open-E with host-based RAID controllers and SSDs. Just for example, look at this thing [techpowerup.com] - if you added a couple of CX4 adapters and ran Open-E, you'd have a 155,000 IOPS iSCSI target there in what's basically a $30k workstation. 3PAR will sell you something the size of a refrigerator for $500,000 that wouldn't even perform as well.

Re:Little Flawed study. (2, Interesting)

itzdandy (183397) | more than 4 years ago | (#31383600)

How about OpenSolaris with ZFS? You get a high-performance iSCSI target and a filesystem with re-ordered writes that improves I/O performance by reducing seeks, plus optional deduplication and compression.

Additional gains can be had from separate log and cache disks, and with 8+ core platforms already available you can blow a traditional RAID card out of the water.

One nice thing about software RAID is that it is completely agnostic to controller failure. If you need to recover an array after a controller failure, you can even do it with SATA->USB adapters if you used SATA, or you can use ANY other SAS/SATA controller that supports your disks.

Re:Little Flawed study. (1)

edmudama (155475) | more than 4 years ago | (#31383808)

Not to be rude, but I'm guessing there are only 20-30 engineers in the world who have any idea what the current wear-leveling state of the art is, and how it affects performance.

There's a huge variety in the quality of controllers in the marketplace, and just because one design has a given advantage or flaw, doesn't mean others share those attributes.

Re:Little Flawed study. (1)

zappepcs (820751) | more than 4 years ago | (#31382680)

Welcome to the multi-tiered storage world. There are places and applications where SSDs are a perfect fit, and places where they are not. Eventually server builders will find a place where both work in tandem to give you the performance you wanted to begin with. An SSD is essentially a fully cached drive, and that's not necessary in all applications. For some applications, TBs of RAM are the better option. Combinations of various storage technologies will find their niche markets. SSDs are not financially practical for all applications, and might never be. The humble magnetic tape is still hanging in there, and not for performance (speed) reasons. In the near future, SSD options will be like networking options: just pick the one that fits your application. No evangelizing needed.

Re:Little Flawed study. (2, Informative)

rodgerd (402) | more than 4 years ago | (#31382480)

Ceph [newdream.net], XIV, and other distributed storage controller models are available today and avoid controller bottlenecks.

Re:Little Flawed study. (1)

Twinbee (767046) | more than 4 years ago | (#31383248)

Why is it so hard for developers of ports and interface standards to get it super fast, first time round? It's not like there's a power issue and there's no worry about having to make things small enough (as with say the CPU).

For example, let's take USB:
USB 1: 12 Mbit/s
USB 2: 480 Mbit/s
USB 3: 4 Gbit/s

Same goes for video and SATA etc. Perhaps I'm being naive, but it seems like they're all a bit short-sighted. They should develop for the hardware of the future, not artificially limit the speed to what current hardware is capable of.

Re:Little Flawed study. (2, Informative)

amorsen (7485) | more than 4 years ago | (#31383624)

Why is it so hard for developers of ports and interface standards to get it super fast, first time round? It's not like there's a power issue and there's no worry about having to make things small enough (as with say the CPU).

There IS a power issue, and most importantly there's a price issue. The interface electronics limit speed. Even today, 10Gbps Ethernet (10GBASE-T) is quite expensive and power hungry. 40Gbps Ethernet isn't even possible over copper right now. They couldn't have made USB 3 40 Gbps instead of 4; the technology just isn't there. In 5 years maybe, in 10 years almost certainly.

USB 1 could have been made 100Mbps, but the others were close to what was affordable at the time.

Re:Little Flawed study. (0)

Anonymous Coward | more than 4 years ago | (#31382030)

However subsystems are going to be designed to work with SSD that has much higher access times.

However subsystems are going to be designed to work with SSD that has much lower access times.

There, fixed that for you. It is actually amazing how many times that type of error is made when people are typing. Things like, "this machine has much higher boot times!" when talking about a faster machine.

Re:Little Flawed study. (1)

Z00L00K (682162) | more than 4 years ago | (#31382358)

Even if the bottleneck moves from the disks to the controller, overall performance will improve. So it's not that SSDs are bad; it's just that the controllers need to keep up with them.

On the other hand, RAID controllers are used for reliability and not just for performance. And in many cases it's a tradeoff: large reliable storage is one thing, while high performance is another. Sometimes you want both, and then it gets expensive, but if you can live with just one of the alternatives you will get off relatively easily.

And if you really want a performance enhancement you may want to look into a mix of SSDs and ordinary disks. How best to tune it depends on the actual solution.

Duh (2, Interesting)

Anonymous Coward | more than 4 years ago | (#31381690)

RAID means "Redundant Array of Inexpensive Disks".

Re:Duh (3, Informative)

Anarke_Incarnate (733529) | more than 4 years ago | (#31381754)

or Independent, according to another fully acceptable version of the acronym.

Re:Duh (1)

NNKK (218503) | more than 4 years ago | (#31382640)

Fully acceptable to illiterates, you mean.

Re:Duh (1)

MartinSchou (1360093) | more than 4 years ago | (#31382864)

Yes, because 15k RPM SAS drives are OH so inexpensive, right?

Starting out at $1.90/GB for a 73.5 GB drive [newegg.com] is certainly inexpensive. Especially when you have to pay an insane 9.3 cents/GB for a 750 GB hard drive [newegg.com].

By your definition, you could NEVER EVER use RAID on expensive hard drives. Which obviously means that you are an idiot.

Re:Duh (1)

Anarke_Incarnate (733529) | more than 4 years ago | (#31383092)

No, but it is a fully acceptable and more reasonable word than inexpensive. It is, in fact, my preferred expansion of what RAID stands for, as the expense of disks is relative and the industry thinks that RAID on SAN disks is fine. Since many of those are thousands of dollars per drive, I would expect 'inexpensive' to be the deprecated variant of RAID.

Re:Duh (2, Funny)

LordLimecat (1103839) | more than 4 years ago | (#31383430)

Maybe the I stands for Illiterate?

Re:Duh (1)

asdf7890 (1518587) | more than 4 years ago | (#31382656)

Yes, but the word inexpensive is being used in a relative sense here - the idea being that (ignoring RAID 0, which doesn't actually match the definition at all since it offers no redundancy) a full set of drives, including a couple of spares, would cost less than any single device that offered the same capacity and long-term reliability. And the expense isn't just the cost of the physical drive - if you ask a manufacturer to guarantee a high level of reliability, they will in turn ask a higher price for the device (both to cover R&D on making it more reliable and to cover insurance in case it fails too early and you require replacement and/or compensation). Even if the individual devices in the array are very expensive, they are probably not so compared to any single device that claims the same capacity and longevity properties.

Correction: (5, Informative)

raving griff (1157645) | more than 4 years ago | (#31381694)

Wear Leveling, RAID Can Wipe Out SSD Advantage for the enterprise.

While it may not be efficient to slap together an array of 16 SSDs, it is worthwhile to upgrade personal computers to use an SSD.

Re:Correction: (1)

morgan_greywolf (835522) | more than 4 years ago | (#31381850)

No one ever said otherwise. The needs of enterprise customers will ensure that magnetic HDDs will continue to exist for years to come.

And it's not always worthwhile to upgrade a PC. Hard drives will continue to exist there as long as there is a significant price difference between HDDs and SSDs. Some people, like gamers, will pay for the extra performance. Someone using their PC for word processing, Web browsing and e-mail gains no advantage on a desktop, and little advantage on a laptop.

Re:Correction: (1)

mlscdi (1046868) | more than 4 years ago | (#31381956)

No one ever said otherwise. The needs of enterprise customers will ensure that magnetic HDDs will continue to exist for years to come.

And it's not always worthwhile to upgrade a PC. Hard drives will continue to exist there as long as there is a significant price difference between HDDs and SSDs. Some people, like gamers, will pay for the extra performance. Someone using their PC for word processing, Web browsing and e-mail gains no advantage on a desktop, and little advantage on a laptop.

Someone who wants a fast-booting, reliable, rugged laptop with good battery life will see a massive advantage. Believe it or not, that's the majority of students and business users. Ever wonder why the EEEs were so popular?

Re:Correction: (1)

Aranykai (1053846) | more than 4 years ago | (#31382576)

Except the EEEs often had drives that were slower than most USB 2.0 flash drives. I know - I have a 900, and the first thing I did was replace its drive with a higher-performance one.

Re:Correction: (1)

mlscdi (1046868) | more than 4 years ago | (#31383486)

Maybe I picked a bad example then...but the point is still valid.

Re:Correction: (3, Insightful)

causality (777677) | more than 4 years ago | (#31382438)

No one ever said otherwise.

I see this rather often on Slashdot and elsewhere. It's becoming a part of our collective culture it seems.

Increasingly, it's not good enough that you said what you did say, and chose not to say what you clearly haven't said. There's this unspoken expectation that you also have to actively disclaim things you clearly are not claiming, otherwise some clever individual who really wants to be "right" is going to assume that your lack of a disclaimer amounts to tacit support of whatever was not disclaimed. This leads to a great deal of both intentional trolling and unintentional creation of strawmen. Both invite unnecessary follow-up posts designed to correct unfounded assumptions.

I wonder if this comes from modern politics where the audience is generally "hostile" in the sense that it's eager to twist words and demagogue positions with which it may disagree. That's a poor substitute for good reasoning, for showing that there are substantive reasons to disagree. So much of politics is done by handling complex, nuanced issues with 20-second soundbites that I can see how it happens there. On Slashdot, it seems to lower the quality of discussion for no good reason.

Re:Correction: (1)

phoenix321 (734987) | more than 4 years ago | (#31382596)

Someone using their PC for word processing, Web browsing and email will see significant gains in overall system responsiveness, load times and above all system boot times.

If you've ever witnessed a common Vista32 laptop booting in under 40 seconds, you know the value of SSDs.

Re:Correction: (1)

aztracker1 (702135) | more than 4 years ago | (#31383378)

I opted for both: an 80GB SSD, a 1TB 7200 RPM drive for my virtual machines and project (work) files, and a 5400 RPM 1.5TB drive for media storage (I have a 4GB NAS box for backup and mass media storage as well). It's actually sitting there at home, just put together this morning. Can't wait to get back tonight, install an OS, and see how well it runs. :D

Re:Correction: (1)

amorsen (7485) | more than 4 years ago | (#31383658)

The needs of enterprise customers will ensure that magnetic HDDs will continue to exist for years to come.

I just don't see it happening. HDDs are lousy for the enterprise, simply because of the laughably low IOPS. Yes, you can compensate by buying 10 times as many disks, but SSDs aren't 10 times as expensive as 15k disks anymore. And yes, SSDs are going to saturate the RAID controllers -- but why would it ever be an advantage for HDDs that they're too slow to even saturate the lousy 500MHz PPC chips that are sold under the pretense of making RAID faster?

Re:Correction: (1)

BikeHelmet (1437881) | more than 4 years ago | (#31383194)

I agree. You shouldn't be using consumer-grade SSDs for servers - unless it's a game server or something. (Ex: TF2)

Do you know why RE (RAID Edition) HDDs exist? They strip out all the write recovery and stuff, which could mess up speeds, IOPS, and seek times, and instead streamline the drives for performance predictability. That makes it far easier for RAID controllers to manage dozens of them.

SSDs have a similar thing going. You're an enterprise and need massive IOPS? Buy enterprise-level SSDs - like the ioDrive, with built-in RAID capabilities, piped right through the PCIe bus. Orders of magnitude faster than a consumer-grade SSD, and orders of magnitude more efficient. The IOPS you get vs. CPU usage is amazing. Toss a couple together, and you can literally get hundreds of thousands of IOPS with gigabytes per second of read/write bandwidth. It'll hammer your CPU, but CPUs are cheap compared to these RAID cards.

You're an enterprise. Buy enterprise level stuff. Don't just go with "Intel" because you heard Intel SSDs are the fastest. They aren't. They're just the best affordable ones for us little guys.

Re:Correction: (1)

amorsen (7485) | more than 4 years ago | (#31383700)

The lousy thing about PCIe SSDs is that modern servers don't have enough PCIe slots. 1U servers often have only one free slot, and blade servers often have zero. The only blade vendor with decent PCIe expandability is Sun, and their blade density isn't fantastic.

Oops. I forgot to plan the array (1)

symbolset (646467) | more than 4 years ago | (#31381718)

He's got a point - the embedded RAID controllers in boxes like the HP MSA70 just aren't up to the challenge of sustaining the IOPS of SSDs. They weren't designed for that, so you can't get a million I/Os per second by accident. You have to know what you're doing and build out an architecture that can support it.

OTOH: Who pays 100K for one of those? That has to be including the Enterprise 120GB SSD's at $4k each, right?

What 200k IOPS might look like [techpowerup.com] (not mine).

Re:Oops. I forgot to plan the array (0)

Anonymous Coward | more than 4 years ago | (#31381820)

HP MSA70 is junk anyways

Re:Oops. I forgot to plan the array (1)

jd2112 (1535857) | more than 4 years ago | (#31381922)

OTOH: Who pays 100K for one of those? That has to be including the Enterprise 120GB SSD's at $4k each, right?

That $100K gets you more than bare drives. You get the flexibility to carve out partitions however you like, configuring them for maximum performance or whatever level of redundancy you need. You get snapshot backups, offsite replication, etc. (At additional cost, of course...)

And, of course you also get the letters 'E', 'M', and 'C'.

Re:Oops. I forgot to plan the array (1)

Anpheus (908711) | more than 4 years ago | (#31382124)

A lot of those features are available for a lot less than $100,000. But what you don't get, usually, is the same level of support.

Re:Oops. I forgot to plan the array (1)

rubycodez (864176) | more than 4 years ago | (#31382050)

MSAs are low-performance crap anyway. Here's a quarter, kid, get yourself an EVA.

Re:Oops. I forgot to plan the array (1)

symbolset (646467) | more than 4 years ago | (#31382116)

Is that the EVA that caps out at 8 SSD drives? That doesn't sound like it's going to get the IOPS.

Re:Oops. I forgot to plan the array (1)

rubycodez (864176) | more than 4 years ago | (#31382632)

Just put regular spinning disks in it. The point is, working for an HP VAR, I've had the misfortune to set up many database and middleware systems on MSAs, and performance is appalling compared to the EVA.

Only A Matter of Time (2)

WrongSizeGlass (838941) | more than 4 years ago | (#31381740)

Scaling works both ways. Often technology that benefits larger installations or enterprise environments gets scaled down to the desktop after being fine-tuned. It's not uncommon for technology that benefits desktop or smaller implementations to scale up to eventually benefit the 'big boys'. This is simply a case of the laptop getting the technology first, as it was the most logical place for it to get traction. Give SSDs a little time and they'll work their way into RAID as well as other server solutions.

This is a rhetorical question, right? (0)

Anonymous Coward | more than 4 years ago | (#31381752)

:)

Seek time (4, Informative)

1s44c (552956) | more than 4 years ago | (#31381760)

The real advantage of solid state storage is seek time, not read/write times. They don't beat conventional drives by much at sustained IO. Maybe this will change in the future. RAID just isn't meant for SSD devices. RAID is a fix for the unreliable nature of magnetic disks.

Re:Seek time (3, Informative)

LBArrettAnderson (655246) | more than 4 years ago | (#31382016)

That hasn't been the case for at least a year now. A lot of SSDs will do much better with sustained read AND write speeds than traditional HDs (the best of which top out at around 100MB/sec). SSDs are reading at well over 250MB/sec and some are writing at 150-200MB/sec. And this is all based on the last time I checked, which was 5 or 6 months ago.

Re:Seek time (0)

Anonymous Coward | more than 4 years ago | (#31382360)

Compared with the decrease in seek time gained from using SSDs, a 2.5x increase in sustained speed is nothing much.

Re:Seek time (1)

Kjella (173770) | more than 4 years ago | (#31382420)

True, though if what you need is sequential read/write performance then RAID0 will do that well at less cost and much higher capacity than an SSD. Normally the reason you want that is that you're doing video capture or something similar that takes ungodly amounts of space, so RAID0 is pretty much a slam dunk there. It's the random read/write performance that is the reason for getting an SSD. In the 4k random read/write tests - which, as reading and writing lots of little files, are easier for me to understand than IOPS - the SSDs are king. And the reason they are so much better is mostly IOPS and seek time, not so much top speed, though I'm sure that helps too.
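
(A small illustration of why the random workload is the one that favours the SSD; the latency and throughput figures below are assumptions, not benchmarks:)

    # Time to finish 100,000 random 4 KB reads vs. one 4 GB sequential read.
    # Device figures are illustrative assumptions.
    requests = 100_000
    seq_bytes = 4 * 1024**3

    devices = {"HDD": {"access_ms": 8.0, "seq_mb_s": 120.0},
               "SSD": {"access_ms": 0.1, "seq_mb_s": 250.0}}

    for name, d in devices.items():
        random_s = requests * d["access_ms"] / 1000.0
        seq_s = seq_bytes / (d["seq_mb_s"] * 1024**2)
        print(f"{name}: {random_s:.0f} s random, {seq_s:.0f} s sequential")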

Re:Seek time (1)

BikeHelmet (1437881) | more than 4 years ago | (#31383258)

The new 64MB cache WD Black drives have wicked sustained read speeds. Close to 140MB/sec.

But when dealing with small files, you still notice the IOPS limit.

The cheaper SSDs won't do as well in a sustained-write situation (e.g. recording 12 security camera feeds) as a traditional HDD will.

Re:Seek time (1)

Rockoon (1252108) | more than 4 years ago | (#31382168)

They don't beat conventional drives by much at sustained IO.

umm, err?

Which platter drive did you have in mind that performs similarly to a high-performance SSD? Even Seagate's 15K Cheetah only pushes 100 to 150MB/sec sustained read and write. The latest performance SSDs (such as the SATA2 Colossus) have sustained writes at "only" 220MB/sec, with better performance (260MB/sec) literally everywhere else.

Re:Seek time (1)

scotch (102596) | more than 4 years ago | (#31382798)

~ 2x performance for 10x the cost is the definition of "not by much"

Re:Seek time (1)

Rockoon (1252108) | more than 4 years ago | (#31383516)

Perhaps you need to be clued in on the fact that fast platter drives go for over $1 per gigabyte.

$100 for a terabyte sounds great and all, but you can't get a fast one for that price. You won't be doing sustained 120MB/sec writes to those 7.2K drives. You will be lucky to get 80MB/sec on the fastest portions of the drive and will average around 60MB/sec.

That SSD that's pushing 220MB/sec sustained writes is 4x the performance on that one metric, and even faster on every other metric.

Re:Seek time (1)

rcamans (252182) | more than 4 years ago | (#31382248)

RAID is meant to increase throughput and reliability. Single drives did not have anywhere near as much throughput as an array of drives on a good RAID controller. But RAID controllers were designed expecting millisecond seek times, and SSDs have microsecond seek times. So RAID controllers need a redesign for the faster seek times.

OT: "fast performance" is redundant (1)

noidentity (188756) | more than 4 years ago | (#31381762)

wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive

It's not the performance that's no faster, it's the writing. So he should either say "...and make writes no faster than a hard drive's" or "...and make write performance no better than a hard drive's". Whenever I read this kind of redundancy, I can't help but imagine the author having trouble with indirection in a programming language, writing things like foo_ptr > *bar_ptr.

Wear? (-1, Offtopic)

l3ert (231568) | more than 4 years ago | (#31381784)

The usually employed term is 'gear'. And what the hell is SSD? I hope the article doesn't mean SSC, that place is trivial now, even at level 70. No reasons to wipe a raid there.

Re:Wear? (2, Funny)

DMUTPeregrine (612791) | more than 4 years ago | (#31381842)

Super Sonic Device. They're hard drives that spin so fast the edge of the platter goes faster than sound.

This study seems deeply confused in a specific way (5, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#31381800)

This study seems to have a very bad case of "unconsciously idealizing the status quo and working from there". For instance:

"Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives. In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray. Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost."

That sounds pretty dire. And it does in fact mean that SSDs won't be neat drop-in replacements for some legacy infrastructures. However, step back for a minute: Why did traditional systems have $50k or $100k RAID controllers connected to large numbers of HDDs? Mostly because the IOPS on an HDD, even a 15K RPM monster, sucked horribly. If 3 SSDs can swamp a RAID controller that could handle 60 drives, that is an overwhelmingly good thing. In fact, you might be able to ditch the pricey RAID controller entirely, or move to a much smaller one, if 3 SSDs can do the work of 60 HDDs.

Now, for systems where bulk storage capacity is the point of the exercise, the ability to hang tray after tray full of disks off the RAID controller is necessary. However, that isn't the place where you would be buying expensive SSDs. Even the SSD vendors aren't pretending that SSDs can cut it as capacity kings. For systems that are judged by their IOPS, though, the fact that the tradition involved hanging huge numbers of HDDs (often mostly empty, reading and writing only to the parts of the platter with the best access times) off extremely expensive RAID controllers shows that the past sucked, not that SSDs are bad.

For the obligatory car analogy: shortly after the début of the automobile, manufacturers of horse-drawn carriages noted the fatal flaw of the new technology: "With a horse-drawn carriage, a single buggy whip will serve to keep you moving for months, even years with the right horses. If you try to power your car with buggy whips, though, you could end up burning several buggy whips per mile, at huge expense, just to keep the engine running..."

Re:This study seems deeply confused in a specific (4, Insightful)

volsung (378) | more than 4 years ago | (#31381942)

And we don't have to use Highlander Rules when considering drive technologies. There's no reason that one has to build a storage array right now out of purely SSD or purely HDD. Sun showed in some of their storage products that by combining a few SSDs with several slower, large capacity HDDs and ZFS, they could satisfy many workloads for a lot less money. (Pretty much the only thing a hybrid storage pool like that can't do is sustain very high IOPS of random reads across a huge pool of data with no read locality at all.)

I hope we see more filesystems support transparent hybrid storage like this...
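
(A tiny model of why read locality decides whether such a hybrid pool works; the two latencies below are assumed round numbers, not measurements:)

    # Average read latency of an SSD-cache-over-HDD pool vs. cache hit rate.
    ssd_ms, hdd_ms = 0.1, 8.0             # assumed device latencies
    for hit_rate in (0.50, 0.80, 0.95, 0.99):
        avg = hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms
        print(f"{hit_rate:.0%} SSD hits -> {avg:.2f} ms average read")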

Re:This study seems deeply confused in a specific (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#31382122)

My understanding is that pretty much all the serious storage appliance vendors are moving in that direction, at least in the internals of their devices. I suspect that pretty much anybody who isn't already a Sun customer doesn't want to have to deal with ZFS directly, but that even the "You just connect to the iSCSI LUN, our magic box takes it from there" magic boxes are increasingly likely to have a mix of drive types inside.

I'll be interested to see, actually, how well the traditional 15K RPM SCSI/SAS enterprise-screamer-style HDDs hold up in the future. For applications where IOPS are supreme, SSDs (and, in extreme cases, DRAM-based devices) are rapidly making them obsolete in performance terms, and price/performance is getting increasingly ugly for them. The costs of fabricating flash chips keep falling; the costs of building mechanical devices that can do what those drives do aren't falling nearly as fast. For applications where sheer size or cost/GB is supreme, the fact that you can put SATA drives on SAS controllers is super convenient. It allows you to build monstrous storage capacity - still pretty zippy for loads that are low on random read/write and high on sustained read or write (like backups and nearline storage) - for impressively small amounts of money.

Is there a viable niche for the very high-end HDDs, or will they be murdered from above by their solid-state competitors, and from below by vast arrays of their cheap, cool-running, fairly low-power, consumer-derived SATA counterparts?

Also, since no punning opportunity should be left unexploited, I'll note that most enterprise devices are designed to run headless without any issues at all, so Highlander rules cannot possibly apply.

Re:This study seems deeply confused in a specific (2, Insightful)

Thundersnatch (671481) | more than 4 years ago | (#31382612)

We haven't purchased 15k disks for years. In most cases, it is actually cheaper to buy 3x or even 4x SATA spindles to get the same IOPS. Plus you get all that capacity for free, even when you factor in extra chassis and power costs. We use all that capacity for snapshots, extra safety copies, etc. If your enterprise storage vendor is charging you the same price for a 1TB SATA spindle as a 300GB 15K spindle, you need to find a new vendor. Look at scale-out clustered solutions instead of the dinosaur "dual fiber controllers and a bunch of disk" offerings.

Re:This study seems deeply confused in a specific (1)

LordLimecat (1103839) | more than 4 years ago | (#31383502)

My (possibly incorrect?) understanding was that 3-4x 7200rpm drives aren't a drop-in replacement for a 15k drive in all situations -- the slower drives still have a higher rotational latency, do they not? Even if you throw 50 slower drives at the problem, there are still situations where the 15k drive will respond faster simply because of its rotational latency.

Correct me if I'm wrong.

Re:This study seems deeply confused in a specific (1)

7213 (122294) | more than 4 years ago | (#31382772)

You're dead on. Fibre Channel drives are dead; they will cease to exist in the near/medium-term future. SAS & SATA will live on. Fibre Channel as a transport (i.e. SANs) will be dead in the medium to long term, giving way to the expansion of 10Gb CEE (maybe holding on in FCoE for a while).

The problem in 'the enterprise' is not the ability to find the different technologies (SSD, FC, SAS, SATA) for your workloads... the problem is finding which workload belongs on which of your technologies. Every application vendor & DBA I've ever dealt with wants RAID 10 for everything, and in a shared SAN environment it's in most cases unnecessary and in some cases counterproductive.

What we're seeing from some of the enterprise hardware vendors is twofold: a) using SSDs in the disk subsystem as a form of second-stage cache for cache-friendly workloads, and b) intelligently reviewing every block by use and moving each block to the appropriate technology (SSD, SATA, FC, etc.) to best service I/O. Sounds promising, but I'll believe it when I see it.

Getting business & application folk to 'classify' their data for I/O usage & throughput, especially before they've installed or written the app, is like herding rabid cats. So you'll end up buying SSDs for an app that will never leverage them, or SATA for an app that needs SSDs, depending on what budget these folk could justify to their PHBs.

Re:This study seems deeply confused in a specific (1)

LoRdTAW (99712) | more than 4 years ago | (#31382214)

All I want to know is who is making RAID cards that cost $50,000 to $100,000? Or is he describing a complete system and calling it a RAID card?

Re:This study seems deeply confused in a specific (1)

Wesley Felter (138342) | more than 4 years ago | (#31383082)

He's clearly talking about SAN controllers like EMC Clariion or IBM DS5000; if you don't look too carefully you might mistake them for RAID controllers.

Well you know it is confused (1)

Sycraft-fu (314770) | more than 4 years ago | (#31383476)

Just based on the fact that it says "$50,000 or $100,000 RAID controller." Ummm, what? Where the hell do you spend that kind of money on a RAID controller? A RAID controller for a few disks is a couple hundred bucks at most. For high-end controllers you are talking a few thousand. Like Adaptec's 5805Z, which has a dual-core 1.2GHz chip on it for all the RAID calculations and supports up to 256 disks. Cost? About $1000 from Adaptec. Or how about the 3ware 9690SA-8E: 8 external SAS connectors for shelves, with 128-disk support. Going for about $700 online.

So anyone who's trying to pretend like RAID controllers cost 5-6 figures is just making shit up. Yes, you can pay that much for a NAS, but you aren't paying for a RAID controller. You are paying for a computer with custom OS, controllers, shelves, disks, monitoring and so on. A complete solution, in other words. Also, if you are spending that kind of money, it is a really serious NAS. We bought a NetApp 2020 and it didn't cost $50,000.

Then, as you say, it is not bad to hit the performance limits. While on a small scale you may be mostly buying RAID for performance reasons, that isn't the reason on a large scale. The reason is space. We got our NetApp because we need a lot of reliable central storage for our department. Yes, it needs to have reasonable performance as well, but really the network is the limit there, not the NAS. The point of it is that it holds a ton of disks. So, if we filled it full of SSDs and those were higher performance than it could handle, we'd not care. Performance with magnetic disks is already as good as we need it to be.

In other news... (4, Funny)

bflong (107195) | more than 4 years ago | (#31381812)

... researchers have found that putting a Formula One engine into a Mack truck wipes out the advantages of the 19,000 rpm.

Re:In other news... (1)

evilbessie (873633) | more than 4 years ago | (#31382654)

Square pegs found to not fit in round holes.


Hold on now... (1)

chronosan (1109639) | more than 4 years ago | (#31381882)

That guy from Samsung (?) who had a billion SSDs RAIDed up for a demo didn't seem to be doing too bad... right?

why not skip wear leveling (0)

Anonymous Coward | more than 4 years ago | (#31381912)

and use something along the lines of "http://en.wikipedia.org/wiki/UBIFS"

Since SSDs aren't spinning platter drives, what if we skip the part where the SSDs try to impersonate them?

Thoughts?

Software RAID? (1)

MikeUW (999162) | more than 4 years ago | (#31381932)

So does anyone know if this applies to software RAID configurations?

Just curious...

Re:Software RAID? (1)

TClevenger (252206) | more than 4 years ago | (#31381992)

That was my first thought. Run standard SATA controllers, put one or two drives on each controller, and RAID-0 them. At least then you're CPU-bound. Doesn't fix the TRIM problem, though.

Re:Software RAID? (1)

Anpheus (908711) | more than 4 years ago | (#31382160)

This. I'm surprised no one has mentioned it. I don't think there's a RAID controller on the market that supports pass-through TRIM. Which is going to be one hell of a wakeup call when an admin finds the batch job took ten times longer than usual. I had this happen with an X25-M: I had stopped paying attention to the log file's end times for the various steps, and one day I woke up to it running past 9AM (from initially taking a mere ten minutes when starting at 5AM).

ZFS sidesteps the whole RAID controller problem (4, Insightful)

haemish (28576) | more than 4 years ago | (#31381962)

If you use ZFS with SSDs, it scales very nicely. There isn't a bottleneck at a raid controller. You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high performance array.

Re:ZFS sidesteps the whole RAID controller problem (1)

Anpheus (908711) | more than 4 years ago | (#31382176)

That's not the problem; the problem is that a lot of the high-end controllers have 8, 16, 24, etc. SAS ports. If you were to plug SSDs into all of those ports, you'd swamp the card, whether you treat the disks as JBOD or let the controller handle them. And the storage vendors who make real nice SANs did the same thing. They have one controller managing dozens of HDDs because the HDDs' performance is so abysmal.

Re:ZFS sidesteps the whole RAID controller problem (3, Interesting)

Anonymous Coward | more than 4 years ago | (#31382256)

If you use ZFS with SSDs, it scales very nicely. There isn't a bottleneck at a raid controller. You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high performance array.

If performance is that critical, you'd be foolish to use ZFS. Get a real high-performance file system. One that's also mature and can actually be recovered if it ever does fail catastrophically. (Yes, ZFS can fail catastrophically. Just Google "ZFS data loss"...)

If you want to stay with Sun, use QFS. You can even use the same filesystems as an HSM, because SAMFS is really just QFS with tapes (don't use disk archives unless you've got more money than sense...).

Or you can use IBM's GPFS.

If you really want to see a fast and HUGE file system, use QFS or GPFS and put the metadata on SSDs and the contents on lots of big SATA drives. Yes, SATA. Because when you start getting into trays and trays full of disks attached to RAID controllers, arrays that consist of FC or SAS drives aren't much if any faster than arrays that consist of SATA drives. But the FC/SAS arrays ARE much smaller AND more expensive.

Both QFS and GPFS beat the living snot out of ZFS on performance. And no, NOTHING free comes close. And nothing proprietary, either, although an uncrippled XFS on Irix might do it, if you could get real Irix running on up-to-date hardware. (Yes, the XFS in Linux is crippleware...)

Warcraft (0)

Anonymous Coward | more than 4 years ago | (#31381968)

Was I the only one who saw the words leveling, raid, and wipe, and spent several seconds thinking the story was somehow related to WoW?

Raid controllers obsolete? (1)

vlm (69642) | more than 4 years ago | (#31381972)

Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs.

In the old days, RAID controllers were faster than doing it in software.

Nowadays, isn't software RAID faster than hardware? So just do software RAID? In my very unscientific tests of SSDs I have not been able to max out the server CPU when running bonnie++, so I guess software can handle it better?

Even worse, it seems difficult to purchase "real hardware RAID" cards, since marketing departments have flooded the market with what are essentially multiport win-SATA cards that require weird drivers because they're non-standard.

Re:Raid controllers obsolete? (4, Informative)

TheRaven64 (641858) | more than 4 years ago | (#31382080)

The advantage of hardware RAID, at least with RAID 5, is the battery backup. When you write a RAID stripe, you need to write the whole thing atomically. If the writes work on some drives and fail on others, you can't recover the stripe. The checksum will fail, and you'll know that the stripe is damaged, but you won't know what it should be. With a decent RAID controller, the entire write cache will be battery backed, so if the power goes out you just replay the stuff that's still in RAM when the array comes back online. With software RAID, you'd just lose the last few writes, (potentially) leaving your filesystem in an inconsistent state.

This is not a problem with ZFS, because it handles transactions at a lower layer so you either complete a transaction or lose the transaction, the disk is never in an inconsistent state.
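
(A minimal sketch of the write hole being described here, using XOR parity; this is a simplified model for illustration, not how any particular controller lays out stripes:)

    # RAID-5-style stripe: parity is the XOR of the data blocks.
    from functools import reduce
    from operator import xor

    data = [0b1010, 0b0110, 0b0011]
    parity = reduce(xor, data)            # written alongside the data: consistent

    # An update must rewrite a data block AND the parity atomically...
    data[0] = 0b1111                      # ...but power is lost before the parity write
    print(reduce(xor, data) == parity)    # False: the stripe no longer checks out
    # You can detect the damage, but not tell which block is stale -- hence the
    # battery-backed cache that replays the pending write after power returns.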

Re:Raid controllers obsolete? (1)

rrohbeck (944847) | more than 4 years ago | (#31383260)

Just my thought. Hardware RAID adds latency and limits throughput if you use SSDs. On the other hand, server CPUs often have cycles to spare and are much faster than the CPU on the RAID controller. I've yet to see the dual quad cores with hyperthreading going over 40% in our servers.
Now all we need is a VFS layer that smartly decides where to store files and/or uses a fast disk as a cache for a slower disk. Like a unionfs with automatic migration?
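
(A toy sketch of that "automatic migration" idea; the file names and promotion threshold are made up for illustration:)

    # Promote a file to the fast tier once it has been read often enough.
    from collections import Counter

    reads = Counter()
    fast_tier, slow_tier = set(), {"orders.db", "app.log", "archive.iso"}

    def read(path, promote_after=3):
        reads[path] += 1
        if path in slow_tier and reads[path] >= promote_after:
            slow_tier.remove(path)
            fast_tier.add(path)           # hot file migrates to the SSD tier

    for _ in range(4):
        read("orders.db")
    print(sorted(fast_tier), sorted(slow_tier))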

Oye (0)

Anonymous Coward | more than 4 years ago | (#31381980)

Firstly, "$50,000 or $100,000 RAID controller"? I think the author means Storage Array. Regular RAID controllers cost nowhere near that number. In fact, most enterprise Storage Arrays cost far more than "$50,000 or $100,000".

Secondly, they are also typically only certified for vendor-provided disks (at $ludicrous), which seldom include SSDs as an offering.

Thirdly, no one in their right mind is going to be using very expensive SSDs for sequential load applications, which regular disks are perfectly capable of for a fraction of the price. The only load that makes sense at that price point for the enterprise are database applications and others that utilize heavy random i/o workloads. Once you have that type of load, the performance of each SSD is going to be a fraction of the top sequential speed, but still far faster than a regular disk.

The article is FUD.

Obvious (1)

anza (900224) | more than 4 years ago | (#31382010)

Any idiot newb knows this. Whenever you raid and are not the right level, you invariably get wiped out. Duh.

Re:Obvious (1)

TheJokeExplainer (1760894) | more than 4 years ago | (#31382246)

Parent is, of course, referring to gaming usage of the term "raid" [wikipedia.org] wherein players undertake a type of mission in an MMORPG like World of Warcraft where the objective is to use a very large number of people, relative to a normal team size set by the game, to defeat a boss monster.

If the players attempting the raid are of insufficient level, they will tend to die or "get wiped out".

Parent is also very clever because at the same time, he *also* refers to the technology acronym aspect of RAID (Redundant Array of Independent/Inexpensive Disks) [wikipedia.org] by implying that attempting to create a RAID without sufficient experience will often lead to disaster.

RAID = Speed? (1)

TangoMargarine (1617195) | more than 4 years ago | (#31382102)

I suppose it would be more important for enterprises, but personally, I wouldn't see speed as the primary purpose of having a RAID setup. Obviously it wouldn't be cool if it was really slow, but isn't data redundancy the primary purpose?

Bandwidth limit doesn't "wipe out" SSD advantage (0)

Anonymous Coward | more than 4 years ago | (#31382158)

Bandwidth is the limiting factor for some SSD RAIDs today, but it doesn't "wipe out the advantage" of SSDs. 8 mirrored pairs of 15K RPM hard drives would have about 150*8 = 1,200 random writes a second. A *single* second-generation Intel X25-M does 6,000 write IOPS, and a single 6Gbps SATA RAID connection can handle at least 60K IOPS, assuming 4k blocks and every block getting sent out twice (software RAID).

The way to deal with wear leveling, and the other SSD controller problems the linked article raises, is to get an SSD with a good controller and large write cache; Intel has the best, then Indilinx. (You can see for yourself by looking at the SSD performance charts at Tom's Hardware or any number of comparisons out there. Note that controller maker != brand on the SSD box; you have to Google a bit.) The good SSDs aren't much more expensive per gig than the JMicron ones, so there isn't much excuse.

And sure, it would be great if RAID cards understood SSDs' nonstandard SMART statistics and used them to autoprovision spares for the drives most likely to fail next, but if you really need thousands more IOPS -- i.e., your database is crashing under crazy load -- and the cost doesn't stop you, then a little thing like hardware autoprovisioning of spares won't stop you.

Why on earth does the article even mention RAID-5 or 6 with SSDs? If you want SSDs or even 15K disks, you certainly don't want RAID-5 or 6, because your RAID performance will be limited by the speed of the parity disks. End of story.

Finally, as other commenters mentioned, enterprise disk interfaces are certainly gonna catch up as disks get faster.

The article's tone sounds like your basic kneejerk contrarianism -- "everyone says SSDs are great; here's why they're wrong" -- but it's mostly just incomplete (and, as always, posted to Slashdot with an even more fragmentary/contrarian/exaggerated summary) rather than outright wrong; you should certainly think about your SSD controller maker and RAID card before your company goes and shells out 200 Benjamins for big fast SSD arrays for your main and backup DB servers. But reports of the death of enterprise SSDs have been greatly exaggerated.

In other news, if you *really* want a reason to consider holding off on SSDs, weigh their cost-effectiveness against the other ways to keep your app running nicely under load: getting more RAM or paying employees to add caching and tune their DB accesses, or maybe even doing scale-out with tons of DB servers (which has plenty of expense of its own in development time). What's right for you mostly depends on the size of your data and working set, your workload, and how expensive scale-out and optimizations would be on the software side.

ditch the controller (1)

bl8n8r (649187) | more than 4 years ago | (#31382290)

Kernel-based software RAID or ZFS gives much better RAID performance IMHO. The only reason I use HW RAID is to make administration simpler. I think there is much more benefit to be had letting the OS govern partition boundaries, chunk size and stripe alignment. Not to mention the dismal firmware upgrades supplied by closed-source offerings.

Fusion IO = better than SSD + RAID (0)

Anonymous Coward | more than 4 years ago | (#31382294)

http://www.fusionio.com/

We used these to solve a problem with a horrendously mismanaged (but exceedingly crucial) MySQL DB. We compared solutions on a dollar-per-IOPS basis and these came out ahead by far. For about $17k we got 320GB of space but at well over 100,000 IOPS. The fastest arrays we could cram into a server would only reach into the low tens of thousands.

... because it is SSD + RAID (1)

InvisiBill (706958) | more than 4 years ago | (#31382936)

While the ioDrive may offer great performance, I hate their marketing.

http://www.fusionio.com/products/iodrive/ [fusionio.com]

  • Not an SSD - easily outperforms dozens of SSDs and a single server
  • From 80GB - 320GB of enterprise-grade, solid-state Flash

"It's not SSD + RAID, it's solid state memory in parallel channels!"

No, it's not X25-Ms on an Adaptec card. However, it is NAND flash with a bunch of parallel channels. It's the exact same idea as SSD + RAID, just a step above what you'll get with "regular" SSD + RAID.

RAID for what? (1)

v(*_*)vvvv (233078) | more than 4 years ago | (#31382336)

If you're using RAID for mirroring drives, well, you must also consider the failure rate of the drives, as it is all about fault tolerance, no? It is reported that SSDs are far more durable, so the question should be: what does it take to match the fault tolerance of an HDD RAID with an SSD RAID? Only after that can we truly compare the pros and cons of their performance sacrifices.
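
(A very rough way to frame that question; the annual failure rates and rebuild windows below are placeholder assumptions, not vendor data:)

    # Chance per year that a two-way mirror loses data: one drive fails, and its
    # partner fails during the rebuild window. AFRs and rebuild times assumed.
    def mirror_loss_per_year(afr, rebuild_hours):
        p_partner = afr * rebuild_hours / (365 * 24)
        return 2 * afr * p_partner        # either drive can be the first to fail

    print("HDD mirror:", mirror_loss_per_year(afr=0.05, rebuild_hours=24))
    print("SSD mirror:", mirror_loss_per_year(afr=0.02, rebuild_hours=2))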

On a side note, you can now get a Sony laptop that comes equipped with a quad-SSD RAID 0 array.
http://www.sonystyle.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=10551&storeId=10151&langId=-1&categoryId=8198552921644570897 [sonystyle.com]

I assume you would only do this with SSDs, given that they have a much lower failure rate than HDDs.

Re:RAID for what? (1)

FuckingNickName (1362625) | more than 4 years ago | (#31383228)

Do SSDs really have a lower failure rate than HDDs? I mean, how many times can it be assumed that I can write to a specific sector on each? I'd be interested in a report in which this has been tested by writing various devices to destruction, rather than by quoting manufacturer predictions.

Don't give me wear levelling arguments, as they assume that I'm not frequently changing all the data on the medium.

That Assumes SSD's will only be Flash (0)

Anonymous Coward | more than 4 years ago | (#31383396)

How about PCM (Phase Change Memory)?
No wear leveling, much longer life predicted, possibly even higher density.
Most of the big players see it as their future.
