
Samsung SSD 840 EVO 250GB & 1TB TLC NAND Drives Tested

Soulskill posted 1 year,2 days | from the stores-like-a-storage-device dept.

Data Storage 156

MojoKid writes "Samsung has been aggressively bolstering its solid state drive line-up for the last couple of years. While some of Samsung's earlier drives may not have particularly stood out versus the competition at the time, the company's more recent 830 and 840 series of solid state drives have been solid, both in terms of value and overall performance. Samsung's latest consumer-class solid state drive is the just-announced 840 EVO series of products. As the name suggests, the SSD 840 EVO series is an evolution of the Samsung 840 series. These drives use the latest TLC NAND flash to come out of Samsung's fab, along with an updated controller, and also feature some interesting software called RAPID (Real-time Accelerated Processing of IO Data) that can significantly impact performance. Samsung's new SSD 840 EVO drives performed well throughout a battery of benchmarks, whether using synthetic benchmarks, trace-based tests, or highly compressible or incompressible data. At around $0.65 to $0.76 per GB, they're competitively priced, relatively speaking, as well."


Call me old fashion (3, Interesting)

Taco Cowboy (5327) | 1 year,2 days | (#44598983)

How many effective read/write cycles can the chips in an SSD perform before they start degrading?

Has there been any comparison made between the reliability (e.g. read/write cycles) of old-fashioned spinning-platter HDDs versus that of SSDs?

Re:Call me old fashion (1)

quarrel (194077) | 1 year,2 days | (#44598993)

Ok, you're old fashioned.

This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...

--Q

Re:Call me old fashion (2)

pipatron (966506) | 1 year,2 days | (#44599035)

The question is still relevant. Manufacturers talk about erase cycles, but are there any massive-scale studies done by a third party on SSD failure modes?

Re:Call me old fashion (2, Informative)

Anonymous Coward | 1 year,2 days | (#44599051)

Wearout is not a significant failure mode. Nearly all failures are due to non-wearout effects such as firmware bugs and I/O circuit marginality.

Re:Call me old fashion (1)

moosehooey (953907) | 1 year,2 days | (#44600091)

Hence we want to see a study that compares overall failure rates against old-fashioned drives, taking all failure modes into account.

I would like to see some evidence that SSDs are more reliable.

Hot vs Crazy (4, Informative)

bdwoolman (561635) | 1 year,2 days | (#44600277)

Here's the thing. SSDs are now more reliable than when this guy wrote this report. [codinghorror.com]

But they're still maybe not as steady-Eddie as a good-quality HDD. We still want them, though, because having an SSD boot drive changes the whole computing experience due to their awesome speed. And since we are good about backups (are we not?) we can relax as we ride the smokin' fast SSD roller coaster. SSD or HDD, what's the problem if we have data security? Both are gonna FAIL. So what if Miss SSD stabs me for no good reason? It was a helluva ride, bro. And well worth the stitches. I do wish SLC NAND weren't priced out of reach, but hey, when it comes to hotness we take what we can get. Right?

Okay. This is Slashdot. We get no hotness... no hotness at all. It's pathetic, really.

Re:Call me old fashion (2)

beelsebob (529313) | 1 year,2 days | (#44599067)

The problem with large-scale studies on this is that the failures take too long to happen to actually study. You need to study real-world usage patterns, and in the real world it takes decades before the flash actually wears to death. Controller failure (as is possible with HDDs too) will generally happen long before the flash becomes unwritable.

Re:Call me old fashion (4, Informative)

AmiMoJo (196126) | 1 year,2 days | (#44599171)

It depends on what you use it for. I managed to wear out an Intel X25-M 160GB SSD a few years ago by hitting the 14TB re-write limit.

Modern SSDs do a lot of compression and de-duplication to reduce the amount of data they write. If your data doesn't compress or de-duplicate well (e.g. video, images) the drive will wear out a lot faster. I think what did it for me was building large databases of map tiles stored in PNG format. Intel provides a handy utility that tells you how much data has been written to your drive, and mine reached the limit in about 18 months, so it had to be replaced under warranty.

Re:Call me old fashion (5, Insightful)

Rockoon (1252108) | 1 year,2 days | (#44599327)

Intel provides a handy utility that tells you how much data has been written to your drive, and mine reached the limit in about 18 months, so it had to be replaced under warranty.

You were (amplified?) writing 32.8 GB per day, on average.

Clearly you will run into SSD erase-limit problems at such a rate, but such workloads normally turn out to not be tasks that actually benefit from an SSD to begin with (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you).

You were either very clever and knew you would hit the limit and get a free replacement, or very foolish and squandered the lifetime of an expensive device when a cheap device would have worked.

In any event, in general the larger the SSD, the longer its erase-cycle lifetime will be. For a particular flash process it's a completely linear 1:1 relationship, where twice the size buys twice as many block erases (a 320GB SSD on the same process would have lasted twice as long as your 160GB SSD with that workload).

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44599373)

Clearly you will run into SSD erase-limit problems at such a rate, but such workloads normally turn out to not be tasks that actually benefit from an SSD to begin with (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you).

Actually, most devices will survive several years at such a rate. GP was unlucky to see failures quite so quickly.

Re:Call me old fashion (4, Insightful)

hackertourist (2202674) | 1 year,2 days | (#44599465)

(32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you)

That's an odd way to look at it. You assume that GP spreads out his writes evenly over 24h, and has no desire to speed things up.

Re:Call me old fashion (3, Informative)

AmiMoJo (196126) | 1 year,2 days | (#44599503)

In my case having an SSD made a huge impact. I was using offline maps of a wide area built from PNG tiles in an SQLite database with RMaps on Android. Compiling the databases was much faster with an SSD. I was doing it interactively, so performance mattered.

I can only tell you what I experienced. I installed the drive and I didn't think about it wearing out, just carried on as normal. The Intel tool said that it had written 14TB of data and sure enough writes were failing to the point where it corrupted the OS and I had to re-install.

I was using Windows 7 x64, done as a fresh install on the drive when I built that PC. I made sure defragmentation was disabled.

I'm now wondering if the Intel tool doesn't count bytes written but instead is some kind of estimate based on the amount of available write capacity left on the drive. I wasn't monitoring it constantly either so perhaps it just jumped up to 14TB when it noticed that writes were failing and free space had dropped to zero.

It was a non-scientific test, YMMV etc etc.

Re:Call me old fashion (1)

rsmith-mac (639075) | 1 year,2 days | (#44600117)

Modern SSDs do a lot of compression and de-duplication to reduce the amount of data they write.

That is only true for SandForce based drives as the tech behind it is LSI's "secret sauce". Samsung, Marvell, and Toshiba do not do any kind of compression or dedupe; they write out on a 1:1 basis.

The latter group could probably create their own compression and dedupe tech if they really desired to, but it's a performance tradeoff rather than something that has clear and consistent gains. SandForce performance is more bursty than 1:1 writing, since the content matters.

Only some do (2)

Sycraft-fu (314770) | 1 year,2 days | (#44600739)

New Intel drives do, as they use the Sandforce chipset. However Samsung drives don't. Samsung makes their own controller, and they don't mess with compression. All writes are equal.

Also, 14TB sounds a little low for a write limit. MLC drives, as the X25-M was, are generally spec'd at 3000-5000 P/E cycles. Actually it should be higher, since that is the spec for 20nm-class flash and the X25-M was 50nm flash. Even assuming 1000 cycles, and assuming a write amplification factor of 3 (it usually won't be near that high), you are talking 52TB if the drive has no internal overprovisioning, which it probably does.
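For anyone who wants to check that estimate, here is a minimal Python sketch of the same back-of-the-envelope math. The 1000-cycle and 3x write-amplification figures are the assumptions from the comment above, not a manufacturer spec.

<ecode>
# Rough total-bytes-written estimate from capacity, P/E cycles, and write
# amplification. Figures are the commenter's assumptions, purely illustrative.
def tbw_estimate(capacity_gb, pe_cycles, write_amplification):
    """Host-visible terabytes writable before the flash is nominally worn out."""
    return capacity_gb * pe_cycles / write_amplification / 1000.0  # GB -> TB

print(tbw_estimate(160, 1000, 3))  # ~53 TB, matching the ~52TB figure above
print(tbw_estimate(160, 3000, 3))  # ~160 TB with the 3000-cycle MLC spec
</ecode>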

As an example, AnandTech tested a Samsung 840 TLC drive. The 250GB drive was able to take about 266TB of incompressible data, which translates to a bit more than 1000 P/E cycles.

If you have a high write workload, their MLC drives aren't that much more expensive. A 512GB 840 Pro drive will run you about $450. That should get you somewhere in the realm of 1.5PB of writes before it fails, maybe more.

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44599655)

I work for a company that makes flash arrays. We push them much harder than using one for your c:\ drive, and yes, they DO wear out.

Re:Call me old fashion (1)

ericloewe (2129490) | 1 year,2 days | (#44599167)

Some reviewers take popular devices and see if they can kill them by bombarding them with writes.

So far, the consensus is that, for typical consumer workloads, the limits on NAND writes are high enough not to be a problem, even with Samsung's TLC NAND.

Same should apply to heavy professional workloads when using decent devices (Samsung 830/840 Pro and similar).

As for servers, the question is a bit more difficult to answer, but even assuming a very bad case, SSDs make sense if they can replace a couple of mechanical drives (throughput is what matters most, not the amount of data stored).

Re:Call me old fashion (2, Informative)

Anonymous Coward | 1 year,2 days | (#44599047)

Well said.

Nothing lasts forever. If a hard-driven SSD lasts 3-4 years, I don't really care if it's used up some large fraction of its useful lifetime, because I'm going to replace it just like I'd replace a 4-year-old spinning disk.

And the replacement will be cheaper and better.

And if the SSD was used to serve mostly static data at the high speed they provide, then it's not going to have used up its write/erase cycle lifetime by then anyway.

Re:Call me old fashion (3, Insightful)

Rockoon (1252108) | 1 year,2 days | (#44599339)

Indeed. I just don't see how the erase-limit issue applies for most people. The most common activity where it might apply is in a machine used as a DVR (don't use an SSD in a DVR), with the next being a heavily updated database server (you may still prefer the SSD if transaction latency is important).

For people that use their computers for regular stuff like browsing the web, streaming video off the web, playing video games, and software development... then get the damn SSD -- it's a no-brainer for you folks. You will love it, and it will certainly die of something other than the erase limit long before you approach that limit.

Re: Call me old fashion (2)

Zeinfeld (263942) | 1 year,2 days | (#44600547)

Hmmm, I replace my hard drives when I start to see RAID errors. I don't plan to run SSD RAID, as the on-board fault tolerance should be OK.

It would be nice to have hard data on expected failures so that I know whether to plan for a three- or a six-year lifespan. I generally replace my main machine on a six-year cycle as I have a lot of expensive software. Looking to upgrade this year when the higher-performance Intel chips launch.

1TB is quite a lot. Probably more than I need in solid state. The price is also quite a bit more than the $0.05/GB for hard drives. But it's getting a lot narrower. And RAID 1 doubles that cost anyway...

Re:Call me old fashion (1)

MobSwatter (2884921) | 1 year,2 days | (#44599207)

Ok, you're old fashioned.

This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...

--Q

All things considered, if they do not shatter the 200k read/write cycle barrier of flash memory, there is no algorithm that can improve upon standard HDD platters. Sure, they can make a flash bit live a little longer and provide better throughput to the drive, but the technology overall cannot survive a superior storage technology. Stop kicking a dead horse and move on to something new.

Re:Call me old fashion (2)

Tapewolf (1639955) | 1 year,2 days | (#44599269)

Ok, you're old fashioned.

This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...

--Q

Technically it becomes less and less reliable each time they do a die shrink on the flash. Adding a whole extra bit level makes things worse still. In the early 2000s you were looking at 100'000 P/E cycles, maybe a million for the really good stuff. Good TLC memory seems to be rated around 3000, with a figure of 1000 being widely quoted, and in some cases, less.

Realistically, they've designed the drive to fight tooth and nail to avoid doing rewrites, and in actual fact it looks like they've put a layer of fast SLC cache in front (i.e. the million-cycle stuff). What could be more interesting is the retention period - if the thing is left powered off for three months it could well be left unreadable.

Re:Call me old fashion (4, Interesting)

Rockoon (1252108) | 1 year,2 days | (#44599393)

Technically it becomes less and less reliable each time they do a die shrink on the flash. Adding a whole extra bit level makes things worse still. In the early 2000s you were looking at 100'000 P/E cycles, maybe a million for the really good stuff. Good TLC memory seems to be rated around 3000, with a figure of 1000 being widely quoted, and in some cases, less.

Let's not neglect the fact that while every die shrink does reduce the erase limit per cell, it also (approximately) linearly increases the number of cells for a given chip area. In other words, for a given die area the erase limit (as measured in bytes, blocks, or cells) doesn't actually change with improving density. What does change is overall storage capacity and price.

When MLC SSDs dropped from ~2000 cycles per cell to ~1000 cycles per cell, their capacities doubled (so erases per device remained about constant) and prices also dropped from ~$3/GB to about ~$1/GB. Now MLC SSDs are around ~600 cycles per cell, their capacities are larger still (again, erases per device remain about constant), and they are selling for ~$0.75/GB (and falling).

By every meaningful measure these die shrinks improve the technology.

So now let's take it to the (extreme) logical conclusion, where MLC cells have exactly 1 erase cycle (we have a name for this kind of device: WORM, Write Once Read Many). To compensate, the device capacities would be about 600 times today's capacities, so in the same size package as today's 256GB SSDs we would be able to fit a 153TB SSD WORM drive, and it would cost about $200.
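A quick sketch of that "erases per device stay constant" point, using the rough cycle counts quoted above and illustrative capacities (these are not specific products):

<ecode>
# Per-device erase endurance across die shrinks: cycles per cell drop, but
# capacity for the same die area roughly doubles, so the product is about flat.
# Capacities here are illustrative, not actual SKUs.
generations = [
    ("older MLC,  ~2000 cycles", 128, 2000),
    ("newer MLC,  ~1000 cycles", 256, 1000),
    ("current MLC, ~600 cycles", 512, 600),
]

for label, capacity_gb, cycles in generations:
    device_endurance_tb = capacity_gb * cycles / 1000.0  # raw GB of erases -> TB
    print(f"{label}: {capacity_gb} GB x {cycles} = ~{device_endurance_tb:.0f} TB of erases per device")
</ecode>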

Re:Call me old fashion (5, Informative)

beelsebob (529313) | 1 year,2 days | (#44599057)

Yes, many sites have done the maths on such things. The conclusion "finite life" is not the same thing as "short life". SSDs will, in general, outlast HDDs, and will, in general, die of controller failure (something which affects HDDs too), not flash lifespan.

The numbers for the 840 (which uses the same flash, with the same life span) showed that for the 120GB drive, writing 10GB per day, you would take nearly 12 years to cause the flash to fail. For the 240/480/960 options for the new version you're looking at roughly 23, 47 and 94 years respectively. Given that the average HDD dies after only 4 years (yes yes yes, we all know you have a 20 year old disk that still works, that's a nice anecdote), that's rather bloody good.
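A minimal sketch of where figures like those come from. The effective cycle count below is a guess chosen so the output roughly matches the 12/23/47/94-year figures quoted above (it folds write amplification and spare area into one fudge factor); treat it as an illustration of the scaling, not a spec.

<ecode>
# Lifetime in years for a given drive size and daily write volume, assuming a
# single "effective cycles per cell" figure (an assumption, not a datasheet value).
def years_of_life(capacity_gb, gb_written_per_day, effective_cycles=365):
    total_writable_gb = capacity_gb * effective_cycles
    return total_writable_gb / gb_written_per_day / 365.0

for size_gb in (120, 240, 480, 960):
    print(f"{size_gb} GB at 10 GB/day: ~{years_of_life(size_gb, 10):.0f} years")
</ecode>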

Re:Call me old fashion (1)

jamesh (87723) | 1 year,2 days | (#44599073)

have they solved the corruption-on-power-failure issues yet?

Re:Call me old fashion (3, Informative)

beelsebob (529313) | 1 year,2 days | (#44599083)

Yes, they were solved in a firmware patch a long time ago.

Re:Call me old fashion (1)

Anonymous Coward | 1 year,2 days | (#44599109)

Really? So ZDNet was testing obsolete drives this March? http://www.zdnet.com/how-ssd-power-faults-scramble-your-data-7000011979/

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44599131)

Noting that March is a long time ago in tech terms, and that one of the (incredibly small sample of 2) HDDs suffered issues as well.

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44599157)

They do indeed suffer from this though, at least earlier drives did. See Vertex 2.

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44599163)

Yes, the 840 did indeed suffer from this, but as I said up the thread, the firmware was patched to fix the issue.

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44599599)

If so, that's a very important claim. Got a link to the Samsung release note or other official documentation that confirms it?

Re:Call me old fashion (1)

Immerman (2627577) | 1 year,2 days | (#44599679)

Better yet, have you got a link to an independent test that confirms whatever Samsung may be claiming?

Re:Call me old fashion (3, Informative)

Anonymous Coward | 1 year,2 days | (#44599335)

Power failure?

You don't have a UPS or other standby power source available? You know it's 2013, right...

Willing to spend hundreds on an ultra-fast STORAGE device and have no backup power available? Really? Come on...

That's some messed up priorities there... Spend a hundred bucks on a UPS already.

Then you don't ever have to worry about data corruption. Or the much more common... Loss of unsaved work due to power failure...

Re:Call me old fashion (1)

Immerman (2627577) | 1 year,2 days | (#44599735)

I guess I'm old school; I got used to saving regularly back before UPSes were a consumer product. If I lose power 3 times a year I've lost a total of maybe 15 minutes of work, and it's a rare year where I have three power outages while working. So the insurance of a UPS would cost me $100 / 0.25 hours = $400/hour. Even if it lasts a decade, that comes to $40/hour for saving my ass from some inconvenience. Pretty steep price.
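The arithmetic above, spelled out (the dollar figures and outage estimates are the commenter's own, not data):

<ecode>
# Cost per hour of work saved by a consumer UPS, using the commenter's figures.
ups_cost = 100.0                        # dollars
outages_per_year = 3
work_lost_per_outage_hours = 5 / 60.0   # ~5 minutes each

hours_saved_per_year = outages_per_year * work_lost_per_outage_hours  # 0.25 h
print(ups_cost / hours_saved_per_year)          # $400 per hour saved if it lasts a year
print(ups_cost / (hours_saved_per_year * 10))   # ~$40 per hour saved over a decade
</ecode>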

Re:Call me old fashion (1)

AlternativeIdeas (3013367) | 1 year,2 days | (#44599917)

And let's not forget, when using an SSD on a laptop the UPS is "built-in".

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44600421)

Unless your battery dies.

Re:Call me old fashion (1)

AmiMoJo (196126) | 1 year,2 days | (#44600359)

We get interruptions to our supply less than once every five years. Even at 95% efficiency a UPS would cost a fair bit to run. It would be better if, like hard drives, SSDs were simply designed not to die in the event of unexpected power failure.

Data corruption isn't an issue with modern file systems. I suppose there is loss of work, but the cost/benefit ratio is too low.

UPSes are usually near 100% efficient (1)

Sycraft-fu (314770) | 1 year,2 days | (#44600533)

Most UPSes these days are line-interactive. That means they are not doing any conversion during normal operation. The line power is directly hooked to the output. They just watch the line level. If the power drops below their threshold, they then activate their inverter and start providing power. So while their electronics do use a bit of power being on, it is very little. The cost isn't in operation, it is in purchasing the device and in replacing the batteries.

That aside, SSDs don't have problems with it (it was a firmware bug, Samsung fixed it), and if your data is important, you probably don't want to rely on your journal to make sure it is intact. When you get into real high-end, reliable storage, power backup is a big thing. Our EqualLogic has dual fully redundant power supplies on all units, which they wanted plugged into separate circuits (one is line only, one is generator-backed), redundant controllers, and the NAS has internal batteries backing the cache in case of power failure, ones that last quite a while.

There's a big difference between "a journal that means the filesystem isn't in an inconsistent state (usually)" and "a setup where one doesn't lose any data."

If you are concerned about efficiency costing you money in your computers (it likely costs less than you think) then your PSU is the place to look. If you didn't specifically buy a good one, it is probably 80% or less efficient. You can get them a bit above 90% if you try, and match them to the load.

Re:Call me old fashion (1)

maxwell demon (590494) | 1 year,2 days | (#44599139)

Given that the average HDD dies after only 4 years

I guess I must have had exceptionally good HDDs. I only had one HDD failure (and that happened after a power source failure, so I suspect it was caused by that), and with the exception of some of those currently in use, all of my HDDs (including the failed one) have been in use for more than four years. And in almost all of my computers, I had more than one HDD.

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44599381)

TLC is the worst tech though. Bring back SLC, or even MLC :/

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44599521)

Why? Once again, actually figuring out the life of these devices shows that TLC devices will live a perfectly acceptable length of time, so why should we use SLC or MLC for consumer devices?

Re:Call me old fashion (1)

Traciatim (1856872) | 1 year,2 days | (#44599443)

Of course, that math is done on the assumption that 10GB per day can be spread over the entire drive, which isn't the case once you have 100GB of data on it; suddenly that 10 years gets reduced to 1.7 years. And that's the estimated mean time to failure, meaning the actual failure point is probably within +/- 50% of that, so somewhere between 0.85 and 2.55 years is likely. That's bordering on the realm of "not a reliable place to put data". Of course, your important data should probably be stored in multiple locations locally, as well as an additional copy in another physical location, if you really want to keep it, but citing those figures is not anywhere near a reasonable usage pattern for most drives.

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44599531)

Actually, no, that maths was done with the assumption that wear levelling would be able to do the average case job for a consumer drive which is reasonably full. If you assume ideal conditions the life span is in fact much longer than that.

Re:Call me old fashion (1)

Traciatim (1856872) | 1 year,2 days | (#44600343)

You mean this [xtremesystems.org] test, where almost all the drives are tested while mostly empty, and the drives that are used with large amounts of static data fail in extremely short periods of time?

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44600603)

No, I mean this [anandtech.com], which has a detailed explanation of what's going on, and why you shouldn't care.

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44599699)

The conclusion "finite life" is not the same thing as "short life". SSDs will in general, outlast HDDs, and will in general die of controller failure (something which affects HDDs too), not flash lifespan.

This is total bullshit. Every single SSD I have owned has failed within a year. Just from normal development work and use with virtual machines.

On the other hand I have hard drives with over 70000 powered-on hours still going strong. Hell, the primary RAID array in my workstation is made from drives that all have over 50000 powered-on hours on them.

Re:Call me old fashion (1)

beelsebob (529313) | 1 year,2 days | (#44599899)

This is total bullshit. Every single SSD I have owned has failed within a year.

Damn it, I covered off one anecdote approach, and you found a different one.

For those who don't know how anecdotes work: if your sample size is incredibly small, you cannot draw meaningful results from it. I don't care if you have a single 20-year-old disk, 20 six-year-old disks, or 20 SSDs that failed within the first year; none of these give you a full enough picture to actually tell you what's going on.

The actual stats on return rates show that SSD return rates are around 0.5% of all drives per year, while HDD return rates are about 5% of all drives per year. There's one notable exception, which is that if you bought OCZ drives a couple of years ago, you were looking at about a 10% failure rate.

The bottom line is, SSDs are now more reliable, and longer lasting, than hard disks in a consumer setting. In an enterprise setting, you need to be careful and make sure that you get enterprise-level SSDs, because your write patterns will likely involve far more writing than any consumer will ever do.

Re:Call me old fashion@beelsebob (0)

Anonymous Coward | 1 year,2 days | (#44600321)

Do your math... return rates of 5% "of all drives sold" (with fewer SSDs sold per year than HDDs) is a much higher return rate... (or did you mean "of all drives of the same type"?)

I don't buy it when you discount a large number of anecdotal experiences with short-lived SSDs... add up all the anecdotes and eventually you reach reality when most of the stories match...

In the end, speed is nice, if you need it... and if you care about your data you will have a backup or three...

Personally I buy new drives to use as offline backup every year... (For some reason I still need more and more space, despite deleting old data/programs that have lost relevancy) Currently 3TB for Data and 2TB for Applications... At this rate I should need near 20TB for a full backup by 2018, which is fine.

If rotating magnetic media were to go away, I don't think I would be bothered... I'd use SSDs in the same fashion: turn on once a month or so, back up, turn off... The lifetimes of such data should be decades, something no magnetic-based drive can match.

I expect the industry will not phase out spinning media until there is a more reliable medium for consumers.

It's AM radio I am HOPING will be phased out and re-purposed very soon... currently it's just a bunch of noise. ;-)

What I find interesting in the OP: The title says drives tested, yet the post has no numbers for read/write speed or metrics other than price per GB....

Been here long enough to know many /. titles don't match the articles well at all...

Cheers, everyone...

Re:Call me old fashion@beelsebob (1)

beelsebob (529313) | 1 year,2 days | (#44600559)

I don't buy it when you discount a large number of anecdotal experiences with short-lived SSDs... add up all the anecdotes and eventually you reach reality when most of the stories match...

The problem is that for each person with anecdotal evidence of SSDs failing, there are 200 other people not commenting about their entirely working SSDs.

Re:Call me old fashion (1)

moosehooey (953907) | 1 year,2 days | (#44600141)

What is it with SSD controller failure?

The processor, bridges, memory controller and memory, and all the other chips in a modern computer, can run flat-out for many years without failure.

What makes the controller chips in an SSD fail so often? (And I don't believe you about the controllers in a HDD failing; I've never had one fail, or even known anyone who had one fail, out of hundreds of hard drives run for many years, but I've heard of several SSDs failing in just the few that my friends have tried.) Do they spend so much on the flash chips that they have to go that cheap on the controller chip?

Re:Call me old fashion (1)

sribe (304414) | 1 year,2 days | (#44600163)

Given that the average HDD dies after only 4 years...

Sorry, not even close.

Re:Call me old fashion (1)

iggymanz (596061) | 1 year,2 days | (#44600795)

You have hard facts? The 2007 Google study said about six years for enterprise and consumer-grade magnetic disks; however, for low-utilization disks most fail in only three years (contrary to most people's expectations).

Also most people don't write as much as they think (1)

Sycraft-fu (314770) | 1 year,2 days | (#44600167)

Usually, once you have your computer set up with your programs, you don't write a ton of data. A few MB per day or so. Samsung drives come with a little utility so you can monitor it.

As a sample data point I reinstalled my system back at the end of March. I keep my OS, apps, games, and user data all on an SSD. I have an HDD just for media and the like (it is a 512GB drive). I play a lot of games and install them pretty freely. In that time, I've written 1.54TB to my drive. So around 11GB per day averaged out, though realistically about 500GB of that was done the first day, since I installed the system, put the apps on, then changed my mind with regards to UEFI boot, and reinstalled the system.

I think some people believe that since they have a lot of data, they must write a lot and thus the write limit would be problematic. However the data you have is usually largely static. Your delta is fairly small and thus not that problematic to flash.

So while I wouldn't want to use TLC flash drives in a backup system or something, there really isn't an issue in a desktop. If you do have an atypical situation where you have very write intensive workloads, well you can always have a magnetic (or SLC flash) secondary disk for them. But for desktop usage, you just aren't going to write that much to your disk.

Re:Call me old fashion (1)

Kjella (173770) | 1 year,2 days | (#44600269)

I hear people say that, but I used my first SSD as a scratch disk for everything since it was so fast, and it burned through its 10k writes/cell in 1.5 years. My current SSD (WD SiliconEdge Blue 128GB) has been treated far more nicely and has been operational for 1 year 10 months; SSDLife indicates it'll die in about 2 years, for a total of 3 years 10 months. Granted, it's been running almost 24x7, but apart from downloads going to a HDD it's been idle most of that time; unlike a HDD, where the bearings wear out, idle time shouldn't matter much for SSD lifetime, only bytes written. I'd not buy any consumer disk unless you consider it "expendable" and expect it to die in <5 years; personally I'll be looking at an enterprise disk next time around.

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44599069)

I am more worried about what happens when the power is pulled suddenly or a spike/dip comes through. I recall reading an article which showed the drive could be bricked like that: http://www.zdnet.com/how-ssd-power-faults-scramble-your-data-7000011979/
I know some areas are very susceptible to power outages...

Re:Call me old fashion (2)

Joce640k (829181) | 1 year,2 days | (#44599091)

Modern SSDs have big capacitors in them to avoid that (well, some of them do...). They can complete pending writes on capacitor power alone.

Re:Call me old fashion (1)

Anonymous Coward | 1 year,2 days | (#44599405)

No, they don't!
Only some server SSDs have capacitors. They are considered too "expensive" to be put on the consumer versions.

Re:Call me old fashion (1)

Joce640k (829181) | 1 year,2 days | (#44600455)

Isn't "some" the word I used, AC?

Re:Call me old fashion (0)

Anonymous Coward | 1 year,2 days | (#44600701)

Power spikes can ruin any electronic equipment, it's not limited to SSDs. If you have power issues, get a UPS, they aren't expensive.

Re:Call me old fashion (1)

bemymonkey (1244086) | 1 year,2 days | (#44599071)

For an OS drive (page file off, 8+GB of RAM), I don't see any "premium" (i.e. non-cheapo) SSD with 120+GB of capacity failing within 10 years... There are a few forum posts where people have actually tested how much data they could write to SSDs (i.e. permanently writing at max. speed until the drive fails), and the results are pretty good. The few drives that did eventually break just switched to read-only mode... Can't for the life of me find the damned thread though. Maybe someone here knows which one I'm talking about? It was in one of the usual tech forums...

As a cache drive for audio/video or similar "write-multiple-gigabytes-per-minute" applications, well, make sure you've got enough cash in the bank for the replacement drive...

Re:Call me old fashion (3, Interesting)

beelsebob (529313) | 1 year,2 days | (#44599087)

The problem with such tests of writing as much as you can as fast as you can is that they're rather deceptive. They don't allow TRIM and wear levelling to do their thing (as they normally would), and hence show a much worse scenario than you would normally be dealing with. Actual projections of real life usage patterns writing ~10GB to these drives per day show you can get their life span in years (specifically the 840 we're talking about here) by dividing the capacity (in gigabytes) by 10.

Re:Call me old fashion (1)

bemymonkey (1244086) | 1 year,2 days | (#44599107)

That's definitely true, but with the drives not showing any signs of abnormally early failure even in the worst-case scenario, I'd say that's a good thing. :)

Re:Call me old fashion (5, Interesting)

Gaygirlie (1657131) | 1 year,2 days | (#44599117)

How many effective read/write cycles can the chips in an SSD perform before they start degrading?

They don't start degrading, per se. Performance degradation is all due to wear-levelling and the amount of free blocks on the drive, and that varies between manufacturers. Generally the advice is to keep at least 20% of the drive free at all times for wear-levelling and TRIM to work efficiently, and in such a situation there should be no performance degradation.

As for reading and writing cells? Well, you can read a cell indefinitely. You cannot write to cells forever, however, and once the limit comes there is 100% degradation -- so to speak -- in that that cell cannot be written to ever again. It just goes from 100% to 0%, so using the term "degradation" for that still seems useless. I'll repeat, though, that it can still be read from even if it can't be written to.

Has there been any comparison made between the reliability (e.g. read/write cycles) of old-fashioned spinning-platter HDDs versus that of SSDs?

Plenty, but how much those comparisons actually cover and how reliable they are is subject to debate. Generally the consensus is that SSDs are more reliable nowadays, as full-on controller failures are very rare, and since SSDs can still be read from even if they hit the maximum number of writes, your data is quite a lot safer in the long run -- if a regular mechanical drive can't write to some sector, it most likely can't read it either, and that means your data is as good as gone.

Re:Call me old fashion (-1)

Anonymous Coward | 1 year,2 days | (#44599121)

They're just like these newfangled horseless carriages! Work of the devil I say.

Seriously gramps - you're living in the past. Now get off my lawn!

Re:Call me old fashion (0)

gigaherz (2653757) | 1 year,2 days | (#44599181)

Read cycles? many millions. Write cycles? not so many, but a lot. ERASE cycles? around the 10k order of magnitude, for the latest generation TLC. And it will get smaller as they shrink the capacity of the floating gates (electrons get stuck there, and it fills quicker).

Re:Call me old fashion (1)

citizenr (871508) | 1 year,2 days | (#44599217)

ERASE cycles? around the 10k order of magnitude, for the latest generation TLC. And it will get smaller as they shrink the capacity of the floating gates (electrons get stuck there, and it fills quicker).

haha, try 300-500 for latest smallest TLC

Re:Call me old fashion (1)

gigaherz (2653757) | 1 year,2 days | (#44599289)

With wear leveling?

Re:Call me old fashion (1)

Immerman (2627577) | 1 year,2 days | (#44599809)

Isn't wear leveling mostly orthogonal to erase cycles? I.e. it tries to spread out the erasures evenly among the flash blocks, but doesn't actually change how many times any given block can be erased.

In fact I would think wear leveling would actually *decrease* the total number of user visible erase cycles for a drive since it means that infrequently changing data is continuously being shuffled from low-usage areas to high-usage ones to make the low-usage areas available, and every one of those user-invisible moves is another erase/write cycle lost.

Re:Call me old fashion (1)

gigaherz (2653757) | 1 year,2 days | (#44599963)

Not quite: in any practical usage, there are a lot of sectors that are only written once, and a few sectors that are written a lot more often. As an example: system files, and the TEMP folder, respectively. With wear leveling, the controller is free to swap them around, so that all the sectors are used more evenly.

Of course, that sector rotation could result in the TOTAL number of writes/erases going up, but without wear leveling, all the sectors corresponding to the often-written files (TEMP, logs, pagefile/swap, ...) would die out quite quickly.
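Here is a toy simulation of that swap, for anyone who wants to see the effect. The block counts, erase limit, and swap policy are made up for illustration; real controllers are far more sophisticated.

<ecode>
# Toy model: a few "hot" logical sectors receive nearly all writes. Without wear
# levelling the physical blocks under them die quickly; with levelling, cold data
# is periodically relocated so every physical block shares the erase load.
import random

BLOCKS, LIMIT, HOT = 64, 1000, 4   # physical blocks, erases per block, hot sectors

def host_writes_until_first_death(wear_level):
    wear = [0] * BLOCKS
    mapping = list(range(BLOCKS))        # logical sector -> physical block
    writes = 0
    while True:
        logical = random.randrange(HOT)  # almost all traffic hits the hot sectors
        phys = mapping[logical]
        wear[phys] += 1
        if wear[phys] >= LIMIT:
            return writes                # this block just wore out
        writes += 1
        if wear_level and wear[phys] % 50 == 0:
            # Move the hot sector onto the least-worn block; relocating the cold
            # data that lived there costs one extra erase on that block.
            coldest = min(range(BLOCKS), key=lambda b: wear[b])
            other = mapping.index(coldest)
            mapping[logical], mapping[other] = coldest, phys
            wear[coldest] += 1

random.seed(0)
print("no levelling:  ", host_writes_until_first_death(False))
print("with levelling:", host_writes_until_first_death(True))
</ecode>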

Re:Call me old fashion (1)

Immerman (2627577) | 1 year,2 days | (#44600745)

Well yeah - which is why I said it's orthogonal: wear leveling has absolutely zero effect on the number of erase cycles a given block can handle, it just spreads the load around since otherwise some blocks will get hammered while others are almost never modified.

Similarly the number of write cycles available has almost no effect on wear leveling beyond setting the "danger limits" for each block.

Ah, okay, I think I see the source of confusion - yes, in most usage patterns wear leveling will dramatically increase the number of erases before the first block failure, but it will also dramatically decrease the number of erases before the *last* block failure. Which would be relevant in some hypothetical drive that gracefully lost capacity as blocks "died"

Re:Call me old fashion (1)

Rockoon (1252108) | 1 year,2 days | (#44599435)

around the 10k order of magnitude, for the latest generation TLC.

You are thinking of SLC, not TLC, and are also probably off by a generation.

Re:Call me old fashion (1)

Immerman (2627577) | 1 year,2 days | (#44599869)

So what exactly is the difference between a write cycle and an erase cycle in practical terms? Yeah I know and erase block typically contains many write blocks, but for any given write block I can't write to it a second time until it's erased, therefore it would seem the number of write cycles can't possibly exceed the number of erasures.

Or is there some marketing mumbo-jumbo being applied to the terms now?

Re:Call me old fashion (0)

gigaherz (2653757) | 1 year,2 days | (#44599931)

It's a bit like this:

Flash memory has a special transistor with a gate that's disconnected from the circuit. This gate can be loaded with electrons by sending them at a high enough voltage, or "sucked" empty by quantum tunneling.

I don't know exactly which is write and which is erase but: the write operation is less costly, but it can only turn bits off but not back on (or the other way around). When you need to reset the bits, so that they can be turned back off, you have to erase the sectors. The erase operation is more costly, and wears down the "floating gate" a lot quicker. Since it's more costly, it's done on a larger group of sectors at the same time. This means that you can reset a whole group, but then only write a piece of it. Here's where the famous TRIM comes to help: without TRIM, the flash controller doesn't know which sectors of an erase group are still used, so when the OS asks for a sector write that requires an erase operation, it has to read all of the sectors, erase, and then write all of them. With TRIM, it can ignore the sectors that are marked as empty.

The reason erase cycles are limited is that in those "inject"/"vacuum" cycles, some electrons get stuck inside the gate, and the smaller the gates get, the quicker they fill up.
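A small sketch of the read-modify-write cost described above, using the commonly quoted 256KB erase block and 4KB page sizes (illustrative figures only):

<ecode>
# Pages that must be copied when one page inside a full erase block is rewritten.
# Pages the OS has marked free via TRIM don't need to be preserved.
PAGES_PER_BLOCK = 256 * 1024 // (4 * 1024)   # 64 pages per erase block

def pages_to_copy(valid_pages, trimmed_pages, page_being_rewritten):
    """How many pages get read and rewritten for this one host write."""
    must_keep = valid_pages - trimmed_pages - {page_being_rewritten}
    return len(must_keep)

full_block = set(range(PAGES_PER_BLOCK))
print(pages_to_copy(full_block, set(), 0))             # no TRIM info: 63 pages copied
print(pages_to_copy(full_block, set(range(1, 41)), 0)) # 40 pages trimmed: only 23 copied
</ecode>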

Re:Call me old fashion (1)

Immerman (2627577) | 1 year,2 days | (#44600659)

Certainly, but my point is that you can't meaningfully write to a cell without first erasing it, so the number of writes to a cell cannot exceed the number of times it block has been erased.

Perhaps it will be clearer if you consider an SSD where each cell can be erased independently - it would be more expensive to make, but in principle there's no reason you couldn't do such a thing.

Re:Call me old fashion (2)

Rockoon (1252108) | 1 year,2 days | (#44600159)

So what exactly is the difference between a write cycle and an erase cycle in practical terms?

The difference is that there is no such thing as a "write cycle." The guy that you are replying to doesn't actually know much about what he is talking about.

In regard to the general difference between writes and erases, the terms on the table are supposed to be write amplification, block size (typically 256KB), and page size (typically 4KB). Write amplification occurs when data smaller than a page is frequently written or "modified."

In practice write amplification is typically below 2x on modern controllers, and obviously always larger than 1x.

Worst-case write amplification is horrendous at 4096x, but the typical scenario where near-worst-case amplification does occur turns out to be low-volume traffic in practice (for every near-worst-case page write the SSD experiences, it experiences thousands of near-best-case page writes).

The worst cases turn out to be log file writes, but that's very small change compared to other write activity (it's like worrying about that $1 surcharge on your $150/month cable bill).
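Write amplification is just the ratio of NAND bytes actually programmed to bytes the host asked to write. A minimal sketch, using the block and page sizes quoted above; the 4096x figure falls out if you assume 64-byte host updates each forcing a full block rewrite, which is one reading of the worst case mentioned here.

<ecode>
# Write amplification = NAND bytes programmed / host bytes written.
BLOCK = 256 * 1024   # bytes per erase block (typical figure from the comment)
PAGE = 4 * 1024      # bytes per page

def write_amplification(host_bytes, nand_bytes):
    return nand_bytes / host_bytes

print(write_amplification(PAGE, PAGE))   # best case: one page in, one page programmed -> 1.0
print(write_amplification(64, BLOCK))    # tiny update forcing a whole-block rewrite -> 4096.0
</ecode>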

EXACTLY what's held ME back from Flash SSD (0)

Anonymous Coward | 1 year,2 days | (#44600337)

"How many effective READ/WRITE cycle can the chip in SSD perform, before they start degrading? Has there been any comparison made in between the reliability (eg read/write cycles) of old fashion spinning-plate HD versus that of SSD?" - by Taco Cowboy (5327) on Sunday August 18, 2013 @05:24AM (#44598983)

See my subject line: when I see conclusive and consistent evidence of flash-based SSDs outlasting HDDs and showing superior overall longevity plus performance over decades, then I'll convert over.

Until then? I'll keep using the setup I've had for 2+ decades, since it has lasted that long across 5 different systems that still run to this very day using HDDs plus a "true SSD" NOT based on flash (DDR-2 based, currently):

Since 1992 or so, I first used separate HDDs (slower seek/access by FAR), and then software ramdisks applied per the list below (on an MS-DDK-based one I wrote, in fact).

Then I applied software-based ramdrives to database work with EEC Systems/SuperSpeed.com, doing MOST of the items in the enumerated list below (minus pagefile placement on software ramdrives; I just didn't have enough RAM onboard to justify it back then).

Then, once I got a CENATEK "RocketDrive" (PC-133 SDRAM based) with 4GB onboard, and now currently a Gigabyte i-RAM 4GB DDR-2 based hardware ramdisk card?

That all changed, to a better way to do it: dedicated RAM in hardware, with NO "flush" caused by memory pressure from other processes writing RAM and the memory-management subsystem responding to it by flushing software caches and their FIFO-queue-like algorithms.

I move the following off my WD VelociRaptor SATA II 10,000rpm 16MB-buffered hard disks, which are driven off a Promise EX-8350 128MB ECC RAM caching SATA 1/2 RAID controller, which defers/delays writes via said cache, lessens physical head movement on the disks (this is where I am going to make it even faster, by lessening its workload; read on), and reduces fragmentation as well in the same stroke ("bonus"), onto my 4GB DDR2 Gigabyte i-RAM PCI Express ramdisk card:

A.) Pagefile.sys (partition #1 1gb size, rest is on 3gb partition next - this I didn't do on software ramdrives though)
B.) OS & App level logging (EventLogs + App Logging)
C.) WebBrowser caches, histories, sessions & browsers too
D.) Print Spooling
E.) %Temp% ops (OS & user level temp ops environmental variable values alterations)
F.) %Tmp% ops (OS & user level temp ops environmental variable values alterations)
G.) %Comspec% (command interpreter location, cmd.exe in this case, & in DOS/Win9x years before, command.com also)
H.) Lastly - I also place my custom hosts file onto it, via redirecting where it's referenced by the OS, here in the registry (for performance AND security):

HKLM\system\CurrentControlSet\services\Tcpip\Parameters

(Specifically altering the "DataBasePath" parameter there which also acts more-or-less, like a *NIX shadow password system also!)
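For anyone curious how that redirection looks in practice, here is a minimal, hedged sketch using Python's winreg module. It must be run as Administrator, and the RAM-disk path is a made-up placeholder, not the poster's actual drive letter.

<ecode>
# Repoint Windows' hosts-file directory by changing the DataBasePath value under
# the Tcpip Parameters key (normally %SystemRoot%\System32\drivers\etc).
# Run elevated; D:\ramdisk\etc is a hypothetical RAM-disk location.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
NEW_PATH = r"D:\ramdisk\etc"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DataBasePath", 0, winreg.REG_EXPAND_SZ, NEW_PATH)
</ecode>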

* All of which lessens the amount of work my "main" OS and program drives (the slower mechanical hard disks) have to do, "speeding them up" by reducing their workload and fragmentation, and speeding up access/seek latency for the things in the list above too.

APK

P.S.=> The HDDs can then concentrate on program and/or data fetches that are still HDD-bound (and not yet cached by the kernel-mode disk-caching subsystem in the 4GB of DDR3 system RAM here), while everything in the list above lives on media with no heads to move, and thus less mechanical latency and faster seek/access than you get on hard disks, plus reduced filesystem fragmentation because of it all... and it works!

... apk

Missing disclaimer (0)

Anonymous Coward | 1 year,2 days | (#44598991)

Wow, MojoKid's lines read like an advertorial. So where's the disclaimer?
I'm missing the "I'm not the author of the linked article, or part of the publishing organization, and I do not receive kickback fees for the extra Slashdot traffic."

Still put off by price. :( (0)

Anonymous Coward | 1 year,2 days | (#44599007)

I bought a Samsung 840 128GB drive (not Pro) and I love it. Newegg had a deal going for like $109 and I jumped on it. It's fast enough and big enough for any OS. I use another rotational drive (Seagate 2TB) for my VMs, programs, and files. I really am satisfied with 128GB, using a larger secondary drive for everything else. The price of SSDs really makes you think about just how much space versus performance you need. I think I'm going to wait a little longer for the prices to come down.

Note: Anyone here know about any programs like SpinRite or others for drive recovery on SSDs? I'm really curious. Thanks. - A.B

Re:Still put off by price. :( (2)

Gaygirlie (1657131) | 1 year,2 days | (#44599055)

Note: Anyone here know about any programs like SpinRite or others for drive recovery on SSDs?

There is no such thing, except for the few that are just trying to dupe you into giving them money. Why? Well, as long as the drive's controller itself is working and the drive's internal state isn't corrupted, you can read the cells indefinitely. You cannot write to cells indefinitely, but all major manufacturers these days promise that even if all the cells failed in the whole drive you should still be able to read them. On the other hand, if the drive's controller goes bonkers or the internal state gets messed up, there is *no software whatsoever* that can fix it. You'd have to open the drive and work with the actual flash chips themselves in the hopes of recovering your data, and due to the nature of SSDs, where the cells can be re-located at any given time for wear-levelling purposes, that'd be one helluva task.

Now, if your filesystems or such get messed up any tool that works on mechanical HDDs works just fine on SSDs. There is no difference.

Re:Still put off by price. :( (1)

Immerman (2627577) | 1 year,2 days | (#44600043)

>all major manufacturers these days promise that even if all the cells failed in the whole drive you should be able to read them
And you should take such promises as having the same integrity as all other marketing claims - i.e. they're probably not blatant lies.

A Samsung 840 endurance test was posted in a comment above: they ran for ~3000 erase cycles until encountering the first unrecoverable read error, at which point they declared the drive "dead" - it still seemed to work, but data had already been lost and more losses were inevitable.
http://uk.hardware.info/reviews/4178/10/hardwareinfo-tests-lifespan-of-samsung-ssd-840-250gb-tlc-ssd-updated-with-final-conclusion-final-update-20-6-2013 [hardware.info]

There's also unpowered data retention to consider - you're storing your data as a partial charge in a capacitor - 1 bit/cap (2 levels) for SLC, 2bits (4 levels) for MLC, or 3bits (8 levels) for TLC. And every one of those capacitors starts losing charge the instant it's written to - it does so very slowly, and typically starts out lasting for years (IIRC), but as the caps start wearing out and the charge levels get "fuzzier" that number can eventually drop to days, though supposedly that's typically well past the drive's rated erase cycles (i.e. folks hammering it 24-7 and hitting 30,000x erase cycles)

But yeah, you're absolutely correct that data recovery tools aren't going to be much use for an SSD where data is stored in discrete "bins", and either it's there or it isn't. Unlike a HDD where data is stored as magnetic "footprints" on a continuous media where even intentionally erasing it will tend to leave a portion of the print behind in the gap between tracks, which can then generally be recovered through repeated head repositioning and statistical analysis.
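The SLC/MLC/TLC level counts mentioned above, in a couple of lines (the "margin" figure is a simplified view of how much of the cell's charge range separates adjacent levels, not a real device parameter):

<ecode>
# Bits per cell vs. charge levels: each extra bit doubles the number of levels
# the controller must distinguish, shrinking the margin between them.
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    levels = 2 ** bits
    margin = 1.0 / (levels - 1)   # simplified: fraction of full range between adjacent levels
    print(f"{name}: {bits} bit(s)/cell, {levels} levels, ~{margin:.0%} of range between levels")
</ecode>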

Re:Still put off by price. :( (1)

Joce640k (829181) | 1 year,2 days | (#44599065)

I bought a Samsung 840 128GB drive (not Pro) and I love it.

Yeah, we all like the look of the "hot" axis on the graph.

We're worried about the "crazy" axis. Most of us have a long term relationship with our data.

(for those who don't know: http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html [codinghorror.com] )

nb. I've been using a 40GB Intel SSD as my boot drive for a couple of years; it's still going strong AFAICT, but there aren't too many writes to that drive (swapfile and $home are on a VelociRaptor).

Not paying for TLC at that price (0)

Anonymous Coward | 1 year,2 days | (#44599143)

These are "1 year and out" drives. 1000-2000 P/E cycles will fly by in under a year in most prosumer applications. I'll pay hard disk per GB prices for a TLC SSD, but no more.

Re:Not paying for TLC at that price (3, Informative)

beelsebob (529313) | 1 year,2 days | (#44599165)

Except if you actually bothered to educate yourself, you'd find that at the capacities Samsung is offering, if you write to them at 10GB a day, every day, they'll last entirely respectable times (12, 23, 47, and 94 years respectively for the 120, 240, 480, and 960GB drives).

Re:Not paying for TLC at that price (0)

Anonymous Coward | 1 year,2 days | (#44600369)

And how am I supposed to edit my HD videos?

10GB??? That's a short video indeed... and if I am editing it I will be using more space... somehow an SSD doesn't sound so good.

I make a lot of videos, and it's not worth killing a drive in months. Make it magnetic media for me, thanks.

The Muzzies are coming (-1)

Anonymous Coward | 1 year,2 days | (#44599245)

The Muzzies are coming [youtube.com]
The Muzzies are coming
Everyone keep calm
They're Evil and they're violent
And they mean to do us harm

use a XEON next time toms hardware! (0)

Anonymous Coward | 1 year,2 days | (#44599331)

lol. It's nice to have somebody do all the new hardware testing through the years. I suppose it can get a bit "tiring" and "boring" after so many years.
But I must say that a Z68 chipset and a Core i7-2600K are a really bad test bed for I/O tests, non?
The Z68 and i7-2600K only have 16 PCIe lanes, and PCIe version 2. The "DMI" is 5 GT/sec only.
Seriously, it would be a bit more of a "test" if the test bed weren't constraining the whole I/O.
I'd recommend a beefy recent XEON (which today has a Sandy Bridge-E core) and a socket 2011 chipset.
This should FOR SURE eliminate all I/O CPU and chipset bottlenecks. After all, the Xeons for socket 2011 do 40 PCIe lanes, PCIe version 3, and QPI.
I'm sorry if I just blurted out a well-kept secret : )

Re:use a XEON next time toms hardware! (0)

Anonymous Coward | 1 year,2 days | (#44599361)

oh maybe also use the windows SERVER version.
i could imagine M$ doing their little "pay-more" tricks to
artificially limit speeds on desktop version of windblows. (assumption, but should try/test)
also don't install dotNET since it seems to do some
unique hardware compiling in the background to flatten out
the responsiveness of the system so that "slow" chips
and "fast" chips become equal, so that all vendors with
the gazillion hardware combinations
feel regularly "responsive" ... muha^^mono^^haha^^poly^^haha

Re:use a XEON next time toms hardware! (1)

beelsebob (529313) | 1 year,2 days | (#44599413)

You're talking about testing a device that doesn't even saturate 3 PCIe lanes, and complaining the test bed "only" has 16? Really?

Re:use a XEON next time toms hardware! (0)

Anonymous Coward | 1 year,2 days | (#44600077)

The proof is in eating the pudding, sir. Everything is theoretical until physically tested, non? Oui?
Anyways, my anecdote is that an i5-760 will max out at 250 MB/sec. The SSD is connected via a correct SATA 3 cable and an (advertised) SATA 3 port. The same SSD has done more than 300 MB/sec on another chipset/CPU platform, as tested by Tom's Hardware...
The ARK says the GT/s for the i5-760 is 2.5 GT/sec. Coincidence? I guess not (250MB/sec = 2.5GigaTransfers/sec = 2500 MegaTransfers/sec (= 2500/10)).
My assumption is that the chip responsible for doing the SATA 3 (in my case) is connected only via 1 (one) PCIe lane, maybe?
Furthermore, if during the test ANYTHING else connected via the PCIe bus (SATA (RAID?), video card, sound, network card(s), etc.) wants to send some data, then at least one PCIe lane is used (and becomes unavailable to the other stuff connected).
Connecting a GTX Titan to a system/CPU/chipset that IN TOTAL only has 16 PCIe lanes is a very strange thing to do, because the graphics card all by itself can/will/wants to use 16 lanes.
To be honest I don't really understand this whole PCIe 2/3, DMI, QPI, HyperTransport thing (and it will become more difficult to understand because everything is moving inside the CPU (northbridge, memory controller, video)). Good tests will become even MORE relevant in the future.
As a last note: I assume that the GT/sec value is for 1 (one) PCIe lane. If this is wrong and it is a TOTAL value for ALL advertised PCIe lanes (16 in Tom's Hardware's test case), then... *holy bottleneck batman*.
Further enlightenments are welcome : )

Re:use a XEON next time toms hardware! (0)

Anonymous Coward | 1 year,2 days | (#44600767)

Another armchair engineer talking about things he doesn't understand. Used to be cute, now it's just tiring.

NAND, or Exclusive NOR? (1)

smitty_one_each (243267) | 1 year,2 days | (#44599363)

The argument went on for about half an hour in EE lab, before the teacher came along and announced: "Yes."

Was the RAPID sw used throughout the test (1)

Marrow (195242) | 1 year,2 days | (#44599467)

One assumes this is Windows software. Did the competing drives have their drivers installed too? I would want to see its performance without drivers installed, used as a plain SATA drive. And I would like to see numbers with and without RAPID.

Is RAPID a sophisticated buffer cache that is doing lazy writes to the SSD?

Is anyone building home SANs out of SSDs yet? (1)

swb (14022) | 1 year,2 days | (#44599937)

In the 2-5 TB range?

I previously would have maybe wanted this but not been willing to spend the money or expose my storage to disk failure with consumer SSD.

I'm thinking now it's getting to the point where it might be reasonable. I usually do RAID-10 for the performance (rebuild speed on RAID-5 with 2TB disks scares me) with the penalty of storage efficiency.

With 512GB SSD sort of affordable, I can switch to RAID-5 for the improved storage efficiency and still get an improvement in performance.
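A quick sketch of the capacity side of that trade-off, using hypothetical 512GB SSDs (raw capacity only; it ignores hot spares, filesystem overhead, and SSD over-provisioning):

<ecode>
# Usable capacity for RAID-10 (mirrored pairs) vs RAID-5 (one drive of parity).
def usable_tb(drive_gb, n_drives, level):
    if level == "raid10":
        return drive_gb * n_drives / 2 / 1000.0
    if level == "raid5":
        return drive_gb * (n_drives - 1) / 1000.0
    raise ValueError(level)

for n in (4, 6, 8):
    print(f"{n} x 512GB  RAID-10: {usable_tb(512, n, 'raid10'):.2f} TB   "
          f"RAID-5: {usable_tb(512, n, 'raid5'):.2f} TB")
</ecode>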

Re:Is anyone building home SANs out of SSDs yet? (0)

dfghjk (711126) | 1 year,2 days | (#44600089)

A RAID array is not a SAN and "home SANs" are moronic. Arrays of SSDs are not a new idea. I've been using SSD arrays longer than I've been using single SSDs.

Applying traditional RAID redundancy techniques to SSD is stupid.

Re:Is anyone building home SANs out of SSDs yet? (1)

swb (14022) | 1 year,2 days | (#44600323)

A RAID array is not a SAN but I have yet to see an actual SAN configured JBOD only, if it's even an option. I'm not sure how you would aggregate the storage of single SSDs without RAID.

And I don't know what's moronic about home SANs, mine has ~7 TB storage and volumes exported via iSCSI and NFS to 3-4 systems.

What's wrong with RAID redundancy techniques for SSDs? Given the aggregation required for larger LUNs, I would think you would want to hedge against the risk of a device failure.

Re:Is anyone building home SANs out of SSDs yet? (0)

Anonymous Coward | 1 year,2 days | (#44600443)

NFS makes it a NAS, not a SAN.

captcha: cheapens

Re:Is anyone building home SANs out of SSDs yet? (1)

Anonymous Coward | 1 year,2 days | (#44600801)

There's nothing wrong with putting SSDs in RAID, and home SAN/NAS is not "moronic". dfl;hasdo has no idea what he's talking about, like an ever increasing number of /. posters, sadly.

Why? (1)

Sycraft-fu (314770) | 1 year,2 days | (#44600513)

You'd need a better network for it to be of any use. A modern 7200rpm drive is usually around the speed of a 1Gbit link, sometimes faster, sometimes slower depending on the workload. Get a RAID going and you can generally out-do that bandwidth nearly all the time.

SSDs are WAY faster. They can slam a 6gbit SATA/SAS link, and can do so with nearly any workload. So you RAID them and you are talking even more bandwidth. You'd need a 10gig network to be able to see the performance benefits from them. Not that you can't have that in your house, but you don't because it is damn expensive. Lacking that, you'll notice very little improvement over magnetic drives.
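Rough numbers behind that argument (ballpark throughput assumptions, not benchmarks):

<ecode>
# Decimal-unit, order-of-magnitude throughput comparison.
links = {
    "7200rpm HDD (sequential)": 150,        # MB/s, typical
    "1 GbE link":               1000 / 8,   # 125 MB/s
    "SATA 6Gb/s SSD":           550,        # MB/s, near the interface ceiling
    "10 GbE link":              10000 / 8,  # 1250 MB/s
}

for name, mb_s in links.items():
    print(f"{name:26s} ~{mb_s:6.0f} MB/s")
</ecode>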

Also to be technically correct (the best kind of correct) you probably don't have a SAN in your home. A SAN is a separate network, purely for storage devices, not connected to your LAN. It is a FC/FCoE/iSCSI/whatever backend that your storage devices talk on, and then there's a different network that your clients use to talk to the storage server (which is on both networks).

Re:Why? (1)

DigiShaman (671371) | 1 year,2 days | (#44600627)

Depends on what he's doing over that SAN via gigabit link. If he's fetching large files, I agree. If it's the latency he's trying to reduce (say, SQL queries), SSDs will help immensely; even over a limited gigabit link.
