
Taking a Hard Look At SSD Write Endurance

timothy posted about a year and a half ago | from the now-it's-just-a-budget-question dept.

Data Storage | 267 comments

New submitter jyujin writes "Ever wonder how long your SSD will last? It's funny how bad people are at estimating just how long '100,000 writes' are going to take when spread over a device that spans several thousand of those blocks across multiple gigabytes of memory. It obviously gets far worse with newer flash memory that is able to withstand a whopping million writes per cell. So yeah, let's crunch some numbers and fix that misconception. Spoiler: even at the maximum SATA 3.0 link speed, you'd still find yourself waiting several months or even years for that SSD to start dying on you."
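
For readers who want to sanity-check that claim, here is a minimal back-of-envelope sketch (not from the linked article) of how long it takes to consume a drive's rated program/erase cycles when writing flat out. The capacity, sustained throughput, and cycle ratings below are illustrative assumptions; perfect wear leveling is assumed and write amplification is ignored.

# Back-of-envelope SSD wear-out time, assuming perfect wear leveling
# and no write amplification. All figures are illustrative assumptions.

def wear_out_days(capacity_gb, pe_cycles, write_mb_per_s):
    """Days of continuous writing needed to consume every cell's rated cycles."""
    total_write_mb = capacity_gb * 1024 * pe_cycles    # total data the flash can absorb
    return total_write_mb / write_mb_per_s / 86400     # seconds -> days

# 256 GB drive at ~550 MB/s sustained (roughly what SATA 3.0 delivers in practice)
for cycles in (1_000, 3_000, 100_000):                 # TLC-ish, MLC-ish, SLC-ish ratings
    print(f"{cycles:>7} P/E cycles: {wear_out_days(256, cycles, 550):7.1f} days")
# Prints roughly 6, 17 and 550 days: "months or years" only at SLC-class ratings.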


Holy idiocy batman (4, Insightful)

Anonymous Coward | about a year and a half ago | (#42943825)

100000 writes? 1M writes?

What the fuck is this submitter smoking?

Newer NAND flash can sustain maybe 3000 writes per cell, and if it's TLC NAND, maybe 500 to 1000 writes.

Re:Holy idiocy batman (1, Troll)

h4rr4r (612664) | about a year and a half ago | (#42943843)

Citation needed.

Re:Holy idiocy batman (5, Informative)

ioconnor (2581137) | about a year and a half ago | (#42944135)

Citation needed? The manufacturers typically tell you. For instance, here http://www.newegg.com/Product/Product.aspx?Item=N82E16820239045 [newegg.com] it states: "Budget-minded gamers and enthusiasts will benefit from the lower price of Kingston’s new HyperX 3K SSD. This solid-state drive combines premium 3000 program-erase cycle Toggle NAND with the second-generation SandForce controller." So it gets only 3% of the author's most optimistic graph! Kind of a funny article, actually. It's like a mad scientist doing lots of good math while overlooking the most obvious piece of information, the very thing that the ding-bat brought along for comedy plot complications sees in a flash. I wrote a tutorial yesterday on how to make a RAM drive on Linux so as to avoid using your fancy fast flash drive. It can be found here: https://ioconnor.wordpress.com/2013/02/18/tutorial-on-automatically-moving-home-to-ram-drive-and-back-on-startup-and-shutdown/ [wordpress.com]

Re:Holy idiocy batman (5, Interesting)

ebh (116526) | about a year and a half ago | (#42944711)

RAM disks are cool and all, but except on live CDs they're usually unnecessary. The kernel's buffer cache and directory-name-lookup cache (in RAM) can often outperform RAM disks on second reads and writes.

(Claimer: I worked on file systems for HP-UX, and we measured this when we considered adding our internal experimental RAM FS to the production OS.)

Re:Holy idiocy batman (5, Informative)

Anonymous Coward | about a year and a half ago | (#42943847)

  • SLC NAND flash is typically rated at about 100k cycles (Samsung OneNAND KFW4G16Q2M)
  • MLC NAND flash used to be rated at about 5k – 10k cycles (Samsung K9G8G08U0M) but is now typically 1k – 3k cycles
  • TLC NAND flash is typically rated at about 1k cycles (Samsung 840)

Re:Holy idiocy batman (1, Flamebait)

h4rr4r (612664) | about a year and a half ago | (#42943877)

That is not a citation. That is you again making some claims. Please do provide a link to a source for this.

I am not disputing the accuracy, just that it coming from an AC does not inspire confidence.

Re:Holy idiocy batman (5, Informative)

jyujin (2701721) | about a year and a half ago | (#42944059)

I specifically had SLCs in mind when I ran the numbers. As for the 100k writes I used in my original calculations, I took those from this PDF: http://www.datasheetcatalog.org/datasheets2/16/1697648_1.pdf [datasheetcatalog.org] - see section 1.5, which lists "Endurance : 100K Program/Erase Cycles". As for the 1M write cycles: http://investors.micron.com/releasedetail.cfm?ReleaseID=440650 [micron.com] - that one came out in 2008, so using it as a baseline for "newer" SLCs didn't seem that far off. I'll have to revise the article to include those links, methinks...

Re:Holy idiocy batman (0)

Anonymous Coward | about a year and a half ago | (#42944505)

90% of SSDs are not SLC because the average person can't afford $1k for 60GB. The average case is 1k-3k write cycles, until they get that 800°C heat-annealed NAND.

Re:Holy idiocy batman (3, Insightful)

Anonymous Coward | about a year and a half ago | (#42944205)

He referenced specific models. A hyperlink is not the only way to refer to a source. You were given enough information to find the source easily.

Re:Holy idiocy batman (4, Funny)

craznar (710808) | about a year and a half ago | (#42943949)

Obviously the TLC NAND is named for the Tender Loving Care you need to give it during use.

I think the Slack Lazy Careless stuff is more robust.

Re:Holy idiocy batman (1)

afidel (530433) | about a year and a half ago | (#42944389)

True, those are typical values for value-oriented parts. There's also high-endurance SLC at ~1M cycles and eMLC at ~30k cycles; the downside is a much higher $/GB, so it only makes sense to use them in environments where you know you'll have long periods of high write intensity (like the write cache for a SAN or the ZIL for a ZFS volume).
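
As a rough way to see why the expensive parts can still win for write-heavy duty, here is a small sketch comparing cost per terabyte written rather than cost per gigabyte of capacity; the prices and cycle counts are made-up ballpark assumptions, not quotes.

# Rough cost per terabyte *written* for different NAND classes.
# Prices and cycle counts are placeholder guesses (circa 2013), for illustration only.

def dollars_per_tb_written(price_per_gb, pe_cycles):
    # Each GB of capacity can absorb roughly pe_cycles GB of writes over its life.
    tb_written_per_gb = pe_cycles / 1024.0
    return price_per_gb / tb_written_per_gb

nand_classes = {
    "SLC  (~1M cycles, ~$10/GB)": (10.00, 1_000_000),
    "eMLC (~30k cycles, ~$3/GB)": (3.00, 30_000),
    "MLC  (~3k cycles, ~$1/GB)":  (1.00, 3_000),
}

for name, (price, cycles) in nand_classes.items():
    print(f"{name}: ${dollars_per_tb_written(price, cycles):.3f} per TB written")
# Despite the highest $/GB, SLC comes out cheapest per byte actually written,
# which is why it can pay off for SAN write caches and ZFS ZIL devices.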

Re:Holy idiocy batman (5, Insightful)

CajunArson (465943) | about a year and a half ago | (#42943967)

The AC is dead-on right. At 25nm the endurance for high-quality MLC cells is about 3,000 writes. That's a relatively conservative estimate, so you are pretty much guaranteed to get the 3K writes and likely somewhat more, but it's a far, far cry from the 100K writes you can get from the highly expensive SLC chips. Intel & Micron claimed that one of the big "improvements" in the 20nm process was hi-K gates, which are claimed to maintain the 3K write endurance at 20nm; it would otherwise have dropped even further from the 25nm node.

The author of the article went to all the time & trouble to do his mathematical analysis without spending 10 minutes to find out the publicly available information about how real NAND in the real world actually performs....

Re:Holy idiocy batman (0)

Anonymous Coward | about a year and a half ago | (#42944241)

He proves things by experimenting, a real physicist!

Re:Holy idiocy batman (2, Interesting)

Anonymous Coward | about a year and a half ago | (#42944007)

A quick glance at wikipedia tells me that you're being rather pessimistic...

"Most commercially available flash products are guaranteed to withstand around 100,000 P/E cycles before the wear begins to deteriorate the integrity of the storage. Micron Technology and Sun Microsystems announced an SLC NAND flash memory chip rated for 1,000,000 P/E cycles on 17 December 2008."

http://en.wikipedia.org/wiki/Flash_memory#Memory_wear [wikipedia.org]

Re:Holy idiocy batman (2)

steviesteveo12 (2755637) | about a year and a half ago | (#42944173)

"Micron Technology and Sun Microsystems announced an SLC NAND flash memory chip rated for 1,000,000 P/E cycles on 17 December 2008."

Only if you're using SLC NAND, which is the fast, expensive, long lasting stuff. The other kinds (MLC/TLC) wear out much quicker.

Re:Holy idiocy batman (0)

Anonymous Coward | about a year and a half ago | (#42944717)

December 2008 was 5 years ago, which can easily be a few generations of flash and chip technology earlier.

Re:Holy idiocy batman (1)

beelsebob (529313) | about a year and a half ago | (#42944351)

On the other hand, while you're right that they're an order of magnitude and a half off on that, they're also deliberately 3-4 orders of magnitude or more off on the rate at which you write data, so in reality the likely lifespans are much longer than those listed in the article.

Tried It - Disappointed (1)

Anonymous Coward | about a year and a half ago | (#42943837)

I've done the math and always come out with years of expected use.

Each time I've tried an SSD it's failed after a year.

Now I use spinning platters. Cheaper, cooler, and they seem to last forever. I miss the speed, but I need my disks to last longer than a year. I've got 10-year-old 40GB disks still running fine.

Re:Tried It - Disappointed (2, Insightful)

Anonymous Coward | about a year and a half ago | (#42943897)

I have never had a laptop hard drive last more than two years, and only had one last more than eighteen months. Maybe your spinning-metal-one-micron-away-from-the-drive-head drives work well in a stationary, temperature-controlled environment, I guess.

I Know People Like You (1, Insightful)

Anonymous Coward | about a year and a half ago | (#42944027)

I have never had a laptop hard drive last more than two years, and only had one last more than eighteen months. Maybe your spinning-metal-one-micron-away-from-the-drive-head drives work well in a stationary, temperature-controlled environment, I guess.

I know people like you. Their laptops never last, their screens are always splattered and often cracked, their iPods and ear buds are always breaking, their power cords are always twisted and frayed.

But my stuff lasts for years. My present laptop, a Dell Precision, is dated 2007. It's been all over the world, in filthy closets, big server rooms, up radio towers, on boats... I'd like to get a new one. But, I can't justify the replacement because my present laptop is in MINT condition save for the battery. Mint.

Some people take care of their stuff, many people don't.

Re:I Know People Like You (1)

Anonymous Coward | about a year and a half ago | (#42944129)

Do you own your stuff, or does your stuff own you? If the laptop wears out at the same rate as it becomes technologically obsolescent, what's the problem? And as you pointed out, the battery degrades over time and a replacement, four or five years later, costs much more than the laptop is worth, so there's no winning anyway (because, after all, what good is a portable computer without a battery?).

Regardless, I noticed this problem about myself and splurged on a used Toughbook. Nothing on it breaks because it's actually well-built, to a level unlike any other laptop I've ever seen. Except the hard disk, because that's the only thing on it with delicate moving parts. Thankfully, SSDs are now cheap enough to stuff one in there as a replacement and forget about it, until I need to buy a new laptop because getting a new battery costs almost the same amount.

Re:I Know People Like You (2)

BlackSnake112 (912158) | about a year and a half ago | (#42944499)

For laptop batteries I have been told that they (the batteries) will not develop a memory. I have yet to find a rechargeable battery that doesn't. With a laptop it is easy to determine: you charge the battery until it is full, and then when running the laptop on battery the low-power warning pops up in 5-10 minutes (often less than 5 minutes). This is why I usually make a "drain battery" power plan with no auto shut-off. I can usually run the laptop at 0% battery life for 1-2 hours; then the laptop shuts off. Turn the laptop on and repeat. Once the laptop will barely POST before shutting off again, you charge the battery for however many hours it takes to get a full charge. This is a pain if you did not note that figure when you got the laptop. Mine is 12 hours; I have seen 8 hours, 24 hours, and 6 hours. You need to know what your laptop battery takes for a full charge. You can overcharge it (I did on older batteries) if you leave it charging for too long. After that full timed charge I use the laptop until the power runs out again, then I charge it and change the power setting back to what I normally use. I get my full life out of the battery again. I usually drain the battery 1-2 times a year. I have an 8-year-old laptop still on its original battery: I get 3 hours of use with no power saving and 6 hours on the power-saver setting. I do the same thing with my newer laptop. Until I see otherwise, I'll keep doing what I am doing.

I know that is not what the laptop companies tell you. My own experience with 100+ laptops and a few thousand other rechargeable batteries is that they all get a memory at some point. Draining them, doing the timed recharge, then using them until out of power resets the memory.

Re:I Know People Like You (1)

vipw (228) | about a year and a half ago | (#42944669)

Replacement batteries aren't usually that expensive. Definitely cheaper than a new laptop.

Re:I Know People Like You (1)

Anonymous Coward | about a year and a half ago | (#42944747)

OEM batteries are hilariously expensive.

Aftermarket batteries from some anonymous Chinese reseller on eBay are not. These may have more capacity than your five-years-degraded battery that you're replacing. Or not.

Re:I Know People Like You (0)

Anonymous Coward | about a year and a half ago | (#42944191)

I've had two laptops over the last 7 years that made regular trips between home and work. The first has a case that is still in mint physical condition (ok, the keyboard could probably be cleaned again), a battery that still lasts for 2+ hours, and over the years it had only two kinds of physical problems. It was killed after 5 years of use by a soldering defect on the GPU chip; based on the class-action lawsuit won against the manufacturer, that defect usually takes 1-2 years of slight motherboard flexing to produce a bad connection, but it took 5 years to show up at all for me. The second issue was the hard drive, which was replaced three times in those 5 years. The replacement, now with an SSD, has had no issues in 2 years: still a mint-condition case, and the battery is at 90+% of its capacity.

Such anecdotal information is not great for figuring out whether laptop hard drives suffer much more than desktop ones, but it is a counterexample to your idea that a high failure rate of one component must mean the device is being trashed by the owner.

Re:Tried It - Disappointed (3, Insightful)

Luckyo (1726890) | about a year and a half ago | (#42944221)

I have a very old laptop (I think I bought it circa 2004 or so; it has a Turion CPU). The display hinges failed, as did the cooling, so I can't play games on it anymore (discrete GPU).

The hard drive is trucking on fine.

Some hard drives obviously last less. However, if you have a systemic problem with hard drives lasting less than two years, it's time to take a look at the factor that remains the same between these hard drives: the user.

Re:Tried It - Disappointed (0)

Anonymous Coward | about a year and a half ago | (#42944693)

I'm currently using one of the company's disposable laptops which gets reimaged every time someone goes traveling. It's nearly six years old and the hard drive still works fine.

My old XP laptop from 2007 still works OK. One of the shift keys fell off the keyboard, but the hard drive is fine.

My 486 laptop's twelve-year-old hard drive was still 95% readable in 2007, after sitting in a cupboard for six years.

My current Toshiba laptop's drive did start to fail in just over a year, though.

Re:Tried It - Disappointed (0)

Anonymous Coward | about a year and a half ago | (#42943903)

Cheaper

In terms of dollars per gigabyte, sure. In terms of power consumption, not so much. In terms of "how much space do I need for my bloated OS?", there's little difference these days.

Bulk storage (where the dollars-per-gigabyte metric is actually useful) isn't the problem SSDs are currently intended to solve.

cooler

No, they're not.

I've got 10 year old 40GB disks still running fine.

And I've had 1TB disks fail after three months. ZOMGSPINNYDISKSBAD.

Re:Tried It - Disappointed (2)

clark0r (925569) | about a year and a half ago | (#42943913)

I didn't do the maths and just installed an SSD as my OS disk... in 2010. It's still there now despite being used daily and having been re-installed a couple of times (yes, Windows).

Re:Tried It - Disappointed (4, Informative)

neokushan (932374) | about a year and a half ago | (#42943979)

Had an SSD in my laptop for just over a year and a half now, no issues whatsoever. Daily use as well.

Re:Tried It - Disappointed (0, Troll)

Anonymous Coward | about a year and a half ago | (#42944017)

If you're having to rebuild a Windows system that often (or any system) you're doing it wrong.

Re:Tried It - Disappointed (1)

bobbied (2522392) | about a year and a half ago | (#42944119)

How's that? Oh, he should install Linux then?

Re:Tried It - Disappointed (0)

Anonymous Coward | about a year and a half ago | (#42944691)

Reading comprehension a bit lower than optimal? The "(or any system)" would cover Linux as well. I bet you were one of those people that had a hard time finding the "any key" weren't you?

Re:Tried It - Disappointed (4, Informative)

CajunArson (465943) | about a year and a half ago | (#42943921)

Obvious troll is obvious, but... while SSDs can & do fail (just like old hard drives can & do fail), the reason for SSD failure in the real world is very rarely flash memory wear. Hint: if your flash drive suddenly stops working one day, that ain't due to flash wear, which would manifest as gradual failure over time.

Re:Tried It - Disappointed (2)

Luckyo (1726890) | about a year and a half ago | (#42944257)

The issue people point out is that "even if the controller is good enough to last you until wear-out, your SSD will fail much sooner than a hard drive".

The fact that controllers fail ridiculously often on budget drives doesn't improve SSD reliability. It is, however, somewhat understandable, as SSD controllers are significantly more complex than hard drive ones.

Re:Tried It - Disappointed (2)

citizenr (871508) | about a year and a half ago | (#42944283)

You are right, they usually die of:
  • electronic failure (power supply, rarely the controller chip itself)
  • a firmware bug triggered by ... wait for it ... flash memory wear (most likely the firmware not being able to recognize a damaged cell and insisting on using it)

Re:Tried It - Disappointed (3, Interesting)

blueg3 (192743) | about a year and a half ago | (#42944705)

Actually, better SSD controllers sense that a page has reached its rewrite limit. The end effect of this is that the size of the overprovisioned space gets reduced by one page. (The controller stops ever writing to the used-up page.) The write performance of the SSD degrades until it goes below a certain amount of overprovisioned space, at which point it refuses to write any more. The disk is still entirely readable, so it's a binary failure mechanism, but a pretty safe one.

Gradual failure over time means either you have a crap controller or that your electronics are failing in ways other than running out of write cycles.
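
A minimal toy sketch (my own illustration, not any vendor's firmware) of the policy described above: worn-out pages are retired from the over-provisioned pool, and once the pool shrinks below a floor the drive flips to read-only rather than risking data. Page counts and thresholds are invented values.

# Toy model of the retire-and-go-read-only policy described above.
# Page counts and thresholds are invented illustration values.

class ToyController:
    def __init__(self, spare_pages=1000, min_spare_pages=50):
        self.spare_pages = spare_pages      # over-provisioned pool
        self.min_spare = min_spare_pages    # floor below which writes are refused
        self.read_only = False

    def on_page_worn_out(self):
        """Called when a page reaches its rated P/E limit."""
        self.spare_pages -= 1               # never write to that page again
        if self.spare_pages < self.min_spare:
            self.read_only = True           # binary failure: data stays readable

    def write(self, data):
        if self.read_only:
            raise IOError("spare pool exhausted: drive is read-only")
        # ... normal FTL write path would go here ...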

Re:Tried It - Disappointed (2)

h4rr4r (612664) | about a year and a half ago | (#42943941)

Cheaper? Maybe per GB, but not for the IO.
How many platters am I going to have to RAID to get even near what a single SSD can do? Am I ever going to be able to get random reads that high and fit it all in one WTX case?

Re: Tried It - Disappointed (2)

dugancent (2616577) | about a year and a half ago | (#42944043)

Reliability always trumps speed, for me anyway.

Re: Tried It - Disappointed (4, Insightful)

h4rr4r (612664) | about a year and a half ago | (#42944071)

So then you only use magnetic tape for storage?
How long does it take to boot from that?

I have backups, so I can always restore.

Re: Tried It - Disappointed (4, Funny)

gman003 (1693318) | about a year and a half ago | (#42944617)

No, magnetic tape is too vulnerable to EMP. He boots from punch card.

Is there no "Hyperbolic bollocks" mod? (0)

Anonymous Coward | about a year and a half ago | (#42944733)

No, he said reliability trumps speed.

You can't boot a PC off magtape.

Stupid fuckwit.

Re:Tried It - Disappointed (1)

jbeaupre (752124) | about a year and a half ago | (#42944519)

Maybe. Sorta. Kinda. Not really.

Some of the hybrid systems are a nice compromise. A Momentus XT 750 for $129 has worked great for me. No, it isn't as fast as a SSD for all situations. And I really wish it had more than 8GB of flash. But for boot and launching some applications, it's fantastic. Price and storage volume are decent.

Until price, capacity, and robustness of SSD matches spinning media, we're going to see more of these hybrid systems.

Re:Tried It - Disappointed:WTF! (-1)

Anonymous Coward | about a year and a half ago | (#42944121)

Well, the NeoCons will ensure that any future Winchester-style hard drive won't last any longer than five years, by using flash as the cache, so once the cache dies it'll be horrendously slow. Then you'll be forced to either buy a new flash SSD drive every 18 months or pay for cloud storage monthly, so either way the Royals {British} will screw you Royally.

Re:Tried It - Disappointed (1)

Lumpy (12016) | about a year and a half ago | (#42944749)

I also have never had an SSD last more than 24 months. Most last less time than a spinning hard drive.

I use them for the speed, but anyone claiming they are reliable is smoking some strong peyote.

100,000? (5, Informative)

rgbrenner (317308) | about a year and a half ago | (#42943849)

100,000 is only for SLC NAND. MLC, which is currently in most SSDs, is only rated for 3,000, and TLC (found in USB drives, the Samsung 840, and probably more SSDs soon because it's cheaper) is only 1,000.

Is 1,000 fine for most people? Yes... but you should be aware of it. I have a fileserver that writes 200GB per day, which would kill a Samsung 840 in about 6-7 months.
http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand [anandtech.com]
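
For anyone who wants to redo that arithmetic for their own workload, here is a small sketch; the capacity, cycle rating and write-amplification factor are assumptions plugged in for illustration, and real drives will differ.

# Rough drive-lifetime estimate from daily host writes.
# Capacity, P/E rating, and write-amplification factor are assumptions.

def lifetime_days(capacity_gb, pe_cycles, host_gb_per_day, write_amp=1.0):
    total_endurance_gb = capacity_gb * pe_cycles        # raw writes the NAND can take
    physical_gb_per_day = host_gb_per_day * write_amp   # what actually hits the flash
    return total_endurance_gb / physical_gb_per_day

# A 120 GB TLC drive rated ~1,000 cycles under a 200 GB/day fileserver load,
# assuming a fairly unfriendly write-amplification factor of 3:
print(lifetime_days(120, 1_000, 200, write_amp=3.0))    # ~200 days, i.e. roughly 6-7 months
# The same workload against a 100k-cycle SLC part would last decades.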

Re:100,000? (0)

Anonymous Coward | about a year and a half ago | (#42943959)

Yes, the less expensive drives use MLC or TLC memory cells, which can take closer to 3k-5k cycles or 500-1000 cycles before dying. While drive makers use various tricks to extend those numbers, they don't extend to 100k. The expensive drives do use SLC NAND, which can take that and more, but nothing like the 1M talked about in the article.

Re:100,000? (1)

rgbrenner (317308) | about a year and a half ago | (#42944151)

but nothing like the 1M talked about in the article.

You are right. My guess is that he mixed up the cell write endurance with the MTBF of new SSDs. The MTBF for a Crucial m4 is 1.2M hours, for example.

If that isn't it, then I have no idea where he got that number from.

Re:100,000? (AWS?) (1)

minkie (814488) | about a year and a half ago | (#42943995)

Which technology is Amazon using for their AWS instances? Their instance description page (http://aws.amazon.com/ec2/instance-types/) doesn't say one way or the other.

Re:100,000? (AWS?) (3, Informative)

rgbrenner (317308) | about a year and a half ago | (#42944055)

Almost certainly MLC. SLC is really only found in industrial SSDs these days. Enterprise and consumer SSDs are all MLC, with the exception of Samsung 840, the first SSD to use TLC.

Re:100,000? (0)

Anonymous Coward | about a year and a half ago | (#42944051)

No one should be buying the 840. Pay more for the 840 Pro, or fall back to the 830.

Re:100,000? (5, Informative)

rgbrenner (317308) | about a year and a half ago | (#42944125)

I own 2 840s... they are fine. If you're really concerned, Samsung has a tool that will let you adjust the spare space, so you can take a 256GB drive, set aside 20GB to use for spares as cells wear out, and use 236GB for your data.

If you read the article I linked to, an 840 128GB drive will last for about 272TB in writes... or about 11.7 years at 10GB/day.

It's much more likely that another part will wear out before the cells do.

Holy crap batman (0)

Anonymous Coward | about a year and a half ago | (#42944133)

So basically, my data-crunching app, running on an Android tablet (Transformer Infinity) with 64 GB of flash, running full tilt and continuously updating, will die very very quickly. I haven't measured the volume of updates, but it will be 20 Mbps or more.

I think that's about 8-9 months before I start seeing failures.

Oh well, I'm upgrading it to a 2Mb tablet soon. I'll hold the data all in RAM and only write out a daily backup. No big deal holding it in RAM since the tablet has a battery and seems rock solid.

Well *unless* Slashdot announces RAM dies after 100,000 writes...

Re:100,000? (1)

Luthair (847766) | about a year and a half ago | (#42944213)

The numbers are no doubt mean time before failure, so inevitably many drives will fail before this.

Re:100,000? (1)

citizenr (871508) | about a year and a half ago | (#42944297)

Heavy database caching kills MLC SSDs in a couple of months max. TLC won't last more than a few weeks.

Re:100,000? (5, Interesting)

beelsebob (529313) | about a year and a half ago | (#42944401)

Luckily, while he's about 30 times off on the write endurance (on the bad side), he's about 100-1000 times off on the speed at which you're likely to ever write to the things (on the good side), so in reality SSDs will last about 3-30 times longer than he's indicating in the article. The fact that he's discussing continuous writes at max SATA 3 speed suggests that he's really concerned with big-ass databases that write continuously and use SLC NAND. The consumer case is in fact much better than that, even despite MLC/TLC.

1 million is smaller than 100,000? Wrong numbers!! (0)

Anonymous Coward | about a year and a half ago | (#42943867)

>It's funny how bad people are at estimating just how long '100,000 writes'
>It obviously gets far worse with newer flash memory that is able to withstand a whopping million writes per cell.

So TFA claims that 100,000 is a bigger number than 1,000,000!? The author got the two numbers mixed up, as new flash cells have 100,000 or fewer erase/write cycles vs. the old 1 million.

Re:1 million is smaller than 100,000? Wrong number (2)

neokushan (932374) | about a year and a half ago | (#42944019)

"It obviously gets far worse" is referring to "how bad people are at estimating", not the lifespan of the Flash Memory.

What about swap? (1)

Anonymous Coward | about a year and a half ago | (#42943873)

Some phones use an internal flash chip partition as swap, I always wonder about the lifetime of these devices.

Re:What about swap? (1)

Virtucon (127420) | about a year and a half ago | (#42943939)

Don't expect a cell phone to have the swap I/O demands of a server. Also, since most people chuck their cell phones after a couple of years anyway, I don't think it would be a problem.

Re:What about swap? (3, Informative)

h4rr4r (612664) | about a year and a half ago | (#42944099)

I don't expect most servers to swap at all. If your server is swapping, buy more ram. Cell phones are still ram starved enough to need to do that.

Re:What about swap? (1)

Virtucon (127420) | about a year and a half ago | (#42944231)

OMG, it's like living in the 70s again. I don't want to start a swapping war, and man, these are futile arguments.

You tune for the appropriate needs of the server and allow the O/S to manage that. There was an operating system called CP-6, owned by Honeywell. It was originally known as CP-5 on the Xerox Sigma systems from the 70s. Anyway, it had a philosophy at the time: no swapping. https://en.wikipedia.org/wiki/Time-sharing [wikipedia.org] Yeah, a great O/S, but on a DPS 8/70 Honeywell mainframe with 8 megawords (36-bit words) you could support about 40 users before the system became unusable. At the time, 32MB on an IBM 3081 could support over 1000 users. I hate to tell you this, but CP-6 died a long time ago because of this and other inflexible thinking.

Swapping is a fact in all multi-tasking O/S systems unless you have realtime processing requirements. So tune your workloads for the best balance and your requirements. There are systems where "swapping" is very, very bad because of the workload/application conditions and requirements. In other cases it makes sense, because even though memory is less expensive than it was 30 years ago, it's still not infinite and, as they say, YMMV.

Re:What about swap? (1)

h4rr4r (612664) | about a year and a half ago | (#42944303)

I am sorry I was not more clear, I did not literally mean no swapping ever.

What I really meant was that I expect my phone, with only 1GB of RAM, to swap more on average than my servers, to which I add RAM if I notice considerable swapping. Unless I am limited by cost, of course. These days, though, 128GB of RAM is pretty cheap in the server world.

Re:What about swap? (1)

Virtucon (127420) | about a year and a half ago | (#42944619)

;-) Well, that depends on who you get your memory from and what server you're talking about, but yes, prices have come down a bit. I remember when 8MW on a DECsystem 20 was hella expensive: $50,400 for 256K words. http://bitsavers.trailing-edge.com/pdf/dec/pdp10/lcg_catalog/LCG_Price_List_Jan82.pdf [trailing-edge.com] That would come out to $1,612,800 for 32 modules, not including the expansion cabinets necessary to hold it all.

So for a DL360 G6... (not too old):

8GB module, $75... From here.
Same module, $162... From here. [costcentral.com]

With an MSRP of $850... so 128GB at the low end is about $1,200; at the price listed above, about $2,600. Forget sucker MSRP... LOL

If SSd is nearly full? (2)

Zorpheus (857617) | about a year and a half ago | (#42943905)

But if your SSD is nearly full with data that you never change, wouldn't all the writing happen in the small area that is left? This would significantly reduce lifetime.

Re:If SSd is nearly full? (4, Interesting)

Colonel Korn (1258968) | about a year and a half ago | (#42944021)

But if your SSD is nearly full with data that you never change, wouldn't all the writing happen in the small area that is left? This would significantly reduce lifetime.

I believe all the major brands actually move your data around periodically, which costs write cycles but is worth it to keep wear balanced.

Re:If SSd is nearly full? (1)

Luckyo (1726890) | about a year and a half ago | (#42944281)

Indeed. This is called "wear leveling" and is aimed at preventing a scenario where a chunk of data that is never moved or deleted takes up a lot of drive space, forcing all the wear onto a small area that then wears out very quickly.

Re:If SSd is nearly full? (3, Informative)

Anonymous Coward | about a year and a half ago | (#42944041)

Actually, they thought about that. Newer SSD drives have special wear-leveling algorithms: if the drive notices that you write to some parts a lot while the remainder of the disk is static, it moves the static data onto the used-up cells and uses the underused (formerly static) part of the disk for the data that changes a lot. More or less, you can expect every cell to be used an equal number of times, even if you only ever write to one 1MB file and the rest is static.

Re:If SSd is nearly full? (1)

PhrstBrn (751463) | about a year and a half ago | (#42944381)

If you use TRIM, then your drive will know which parts of the disk are empty and which are not. With wear leveling, the SSD will always write first to the free blocks with the most write cycles remaining, and it will just remap blocks in whatever order it wants (blocks don't need to be in linear order like on HDDs). I think they start moving data around once cells get near the end of their write cycles or the drive thinks it is full (no TRIM, or the drive is actually full).

Re:If SSd is nearly full? (1)

neokushan (932374) | about a year and a half ago | (#42944053)

I could be wrong, and it probably depends on the SSD itself. A lot of SSDs these days have a reserved area that's used when cells start to die (which is why you'll see SSDs with, say, 120GB of storage instead of 128GB). They also attempt to write evenly over all of the cells, instead of just hammering a select few. Of course you're probably right about when the SSD itself is nearly full, but as far as I'm aware, what ultimately starts to happen is either the space decreases slowly over time or the SSD just plain refuses to write any more data (locked to read-only). I've never seen an SSD fail like this, so I can't comment; I've only ever seen them fail outright, usually due to the controller doing something it shouldn't be doing.

Re:If SSd is nearly full? (1)

KiloByte (825081) | about a year and a half ago | (#42944353)

SSDs with say 120GB of storage instead of 128GB

And then they use drive makers' gigabytes instead of regular ones, so people see a nice round number like 128 and assume they don't get cheated.

Oh, sorry, they sponsored some commission to redefine pi^Hkilobyte, so when they get sued, they can claim they don't falsely advertise.

Re:If SSd is nearly full? (1)

neokushan (932374) | about a year and a half ago | (#42944419)

Appropriate username is appropriate.

Re:If SSd is nearly full? (4, Interesting)

higuita (129722) | about a year and a half ago | (#42944083)

SSDs should be run at a maximum of 75% of their capacity... 50% or less is recommended.

Some chips try to move blocks around to rotate the writes and keep a lot of spare zones, so they can remap or use other sectors on write... but that is the problem: working with a full SSD will shorten its life.

Re:If SSd is nearly full? (0)

Anonymous Coward | about a year and a half ago | (#42944109)

If you have a cell with 100 writes left, you move the constant data into those "nearly dying" cells, freeing up cells that have only seen a handful of writes so far.

There are other reasons this guy's analysis is wrong, but I think wear-levelling does cope with this case. I'm not sure how easily it copes with superblocks or the wear-levelling metadata, though.

Re:If SSd is nearly full? (1)

Anonymous Coward | about a year and a half ago | (#42944115)

> if your SSD is nearly full with data that you never change, wouldn't all the writing happen in the small area that is left?

The "sectors" that are exposed to the SATA controler are re-arranged by the SSD firmware to the physical "sectors" (which for flash are bigger, typically in the MiB range these days). When the same "SATA-side sectors" are rewritten over and over again, the firmware in the SSD does some wear leveling, by moving seldom-changing physical sectors to often-changing physical sectors, and redirects the often-changing "SATA-side sectors" to those newly freed physical sectors.

Thus, even if you only even write to a single "SATA-side sector", the writes will eventually be spread out over the entire SSD.

Now, whether the SSD firmware is smart enough in doing its wear leveling is another issue. ;-)

PS. Sorry for my crude English, I'm not a native speaker.
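
A toy flash-translation-layer sketch of that remapping (my own simplification, not any vendor's algorithm): logical sectors are redirected to the least-worn free physical block on every rewrite, so even a single hot logical sector spreads its erases across the whole pool. The class and block counts are invented for illustration.

# Toy flash-translation-layer sketch of the remapping described above.
# Block counts and policies are deliberately simplified.

class ToyFTL:
    def __init__(self, physical_blocks=8):
        self.erase_count = [0] * physical_blocks   # wear per physical block
        self.free = set(range(physical_blocks))    # physical blocks holding no live data
        self.map = {}                              # logical sector -> physical block

    def write(self, logical, data):
        # Pick the least-worn free physical block. (Static wear leveling would also
        # migrate cold data off lightly worn blocks; omitted here for brevity.)
        target = min(self.free, key=lambda b: self.erase_count[b])
        self.free.remove(target)
        old = self.map.get(logical)
        if old is not None:
            self.erase_count[old] += 1             # old copy gets erased and recycled
            self.free.add(old)
        self.map[logical] = target
        # ... programming `data` into `target` would happen here ...

ftl = ToyFTL()
for _ in range(100):
    ftl.write(0, b"hot sector")                    # rewriting one logical sector...
print(ftl.erase_count)                             # ...spreads erases over every block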

Re:If SSd is nearly full? (1)

Anonymous Coward | about a year and a half ago | (#42944595)

PS. Sorry for my crude English, I'm not a native speaker.

That's evident by your superior command of the language.

Our first age-related failure was a 2008 drive. (5, Interesting)

urbanriot (924981) | about a year and a half ago | (#42943951)

Our company experienced what we believe was its first age-related failure in October of 2012: an office PC with an Intel SSD from the value-oriented line of 2008 (which was still pricey at the time). Basically the drive behaved as a mechanical drive would, with an occasional bad sector, and we were able to successfully image the data to a new one. Out of 200 Intel drives, that's pretty good. (We did have one failure in 2010, but that was an outright dead drive and we were able to RMA it.) Not sure if this contributes anything to the conversation, but I figured I'd throw it out there.

The Intel X25s in my PC, from 2009, are still humming along nicely, and my last benchmark produced the same results in 2012 as it did in 2010. But I've gone so far as to point the environment variables for user temp files at a mechanical drive, internet temp files at a RAM drive, and system temp files at a RAM drive, to offset the wear.

Re:Our first age-related failure was a 2008 drive. (1)

Luckyo (1726890) | about a year and a half ago | (#42944291)

Aye, intel drives are known for two things: their reliability and their high prices.

If you tried budget vendors like OCZ, you'd likely have a very different story to tell us.

Re:Our first age-related failure was a 2008 drive. (0)

Anonymous Coward | about a year and a half ago | (#42944355)

I have an X25 in my netbook, which is configured to put /tmp, /var/tmp and the Firefox cache into a RAM drive. Last I looked, after about three years it was reporting 2% write usage... so it will probably still have 95% of its life left by the time we replace the netbook.

Life is tricky for flash (5, Interesting)

Anonymous Coward | about a year and a half ago | (#42944045)

Meaningful life specs are tough to come by for flash. Yes, as noted above, SLC NAND has a rated life of 100k erases/page on the datasheet, but that's really a guaranteed spec under all rated conditions, so in reality it lasts quite a bit longer. If you were to write the same page once a second, you'd use it up in a bit more than a day.

However, in real life the "failure" criterion is when a page written with a test pattern doesn't read back as "erased" in a single readback. Simple enough, except that flash has transient read errors: you can read a page, get an error, read the exact same page again and not get the error. Eventually it does return the same thing every time, but that point comes later than the "first error".

There's also a very strong non-linear temperature dependence on life, both in terms of cycles and just in terms of remembering the contents. Get the package above 85C and it tends to lose its contents. (I realize that the typical SSD won't be hot enough that the package gets to 85C, although consider the SSD in a ToughBook in Iraq at 45C air temp.)

In actual life, with actual flash devices on a breadboard in the lab at "room temperature", I've cycled SLC NAND for well over a million cycles (hit it 10-20 times a second for days) without failure. This sort of behavior makes it difficult to design meaningful wear leveling (for all I know, different pages age differently) and life specs without going to a conservative 100k/page uniform standard, which in practice grossly understates the actual life.

What you really need to do is buy a couple of drives and beat the heck out of them with *realistic* usage patterns.
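
In that spirit, a crude cycling harness might look like the sketch below. SimulatedPage is a made-up stand-in for whatever actually drives the part; the page size, pattern, and wear model are illustrative assumptions, not measurements.

# Crude program/erase cycling harness in the spirit of the test described above.
# SimulatedPage is a made-up stand-in for a real part; page size, pattern and
# the wear model are illustrative assumptions, not measurements.

PAGE_SIZE = 2048
PATTERN = bytes([0xA5]) * PAGE_SIZE

class SimulatedPage:
    """Toy NAND page that starts corrupting data past its rated cycle count."""
    def __init__(self, rated_cycles=100_000):
        self.cycles = 0
        self.rated = rated_cycles
        self.data = b""

    def erase(self):
        self.cycles += 1

    def program(self, data):
        self.data = data

    def read(self):
        if self.cycles > self.rated:                       # crude wear model
            return bytes([self.data[0] ^ 0x01]) + self.data[1:]
        return self.data

def cycle_until_failure(page, max_cycles=2_000_000):
    """Erase/program/verify one page until the pattern no longer reads back."""
    for cycle in range(1, max_cycles + 1):
        page.erase()
        page.program(PATTERN)
        if page.read() != PATTERN:
            # Flash has transient read errors, so re-read before declaring death.
            if page.read() == PATTERN:
                continue
            return cycle                                   # first persistent failure
    return None                                            # survived the whole test

print(cycle_until_failure(SimulatedPage()))                # -> 100001 with this toy model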

Re:Life is tricky for flash (1)

Matt_Bennett (79107) | about a year and a half ago | (#42944147)

The temperature dependence is a very strong factor that does seem to be missing from the analysis. To add to what the AC parent said, my experience is that the minimum number of erase cycles occurs when the device is at maximum temperature; take it down to room temperature and the typical number of erase cycles goes up by an order of magnitude. Most computers have an internal temperature of over 40C when run in a normal environment.

Your drive will fail, SSD or HD. You must be prepared for that.

Re:Life is tricky for flash (1)

Luckyo (1726890) | about a year and a half ago | (#42944379)

In all honesty, this is badly wrong. Laptops running under heavy load may clock those numbers on the hard drive temperature (not the ambient inside the case, but the temperature sensor on the hard drive, which essentially all modern hard drives have). Hard drives generate significantly more heat than SSDs due to their mechanical parts.

I'm typing this on a machine that has 4x3.5" hard drives stacked on top of each other, and openhardwaremonitor pretty much instantly tells me which drives are on the top and bottom and which are in the middle. The middle ones report 34C and the edge ones 31C. Room temperature is around 22-24C.

That said, SMART data says that the "pre-failure" threshold for my Seagate drives is 45C. So 40C sounds quite close to it.

Number crunching != empirical evidence (0, Interesting)

Anonymous Coward | about a year and a half ago | (#42944061)

In fact, file systems need superblocks, and they can't just evenly distribute everything across the platter. The superblock is obviously the first to go, so you'd need to cope with that by having various possible locations for it. Where do you store the location? In a superduperblock? How long does that last? Where do you store the data on how many writes have hit each block? How many times do you overwrite that?

After all this basic housekeeping, maybe, you can spread everything else across the platter.

This calculation is best-case from start to finish. Drives are not written with perfect evenness - that would be very, very hard if not impossible to achieve. So you need actual figures for how well this can be done in practice. Any conclusions you make without that empirical data are likely to be overstated.

Re:Number crunching != empirical evidence (1)

h4rr4r (612664) | about a year and a half ago | (#42944123)

Any reason you ignore the wear leveling that all modern SSDs do? The drive controller will move the superblock around if that is the most-written block. It will remap it so the OS is none the wiser.

Re:Number crunching != empirical evidence (0)

Anonymous Coward | about a year and a half ago | (#42944441)

Any reason you ignore wear leveling that all modern SSDs do?

What, you mean like where I say "After all this basic housekeeping, maybe, you can spread everything else across the platter."

The drive controller will move the superblock, if that is the most written block, around. It will remap it so the OS is none the wiser.

And where does it store the remap information? And where does it store the remap information for the remap information?

Try reading my post again and then comment on the bit I actually missed, rather than parts I explicitly refer to.

Re:Number crunching != empirical evidence (1)

h4rr4r (612664) | about a year and a half ago | (#42944497)

It stores the remap info in reserved blocks. Yes they could wear out, but since it is so little it is unlikely. Most of these drives have a lot of reserved blocks.

Try understanding how these wear leveling systems actually work.

Re:Number crunching != empirical evidence (3, Interesting)

bobbied (2522392) | about a year and a half ago | (#42944405)

Which is why most SSD drives implement some kind of wear leveling. They will move the often-written sectors around the physical storage space in an effort to keep the wear even.

Rotating-media drives do similar things and can physically move "bad" sectors too, but this usually means you lose data. Many drives actually come from the factory with remapped sectors. You don't notice it because these sectors are already remapped onto the extra space the manufacturers build into the drive but don't let you see.

Reminds me of when I interviewed with Maxtor, years ago. They were telling me that the only difference between their current top-of-the-line storage (which was something like 250GB at the time) and their 40GB OEM drive was the controller firmware configuration and the stickers. Both drives came off the same assembly line; only the final power-up configuration and test step was different, and then only in the values configured in the controller and which stickers got put on the drive. If you had the correct software, you could easily convert the OEM drive to the bigger capacity by writing the correct contents to the right physical location on the drive. The reason they did this was that it was cheaper than having to stop and retool the production line every time an OEM wanted 10,000 cheap drives.

I'm sure drive builders still do that sort of thing today: set up a 3TB drive line, then just downsize the drives which are to be sold as 1TB drives.

Will die for a zillion other reasons (-1)

Anonymous Coward | about a year and a half ago | (#42944085)

Ever considered cooling issues caused by a continuous 6Gbit/sec write?
How about flash life at elevated temperatures?

Vajk

Curve or Cliff? (2)

StoneyMahoney (1488261) | about a year and a half ago | (#42944131)

Does anyone know whether the failure count for cells picks up along a nice smooth curve, or is it like running into a cliff? Intel seems to be suggesting in its spec sheets that the 20% over-provisioning on some of its SSDs (I'm assuming for bad-block remapping when failure is detected) can increase the expected write volume of a drive by a substantial amount:

http://www.intel.co.uk/content/www/us/en/solid-state-drives/solid-state-drives-710-series.html [intel.co.uk]

This seems to go against the anecdotal evidence of sudden total SSD failures being attributed to cell wear - something else must be failing in those, most likely the normal expected allotment of mis-manufactured units.

Re:Curve or Cliff? (3, Informative)

Luckyo (1726890) | about a year and a half ago | (#42944445)

Sudden failures are controller failures. Budget controllers especially tend to fail before the flash does.

Flash failure "usually" means not being able to write to the disk while still being able to read from it. The problem is that by the time you see that, you've gone through all the reserve flash and the controller no longer has any spare blocks to assign from the reserve. I.e., the drive has been failing for a while.

Modern wear leveling also means that failure would likely cascade very quickly.

Gosh, so all is fine, then? (0)

Anonymous Coward | about a year and a half ago | (#42944245)

If after buying the SSD it will take months, even years before it dies, I guess that there's no problem, eh?

BULLSHIT.

Hard or solid (1)

Dan East (318230) | about a year and a half ago | (#42944343)

Story should have been entitled "Taking a Solid Look At SSD Write Endurance".

Badabing! I'll be here all week.

SSD write to death chart P/E Cycles. (1)

Andrew Lindh (137790) | about a year and a half ago | (#42944383)

This chart is almost 2 years old now, but it is a fun read and has some good testing information:
  http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm [xtremesystems.org]

I have read that some of the newer SSDs have only 500-1000 P/E cycles (e.g. Kingston V300, Samsung 840), but I don't have proof. It is well documented that most of the current MLC drives have 3000 to 5000 P/E cycles, while many of the SLC units are rated at 100,000 (e.g. Intel X25-E, SuperSSpeed SLC S301).

Here is another good article about TLC SSD:
  http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand [anandtech.com]

You have to buy and use the correct type of SSD for your application. A new TLC SSD should not be used in any write intensive application (eg. ZFS ZIL) but it may be great for that new fast laptop that can use the speed and does not do a lot of writes to disk. For most standard uses a good SSD will outlast the laptop/desktop where it is installed. The key for good SSD use is detection of pre-failure (SMART is a good start). The SSD is now a consumable part, just like the battery or brakes on a car. We all know drives fail, but standard hard drives don't have the same fixed life expectancy as an SSD.

Don't forget about Write Amplification. It can help kill a drive faster than total bytes written:
  http://en.wikipedia.org/wiki/Write_amplification [wikipedia.org]
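
A small sketch of how write amplification eats into the endurance budget; the capacity, cycle rating and amplification factors below are assumptions for illustration.

# Write amplification: bytes physically written to NAND divided by bytes the
# host asked to write. The capacity, cycle rating and factors are assumptions.

def host_endurance_tb(capacity_gb, pe_cycles, write_amp):
    # Host-visible terabytes writable before the rated cycles are consumed.
    return capacity_gb * pe_cycles / write_amp / 1024

# A 256 GB, 3,000-cycle MLC drive:
for wa in (1.1, 3.0, 5.0):     # sequential-friendly vs. random-write-heavy workloads
    print(f"WA {wa}: ~{host_endurance_tb(256, 3_000, wa):.0f} TB of host writes")
# Roughly 682, 250 and 150 TB: the same drive, same NAND, very different lifetimes.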

Hmm (2)

AdmV0rl0n (98366) | about a year and a half ago | (#42944469)

SSDs here have been rejected because of multiple, continuing failures. Now they only get given to end users who provide a 'light' write environment - and that's the only place where consumer-level 25nm and smaller write-cycle gear can be used sanely (i.e., without having a plan for swap-out/replacement and higher costs).

I'm expecting a fairly severe level of failure on new equipment shipping today that uses SSDs as cache.

I frankly love the speed. But the claims about how long an 'average' user would take to wear out these disks have failed abysmally where I work. Admittedly, our users are mid-to-heavy use cases, but the failure rates have been high, and the lifetimes shorter than anyone would contemplate.

Either the cost of the drives has to fall (which, to be fair, it has been doing), or the reliability question and write limits need to change substantially.

I no longer consider SSDs for front-line heavy use, and I'd need serious convincing to contemplate them again with lower-nm flash. And SLC-level gear is simply beyond the cost level we can attain.

Data, not theory...? (0)

Anonymous Coward | about a year and a half ago | (#42944515)

There's a really handy post with actual data on this compiled by some people at the xtremesystems forum. It's nicer than this theory stuff: they've tested most consumer-level drives available and literally written them to death to see how far they go. Still not perfect (some of the drives are different sizes, with different amounts of "static" data), but it's good for a ballpark anyway.

http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm

Heavier user (1)

Murdoch5 (1563847) | about a year and a half ago | (#42944535)

I'm on the heavier end of normal computer users, and I have a Vertex 1 drive that's still alive and kicking.

they wear out but they're costly (0)

Anonymous Coward | about a year and a half ago | (#42944649)

Seriously, if I'm going to pay $185 for a 256 GB SSD (and that's a GREAT sale - they're usually more), I wish the thing would last longer than it does. An internal 7200 RPM 3 TB HDD sounds like a better choice!
Newer technology needs to be introduced that can be written to tens of millions of times. If scientists can pull this off, then drop the cost of current SSD technology to 10 cents a gig! And for the long-lived SSD storage, I'd be willing to shell out big bucks!

Blocks in use (0)

Anonymous Coward | about a year and a half ago | (#42944659)

Don't forget to take into account the number of blocks which are in use. If you've got 50% disk use then expect the lifetime to be cut in half because the used blocks cannot be part of the remapping.

I've seen small devices burned out because 80% of the disk was the baseline before any user data got added.

Good Overview, but... (0)

Anonymous Coward | about a year and a half ago | (#42944769)

The review didn't mention the write factor in the calculations. Drives have a write factor which roughly relates the number of actual writes to user-intended writes. This write factor depends on several variables (wear leveling, cell usage, garbage collection, TRIM, etc.), but it's not uncommon for a drive to perform 5 or more physical flash writes for one user-intended write. Assuming a 1:1 ratio of user-intended writes to physical writes is an oversight that can greatly alter the results.

Having said that, it still takes a long time for most SSDs to start failing.
