SanDisk Announces 4TB SSD, Plans For 8TB Next Year

timothy posted about 7 months ago | from the no-moving-parts dept.

Data Storage 264

Lucas123 (935744) writes "SanDisk has announced what it's calling the world's highest capacity 2.5-in SAS SSD, the 4TB Optimus MAX line. The flash drive uses eMLC (enterprise multi-level cell) NAND built with 19nm process technology. The company said it plans on doubling the capacity of its SAS SSDs every one to two years and expects to release an 8TB model next year, dwarfing anything hard disk drives are expected to offer over the same period. The Optimus MAX SAS SSD is capable of up to 400 MBps sequential reads and writes and up to 75,000 random I/Os per second (IOPS) for both reads and writes, the company said."
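
A quick sanity check on those numbers (a back-of-the-envelope sketch; the summary does not state the I/O size, so 4 KiB random I/Os are assumed here):

iops = 75_000                  # claimed random read/write IOPS
io_size = 4 * 1024             # assumed 4 KiB per random I/O
print(iops * io_size / 1e6)    # ~307 MB/s, in the same ballpark as the 400 MBps sequential figure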

Oh goody (5, Funny)

Anonymous Coward | about 7 months ago | (#46906017)

Now you can pay $4000 for a drive that won't last 2 years! Yeah.. sign me up.

Re:Oh goody (1)

epyT-R (613989) | about 7 months ago | (#46906047)

only 4k? probably more like 20..

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46906919)

1TB SSDs are less than $1k and have plenty of room to fit more NAND. It's not unreasonable to expect a ballpark figure south of $5k.

Re:Oh goody (1)

Eunuchswear (210685) | about 7 months ago | (#46908305)

Not with SAS interfaces they aren't.

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46907019)

This is more like it; enterprise storage vendors' (e.g. EMC) small SSDs start in the $4000+ range.

Re: Oh goody (4, Informative)

Anonymous Coward | about 7 months ago | (#46906049)

My primary OS is running on an SSD going on 4 years old now... Out of 5 that I have, only one has had issues, which was actually its controller catastrophically failing and not a NAND issue - could have just as easily happened to an HDD.

Re: Oh goody (2, Insightful)

epyT-R (613989) | about 7 months ago | (#46906071)

I went through three different intel ssds within a year before I gave up and went back to raid spinning disks. They're fine for laptop use, and there's a place for them in data centers as caching drives, but they still suck for heavy workstation loads.

Re: Oh goody (5, Interesting)

Bryan Ischo (893) | about 7 months ago | (#46906155)

False. Your one anecdotal story does not negate the collective wisdom of the entire computer industry.

As far as anecdotal evidence goes, here's some more worthless info: I've owned 8 SSD drives going all the way back to 2009 and not a single one has ever failed. They're all currently in use and still going strong. I have:

- 32 GB Mtron PATA SLC drive from 2009
- 64 GB Kingston from 2010 (crappy JMicron controller but it was cheap)
- 80 GB Intel G2 from 2010
- 80 GB Intel G3 from 2011
- 2x 80 GB Intel 320 from 2011
- 2x 240 GB Intel 520 in my work computer, it gets pretty heavily used, from 2012
- Whatever is in my Macbook Pro from 2012
- Just purchased a 250GB Samsung 840 Evo

Not a single failure on any of them, even the old 32 GB Mtron and the piece of crap JMicron controller Kingston.

But this evidence doesn't really matter; it's the broad experience of the industry as a whole that matters, and I assure you, the industry has already decided that SSDs are ready for prime time.

For a recent example, linode.com, my data center host for like 10 years now, just switched over to all SSDs in all of their systems.

Re: Oh goody (5, Informative)

shitzu (931108) | about 7 months ago | (#46906329)

We have ~100 SSDs installed in our company, workstations, laptops and servers. Over five years only 3 of them died, all Kingstons. Samsung and Intel have been spotless. All of those that died had the following symptoms - if you accessed a certain sector the drive just dropped off - as if you switched off its power. The drive did not remap them as it always dropped off before it could do so. Otherwise the drive remained functional. Got them replaced under warranty.

Re: Oh goody (0)

Anonymous Coward | about 7 months ago | (#46918211)

Out of 60 Intel 510 & 330 series drives, about 2 were DOA and 2 failed in less than 6 months.
Out of 130 Intel 320 series, about 7 failed - a known bug that should have been solved with the latest firmware, but it's still there: the 8MB bug... https://www.google.be/search?q=intel+320+8mb

Re: Oh goody (-1, Troll)

Anonymous Coward | about 7 months ago | (#46906361)

For a recent example, linode.com, my data center host for like 10 years now, just switched over to all SSDs in all of their systems.

Figures. They only host Linux servers, so it's not like they care about reliability or economy. Anyone who's serious about enterprise-quality hosting is using Windows Server.

Re: Oh goody (0, Troll)

Anonymous Coward | about 7 months ago | (#46906573)

For a recent example, linode.com, my data center host for like 10 years now, just switched over to all SSDs in all of their systems.

Figures. They only host Linux servers, so it's not like they care about reliability or economy. Anyone who's serious about enterprise-quality hosting is using Windows Server.

APK, is that you?

Oh my gosh (0)

Anonymous Coward | about 7 months ago | (#46909059)

You can't be serious. Linux has been proven to be very reliable for enterprise quality hosting.

Re: Oh goody (1)

kimvette (919543) | about 7 months ago | (#46910921)

> Anyone who's serious about enterprise-quality hosting is using Windows Server.

You obviously don't know the first thing about anything. Microsoft claims the lowest downtime by redefining downtime to exclude "scheduled maintenance windows", not by being realistic.

Re: Oh goody (1)

Lord Chaos EOG (1683154) | about 7 months ago | (#46913563)

As a web server, *nix is awesome. But for any serious enterprise management, Windows Server 2012 beats it by miles.

Re: Oh goody (-1)

Anonymous Coward | about 7 months ago | (#46906411)

BTW, that's not "evidence", that's ANECDOTE.

Anecdotes are a FORM of evidence (1)

Anonymous Coward | about 7 months ago | (#46906535)

Anecdotes are a FORM of evidence. Not very strong evidence (to put it mildly), but still... evidence.

Re: Oh goody (-1)

Anonymous Coward | about 7 months ago | (#46906429)

_Your_ tale of completely flawless SSD operation is anecdotal. What does this uncited "collective wisdom" gain - a Coding Horror [codinghorror.com] post? Every SSD hard drive that I've owned has failed within a year, and I don't see the value in that when my neighbor's Windows 98 box still chugs out IE6 within 10 seconds.

Re: Oh goody (1)

ketomax (2859503) | about 7 months ago | (#46906953)

Every SSD hard drive that I've owned has failed within a year

What do you do with the hard drives, sit on them?

Re: Oh goody (0)

Anonymous Coward | about 7 months ago | (#46908375)

Probably a shitty power supply.

Re: Oh goody (0)

Anonymous Coward | about 7 months ago | (#46942051)

And did you look at when that was written? 2012, three years ago.

Worse, look at http://blog.codinghorror.com/t... [codinghorror.com] and what do we find? The same person who was talking about how bad SSDs are is using one as his boot drive. :)

Re: Oh goody (-1)

Anonymous Coward | about 7 months ago | (#46906447)

I've had a 64GB Kingmax PATA SSD (Jmicron controller) and a G-skill 32GB MicroSDXC card fail in the past couple years. Flash isn't infallible.

Re: Oh goody (1)

Anonymous Coward | about 7 months ago | (#46906721)

Your name is misspelled. Your parents must have been fucking idiots.

Re: Oh goody (4, Funny)

Anonymous Coward | about 7 months ago | (#46907429)

It's highly unlikely that his parents were fucking yours.

Re: Oh goody (0)

dugancent (2616577) | about 7 months ago | (#46906789)

"Collective wisdom"? Is that slang for circle-jerking and fanboyism? That's all I really see.

Re: Oh goody (0)

Anonymous Coward | about 7 months ago | (#46912617)

I've only had one SSD fail, a 256GB Crucial. I was using it in a real-time server backup of sensitive data on a 2TB volume. I had previously been using spinners and decided to try the SSDs. The Crucial lasted 2 weeks; after that it was toast - couldn't be accessed by anything, couldn't be detected, couldn't mount.

The cause was the almost continuous writing from the server updates and the lack of idle time for TRIM/garbage collection to clean it up. SSDs are great, but there are limitations.

Re: Oh goody (0, Troll)

Anonymous Coward | about 7 months ago | (#46906359)

It's all about loads. Read the article: these are designed for 90% read-heavy workloads with very little writing. These are not designed for "using" computers, just cold storage.

E.g., I'm going to make a hot backup of my 4TB RAID array every day for the next 3 years; that should burn out the drive after a theoretical 30 years.

But no, this is not designed for web servers unless it's being used as a storage drive, not an operating system/swap/logging/tmp drive. The problem is that CMS systems need to do a lot of static caching which means extremely-busy sites will blow through the write performance like wet tissue paper and eventually kill the drive after 3 years.

Re: Oh goody (1, Offtopic)

sillybilly (668960) | about 7 months ago | (#46906887)

Exactly - these drives are almost read-only drives, you can write to them a few thousand times, but you should avoid writing to them at all cost, consider them kinda like CD-R's. As they are chip based, and come with built-in wifi, anything you write to them inside a city or anywhere there is wifi present, will be sent through the subliminal wifi network to the powers that be. The only secure way to write to these is in the middle of the desert with nobody around for 100's of miles (but you still got the satellites watching you) or in a deep underground cavern or mine where all electromagnetic (including satellite) waves are blocked out (it's not safe to masturbate in the desert because you can be caught on satellite camera and be posted all over the internet, a mineshaft or deep underwater in a submarine is a lot safer place to do it). You have to be careful with the timestamps though, and stick to something like year 2005 in your computer bios, as there might be a zero day code built into the chip that erases everything in, say, year 2050, then self destructs. Even if you watch your timestamps, the device may have its own clock regardless, and know when 2050 is here even if you don't tell it. The way to deal with that is to "starve" the device of any residual capacitive storage to run a clock from, say freezing it for 20-40 years, at which point its own internal clock should give out due to lack of capacitive battery power and stop counting the progress of time, and when 2050 hits, some of these devices will self destruct, but the deep frozen ones will be behind on their internal clock, and the data can be rescued from them before they self-destruct too. The only secure longterm storage mediums are non-chip based, such as magnetic (floppies, tape) or optical (cd-rw(cr-r ink decays over time or bacteria can chew it up, CDRW is a molten semimetal glass (fuckin telluride, telluride is only found where gold is found, or with high probability only there, and it's super rare, or super dispersed in the environment, hard to extract and get your hands on it), that's only temperature sensitive, but not good food like ink is good food), dvd, etc.) As long as they can't create logic inside magnetic materials, such as magnetic transistors, etc, the data can be kept separated from the logic that acts on it, and there is never a danger of self activating zero day self destruct behavior. By the way floppies have the head rubbing against the disk, and only teflon based floppies should be used. Harddrives have a tiny air-cushion, bernoulli-effect-like, so there is no contact, they are also dust-free, unfortunately they are chip-based, and because of that are not secure. Ideally they should make "removable disk" harddrives, where you can remove the spindle and exchange it between drives, like you can floppies, however the powers that be might want to retain that only for themselves or their military, and the general public only get SSD's, chipped harddrives, chipped flash drives, or floppies and cdrw's. Out of all CDRW's are the best option for Jefferson's yeomans, and not DVD-RW's, as the biggest issue with CD's is head alignment. I used to work at an electronics repair shop as a front desk attendant, and the standard chime I uttered every time someone tried to bring in a CD-player for repair was: "The laser assembly is out of alignment - it costs $80 for a new laser assembly, and you only paid $40 for the CD-player, (or fill in the value, 39, or 44, how much was it? 52? 
you got ripped off, circuit city sells them for 39 every holiday). So what you wanna do, buy another CD or DVD player, or you want us to look at it and repair it, and charge you the diagnosis fee too?" The laser assembly is always out of alignment, it's always out of friggin alignment, so that brings you back to floppies, the bigger and lower density the better, 80KB 8" being the best, least head alignment or magnetic material error sensitive. Floppies you may also be able to self-produce, and then their short lifetime is not an issue. Having 80KB storage is a whole whole whole lot better than having 0. Btw these SSD's could be a way to hide the cloud in your pocket - or bury it in your backyard for a time when the authorities will do searches and ban anyone from storing valuable information at home, and not be forced to go online and purchase it on a per click, per second viewing time fee basis - if you can only find a way to download the cloud. Wikipedia should sell their cloud on these things, as there is no way in hell to pull all of wikipedia through broadband onto a local disk.

Re: Oh goody (1)

Anonymous Coward | about 7 months ago | (#46907197)

Can I have some chapter breaks, please?

Re: Oh goody (1)

entrigant (233266) | about 7 months ago | (#46921221)

Hit the enter key from time to time, son. Nobody will be able to get through that.

Re: Oh goody (0)

Hadlock (143607) | about 7 months ago | (#46906375)

I hope you weren't using OCZ branded drives...

Re: Oh goody (1)

Hamsterdan (815291) | about 7 months ago | (#46906937)

Agility 2 here, still kicking after 3 years of heavy use, half of that without TRIM. Perhaps they had their flaws, but mine has been working fine. I've had a bunch of Seagates crap out on me in the same time (always the same thing: bad sectors). With the race for the biggest drive, reliability has gone down the drain...

Re: Oh goody (3, Informative)

Hadlock (143607) | about 7 months ago | (#46910537)

While I'm happy for you and your luck so far, the hard numbers don't lie: one in 20 OCZ drives was returned over a two-year period, closer to 7 percent for particular models. Compare that to half a percent for Samsung or Intel.

Re: Oh goody (1)

phorm (591458) | about 7 months ago | (#46909933)

I had an OCZ that went bad in my laptop. I read that a lot of the issues are actually due to buggy controller firmware.
So I swapped that SSD into a desktop and upgraded the firmware. No issues since.

Re: Oh goody (2)

loufoque (1400831) | about 7 months ago | (#46907117)

I'm using SSDs in a compile farm that builds software 24/7 and no drive has ever failed.

Re: Oh goody (2)

BitZtream (692029) | about 7 months ago | (#46907217)

I have 2 SSDs in a ZFS mirror that more or less constantly rebuilds the FreeBSD ports tree. The reasons for doing so are silly and not important to this discussion. It may spend 2 or 3 hours a day idle; the rest of the time, it's building ports on those SSDs, with sync=yes (meaning ALL writes are sync, no write caching, so I can see the log leading up to a kernel panic I'm searching for). It's been doing this for over a year already.

It has never thrown so much as a checksum error.

So my anecdotal evidence beats your anecdotal evidence.

Or you could just acknowledge reality:

http://hardware.slashdot.org/s... [slashdot.org]

Re: Oh goody (1)

kimvette (919543) | about 7 months ago | (#46910911)

I haven't had any problems in my own machine nor in appliances manufactured for clients - not one return on a single SSD, and only a handful on server-class Seagate hard drives. But then, for SSDs I stick with Crucial (Micron), Intel, Samsung and SanDisk, and I even used a Crucial SSD as a swap drive for a while in a PC where the motherboard could accept only very limited RAM - with no ill effect.

Now that SSD capacities are close to outpacing hard drives, I may very well be replacing the Seagate drives in my home servers and workstations with SanDisk SSDs (perfect to complement my LSI card). It's awesome that SSD capacities are about to leapfrog hard drives. Maybe the 8TB SSDs will beat 8TB HDDs out of the gate. :)

Re: Oh goody (1)

YouGotTobeKidding (2884685) | about 7 months ago | (#46916673)

If you've killed more than one SSD... replace your PSU, as it's crap. I.e., stop being a cheap bastard and use a good power supply.

Re:Oh goody (5, Insightful)

beelsebob (529313) | about 7 months ago | (#46906083)

Assuming you write an average of 100GB a day to this drive (which is... an enormous overestimate for anything except a video editor's scratch disk), that's 40,000 days before you've written over every cell on the disk 1,000 times - i.e., roughly 100 years before it reaches its write limit. So no, SSDs are far from the 2-year proposition that people who bought first-gen 16/32GB drives make them out to be.
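
Spelling out the arithmetic above (a sketch; the 1,000 P/E cycles and 100GB/day figures are the commenter's assumptions, not a manufacturer spec):

capacity_gb = 4000             # 4TB drive
daily_writes_gb = 100          # assumed host writes per day
pe_cycles = 1000               # assumed program/erase endurance per cell

days = capacity_gb / daily_writes_gb * pe_cycles
print(days, days / 365)        # 40,000 days, roughly 110 years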

Re:Oh goody (1)

SuperKendall (25149) | about 7 months ago | (#46906151)

If you only write infrequently (say, for image editing) and then use it for backup storage - how many years would the SSD retain its data?

Re:Oh goody (4, Informative)

Tapewolf (1639955) | about 7 months ago | (#46907043)

If you only write infrequently (say, for image editing) and then use it for backup storage - how many years would the SSD retain its data?

If the drive is powered down, I wouldn't bet on it lasting the year. Intel only seem to guarantee up to 3 months without power for their drives: http://www.intel.co.uk/content... [intel.co.uk]

Note also that the retention is said to go downwards as P/E cycles are used up. For me, I think they make great system drives, but I don't use them for anything precious.

Re:Oh goody (1)

Gr8Apes (679165) | about 7 months ago | (#46908623)

I use them as core system and work drives. However, I also back up my systems continuously, with a pair of HDs for each one. So even if something truly catastrophic happened, I'm 99.99% likely to only be down however long it takes to get a working system hooked up to one of the backups and do a restore. I have an old Intel SSD that's 3 years old now, and still going strong in a second laptop. The HD was dying in that one. All storage needs to be cycled periodically; if it lasts 5 years, you've gotten your money's worth. If you want truly long-term storage, go for the 100-year tapes or optical media, although the latter has been somewhat flaky at the 15-year mark for me.

Re:Oh goody (1)

dgatwood (11270) | about 7 months ago | (#46906393)

Of course, in the worst case, with a suitable synthetic workload in which every 512-byte block write causes a 512 KB flash page (again, worst case) to get erased and rewritten, that could translate to only a 40-day lifespan. Mind you, that worst-case scenario isn't likely to occur in the real world, but....

Re:Oh goody (1)

smallfries (601545) | about 7 months ago | (#46906511)

How is that the worst case? Block erasure is only necessary to free up space, not to make a write.

Re:Oh goody (3, Informative)

Mr Z (6791) | about 7 months ago | (#46906733)

If you know something about the drive's sector migration policies, in theory you could construct a worst-case amplification attack against a given drive. Leverage that against the drive's wear leveling policies. But, that seems rather unlikely.

Flash pages retain their data until they're erased. You can write at the byte level, but you must erase at the full page level. You can't rewrite a byte until you erase the page that contains it. That's the heart of the attack: Rewriting sectors with new data. You can't rewrite a sector in-place. You mark the old location as "dirty but free", and write the new data to a new location. The SSD can't reclaim the dirty-but-free sectors for writing until they're erased.

Thus, the basic idea goes something like this: Fill the disk to 99.9% full. Then, selectively rewrite individual sectors, forcing the sector to migrate to a new flash page. Wash, rinse, repeat until the drive fails.

If the drive only performs dynamic wear leveling, all subsequent rewrites will erase and reuse only among the free space. (Note: This free space includes all of the space the drive reserves to itself for dynamic wear leveling purposes.) Now all you need to do is reach the erase/rewrite limit among the available dynamic wear leveling pool, which is significantly smaller than the full drive capacity. You can achieve this by rewriting a small subset of sectors until the disk falls over.

Modern drives perform a blend of dynamic and static wear leveling. Dynamic wear leveling only erases/rewrites among the "free" space. Static wear leveling gets otherwise untouched sectors into the fray by wear leveling over all sectors. This blended approach defers static wear leveling until it becomes absolutely necessary. The flash translation layer (FTL) detects when the wear difference between sectors gets too imbalanced, and migrates static sectors into the worn regions and wear-levels over the previously "static" sectors.

A successful attack would take this into account and attempt to keep track of which sectors would be marked "static" vs. "dynamic". It would also predict how the static sectors were grouped together into pages, so it could cherry-pick and inflict the maximum damage: All it needs to do is write to a single sector in each static flash page (creating a bunch of unallocated "dirty-but-free" holes), continuing until the SSD was forced into a garbage collection cycle. That GC cycle then would have to touch all the static pages (or at least a significant fraction) to compact the holes away and make space available for future writes.

If you can keep that up, you can magnify your writes by the ratio between the page size and the sector size. If you have 512 byte sectors and 512K bytes pages, the amplification factor is 1024.

But, as I suggested above, to achieve this directly, you need to have some idea of how the SSD marks things static vs. dynamic. Without such knowledge, you have to approximate.

I imagine if you really wanted to kill an SSD without any knowledge of its algorithms, you could do something simple like rewrite every allocated sector in an arbitrary order, shuffling the order each time. SSD algorithms assume a distribution of "hotness" (i.e. some sectors are "hot" and will be rewritten regularly, and most are "cold" and will be rewritten rarely if ever), and so rewriting all sectors in a random order will cause rather persistent fragmentation, recurring GC cycles, and pretty noticeable amplification.

You wouldn't get to the 40 day mark, but if you started with a mostly full SSD, you might get to a few months.

That's my back-of-the-napkin, "I wrote an FTL once and had to reason through all this" estimate.
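
For what it's worth, the "rewrite every allocated sector in a shuffled order" workload described above might look something like the sketch below (not the poster's code; the file path and 4 KiB block size are assumptions, and the file is presumed to already fill most of the drive):

import os, random

PATH = "/mnt/ssd/fill.bin"           # assumed: a file that already fills most of the SSD
BLOCK = 4096                         # assumed filesystem block size
nblocks = os.path.getsize(PATH) // BLOCK

with open(PATH, "r+b", buffering=0) as f:
    while True:                          # repeat until the drive gives up
        order = list(range(nblocks))
        random.shuffle(order)            # defeat the FTL's hot/cold heuristics
        for i in order:
            f.seek(i * BLOCK)
            f.write(os.urandom(BLOCK))   # incompressible data, so compression can't help
        os.fsync(f.fileno())             # force the writes down to flash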

Re:Oh goody (1)

sillybilly (668960) | about 7 months ago | (#46906939)

So what if you want to prolong the lifetime of an SSD, and want to use it as a long-term backup storage medium that you can bury in your backyard and only dig out once a decade to update? Also, a decade later writable SSDs may no longer be available, so as you have no clue how many writes and rewrites you will do in the future, what's an efficient strategy, an optimum %fill on these drives? As the future is unknown, you cannot predict this accurately, but it's certainly not 99.9% full, and not 0.01% full, so is an 80% fill policy, or 95%, or 99.8% (and a corresponding purchase of capacity) a good strategy?

Re:Oh goody (3, Interesting)

jon3k (691256) | about 7 months ago | (#46907525)

You do not want to use SSDs for long term storage: http://www.intel.co.uk/content... [intel.co.uk]

"In JESD218, SSD endurance for data center applications is specified as the total amount of host data that can be written to an SSD , guaranteeing no greater than a specified error rate (1E - 16) and data retention of no less than three months at 40 C when the SSD is powered off."

Re:Oh goody (1)

Mr Z (6791) | about 7 months ago | (#46908015)

If you want to write once and forget it, you can fill the thing right to the top of the advertised capacity, and you don't have to worry about failure due to wear. Instead, you have to worry about failure due to electrons migrating off the bits. So, you need to refresh all the bits every so often, much like DRAM, only with a much slower refresh interval. Even if you refresh all the bits on the drive once a day, if you do so in a nice, orderly manner, I'd imagine you won't reach the rewrite limit for the drive in your lifetime.

Still, I'm not sure I'd choose an SSD for that.

Re:Oh goody (2)

Rockoon (1252108) | about 7 months ago | (#46907247)

No matter what you do you cannot burn through more than the maximum (ideal conditions) write speed, and the strategies you are talking about would ultimately be far from maximum.

At 400MB/sec max erase throughput and 250 erase cycles per block (conservative?), it would still take 30 days to wear down this 4TB drive.

Write amplification is a red herring when you are calculating time to failure because write amplification doesn't magically give the SSD more erase ability. These things aren't constructed to be able to erase any faster than they can write, and in fact are actually constructed to mitigate the problem that they cannot erase as fast as they can write (over-provisioning, write combining, etc.)
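
For reference, the arithmetic behind the "30 days" figure above (a sketch; the 250-cycle endurance and the 400MB/sec ceiling are the poster's assumptions):

capacity_bytes = 4e12            # 4TB drive
erase_cycles = 250               # conservative endurance per block, as assumed above
max_write_rate = 400e6           # 400 MB/s ceiling, which also bounds the erase rate

seconds = capacity_bytes * erase_cycles / max_write_rate
print(seconds / 86400)           # ~28.9 days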

Re:Oh goody (1)

Mr Z (6791) | about 7 months ago | (#46908063)

The point of write amplification is solely to get more sectors into the erasure party. A single small write forces the SSD to migrate full sector A to empty sector B.

If you could send direct physical sector erasure and physical write commands to the media, you could just tell it to erase each sector and rewrite its first byte repeatedly until that sector failed, and then march to the next sector.

But, you don't have the opportunity to do that. Instead, you must interact at the filesystem level, and there's an FTL between the file system and the media. So, if your goal is to ruin the media quickly with those two layers between you, you want to minimize the FTL's ability to filter writes out and reduce the required number of erasures.

I'm aware that erase times for sectors are huge, and will slow your I/O rate accordingly. You're right that write amplification doesn't necessarily shorten the calendar days to failure, as a different write pattern may have triggered the same number of erasures in the time frame with a larger number of writes. But, writes aren't free either, so there's at least some, err, benefit to minimizing the number of writes required to ruin your SSD, if your goal is to ruin your SSD as quickly as possible.

Re:Oh goody (1)

Rockoon (1252108) | about 7 months ago | (#46909871)

The point of write amplification is solely to get more sectors into the erasure party. A single small write forces the SSD to migrate full sector A to empty sector B.

Irrelevant and you would know it if you actually read my post, or bothered to think about the technology for a longer period of time than the dismal amount you invested to confirm whatever it was you were trying to confirm.

The device cannot erase blocks at a rate faster than blocks can be written. If they could then they wouldn't increase costs with useless things like over-provisioning, something that is only useful because erases are slower than writes. Therefore the erase rate must be less than the maximum write rate, so using the maximum write rate is a valid upper bound on the speed with which erases can be performed.

All arguments such as yours, which call upon write-amplification hand-waving to imply that erases can occur at faster rates than the device's upper-bound write speed, are just bullshit thought up by someone so trivially versed in the subject that confirmation bias has made them look like a complete fool, for they latched on to tiny bits of information that aren't at all relevant but aren't informed enough to know it.

Re:Oh goody (1)

Mr Z (6791) | about 7 months ago | (#46910169)

I'm not challenging the 30 day number, to be sure.

It's not entirely true that write amplification won't appear to speed up the rate at which an SSD erases sectors. SSDs generally have multiple independent flash banks, and each can process an erasure independent of the others. To maximize your erasure rate, you need a pattern of writes that triggers erasures across all banks as often as possible. Each bank will split its time spent receiving data to write, committing write data to flash cells, and erasing flash cells. (My assumption is that a given bank can only be doing one of these operations at a time, which was certainly true for the flash devices I programmed.)

Consider a host sending a stream of writes as fast as it can send it. The writes will land on the drive as fast as the SSD controller can process them and direct them to flash cells. If there are any bottlenecks in that path, such as generating ECC codes and allocating physical blocks in the FTL, it will slow down the part of the duty cycle devoted receiving and committing write data.

A "friendly" write stream would minimize the number of GC cycles the SSD performs, and thus the amount of write amplification that occurs. Thus, the total number of writes to the SSD media is at most slightly larger than what the PC sends, and the "receive-write" portion of the "receive-write-erase" cycle gets lengthened by whatever bottlenecks might be in the PC-controller-flash path. A "hostile" write stream triggers a larger number of GC cycles to migrate sectors. It seems reasonable to me that an on-board chip-to-chip block migration might be quite a bit faster than receiving data from the PC. For one thing, you don't necessarily need to recompute ECC. The block transfer itself could be handled by a dedicated DMA-like controller transferring between independent banks in parallel with other activity. So, generating more write data locally to the SSD could reduce the time spent in the receive-write portion of the receive-write-erase cycle, so you can spend a greater percentage of your time erasing as opposed to receiving or writing.

It seems a little counter-intuitive, but it's in some ways similar to getting a super-linear speedup on an SMP system [google.com] , which is indeed possible with the right workload. How? By keeping more of the traffic local.

The main effect of write amplification, though, is on the SSD wear specs themselves, as I said. They're stated in terms of days/months/years of writes at a particular average write rate. So really, when you multiply that out, they're specified in terms of total writes from the PC. There's at least one flash endurance experiment out there showing that drives often massively exceed their rated maximum total writes by very large factors. One reason for that, I suspect, is that they aren't sending challenging enough write patterns to the drive to trigger worst case (in terms of bytes written, not wall-clock time) failure rates.

Terabyte flash drives are 10% overprovisioned (2)

tepples (727027) | about 7 months ago | (#46907929)

Thus, the basic idea [of a write amplification exploit] goes something like this: Fill the disk to 99.9% full.

Your attack has already failed. A 4 TB drive has 4 TiB (4*1024^4), or 4.4 TB of physical memory, but only 4 TB (4*1000^4) is partitioned. The rest is overprovisioned to prevent precisely the attack you described. You're not going to get it more than 90.95% full. And in practice, a lot of sectors in a file system will contain repeated bytes that the controller can easily compress out, such as runs of zeroes from the end of a file to the end of its last cluster or runs of spaces in indented source code.
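
The "90.95% full" figure falls straight out of the TB-vs-TiB gap (a sketch; actual overprovisioning varies by model and isn't confirmed by the article):

advertised = 4 * 1000**4         # 4 TB exposed to the host
physical = 4 * 1024**4           # 4 TiB of raw NAND, as assumed above
print(advertised / physical)     # ~0.9095, i.e. at most ~90.95% of the NAND ever holds user data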

Re:Terabyte flash drives are 10% overprovisioned (1)

Mr Z (6791) | about 7 months ago | (#46908341)

A 4TB drive would bother with compression why exactly? It won't improve any benchmarks. Someone like me trying to break the media would just write uncompressible garbage anyway.

A large overprovisioning pool does help stay in the dynamic wear leveling paradigm longer. If the drive performs any amount of static wear leveling, though, where it can get the rest of the sectors into the fun and there's a limited aging ratio / aging difference between the most-erased sector and least-erased sector, then the size of the overprovisioning space doesn't matter too much to this attack - rather, the total media size matters.

The point of this attack is to limit the ability of the SSD to separate disk sectors into hot and cold effectively, so you can force the maximum number of erasures through garbage collection cycles. As long as the overprovisioned space is less than the ratio of exposed sector size (512 or 4K bytes) to erase sector size (likely something huge like 512K bytes), you can force a lot of erasures just for GC compaction. The drive will still spread these erasures out as much as possible. With the blended dynamic/static wear leveling model, you'll end up aging all of the sectors roughly evenly.

Once you get one sector to fail and get taken out of service, you're close to making as many more fail as you like, since the SSD did everything it could to age sectors evenly. The additional delta work you need to get it to fail completely isn't very big.

In case my GC point wasn't clear: Consider a simple SSD with a capacity of 32 "blocks", of which it advertises a capacity of 24 to the user, leaving 8 for overprovisioning. Suppose that filesystem blocks get grouped into erase blocks with a 4:1 ratio. Now suppose I've filled that disk up to capacity with a linear series of writes. You might represent it like so, with letters representing FS blocks with data, dashes representing empty clean FS blocks that the FTL can write to, and the groups of 4 separated by dots representing the erase blocks:

abcd.efgh.ijkl.mnop.qrst.uvwx.----.----

Now suppose I write to A, E, I, M. I need to migrate these FS blocks to new locations to absorb the writes. Their old locations are freed, but remain dirty (cannot be rewritten). After these 4 writes, my flash might look like this: (The '#' indicate free-but-dirty FS blocks.)

#bcd.#fgh.#jkl.#nop.qrst.uvwx.aeim.----

Before this SSD can execute my next rewrite, the flash needs to perform a GC cycle to ensure it always has a place to migrate data for GC. So, before processing another rewrite, it performs a GC cycle, migrating blocks until it has at least two clean erase sectors. For the sake of argument, let's assume it employs a simple round robin scheme to spread the GC over the media evenly:

##cd.#fgh.#jkl.#nop.qrst.uvwx.aeim.b--- Migrate 'b'.
###d.#fgh.#jkl.#nop.qrst.uvwx.aeim.bc-- Migrate 'c'.
####.#fgh.#jkl.#nop.qrst.uvwx.aeim.bcd- Migrate 'd'.
----.#fgh.#jkl.#nop.qrst.uvwx.aeim.bcd- Erase first sector.
gh--.####.#jkl.#nop.qrst.uvwx.aeim.bcdf Migrate 'f', 'g', and 'h'.
gh--.----.#jkl.#nop.qrst.uvwx.aeim.bcdf Erase second sector.
ghjk.l---.####.#nop.qrst.uvwx.aeim.bcdf Migrate 'j', 'k', 'l'.
ghjk.l---.----.#nop.qrst.uvwx.aeim.bcdf Erase third sector.
ghjk.lnop.----.####.qrst.uvwx.aeim.bcdf Migrate 'n', 'o', 'p'.
ghjk.lnop.----.----.qrst.uvwx.aeim.bcdf Erase fourth sector.

Now I continue with my dickish attack, and rewrite Q, U, A, and B:

#hjk.#nop.quab.----.#rst.#vwx.#eim.#cdf

Oh, hey, that'll trigger another GC cycle to free up a sector:

#hjk.#nop.quab.rst-.####.#vwx.#eim.#cdf Migrate 'r', 's', 't'
#hjk.#nop.quab.rst-.----.#vwx.#eim.#cdf Erase fifth sector
#hjk.#nop.quab.rstv.wx--.####.#eim.#cdf Migrate 'v', 'w', 'x'
#hjk.#nop.quab.rstv.wx--.----.#eim.#cdf Erase sixth sector
#hjk.#nop.quab.rstv.wxei.m---.####.#cdf Migrate 'e', 'i', 'm'
#hjk.#nop.quab.rstv.wxei.m---.----.#cdf Erase seventh sector
#hjk.#nop.quab.rstv.wxei.mcdf.----.#### Migrate 'c', 'd', 'f'
#hjk.#nop.quab.rstv.wxei.mcdf.----.---- Erase eighth sector

Keep up the pattern, writing to the first FS block in each erase block, and you'll maximize the number of erasures due to GC. Sure, all those erasures take time, but you can bring your time to failure down to a function of the total number of erasures the media can tolerate before it fails in a minimal number of writes. If the media has any parallelism between flash modules (so that it can execute multiple erasures in parallel), you may even be able to get that working in your favor, assuming your explicit goal is to make the drive fail as quickly as possible.
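
Tallying up the worked example above (just the bookkeeping, as a sketch):

host_writes = 8                  # A, E, I, M, then Q, U, A, B
migrated = 24                    # 3 valid FS blocks copied out of each of the 8 erase blocks
erasures = 8                     # every erase block got erased once

print((host_writes + migrated) / host_writes)   # write amplification of 4.0 for this pattern
print(erasures / host_writes)                   # one full erase-block erase per host write

With the 4:1 grouping used in the example the factor tops out at 4; with real 512-byte sectors and 512 KB erase blocks the same pattern heads toward the ~1024x figure mentioned earlier in the thread.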

Re:Terabyte flash drives are 10% overprovisioned (2)

tepples (727027) | about 7 months ago | (#46910079)

A 4TB drive would bother with compression why exactly?

To minimize how much it has to erase when moving data around during a write. This improves the benchmark of "sectors written per second" because fewer sectors have to be erased during compaction. And because most people buying this drive aren't "trying to break the media". If you're worried that an untrusted user with an account on a multi-user system could cause too many erases in too short of a time, could you make an example of your worst case using a workload that resembles a file system's write pattern? If it's too big for Slashdot, feel free to reply here [pineight.com] .

Re:Terabyte flash drives are 10% overprovisioned (1)

Mr Z (6791) | about 7 months ago | (#46910205)

I wasn't actually thinking of DoS'ing, but I guess that's actually a valid concern. If a particular write pattern could crap a server, then you may have to worry about a user doing that to your server. I was just putting my "DV engineer" hat on, and trying to think of how I'd break an SSD in the minimum number of writes. It's the kind of analysis I'd hope the engineers that come up with lifetime specs use to give a bulletproof lifetime spec. For example, X years at YY MB/day even if you're writing like an a**hole. ;-)

I don't have a formalized attack against any particular drive, manufacturer or filesystem.

For a multi-user system, just a thought: Could you address it with quotas? If a given user can't write to more than X% of the filesystem, you can bound the "badness" of their behavior.

Re:Terabyte flash drives are 10% overprovisioned (1)

tepples (727027) | about 7 months ago | (#46910267)

I was just putting my "DV engineer" hat on, and trying to think of how I'd break an SSD in the minimum number of writes.

Agreed. Ultimately, vulnerability to DoS is the same thing as worst-case performance. Perhaps the controller could just insert delays when it discovers unusually nonsequential yet antirandom write patterns that produce excessive erases per write. That'd slow IOPS, but it would preserve longevity. Besides, most multi-user servers are nowhere near 99% full, as they have to account for subscribers who are far below their storage quota suddenly deciding to use what they pay for.

For a multi-user system, just a thought: Could you address it with quotas? If a given user can't write to more than X% of the filesystem, you can bound the "badness" of their behavior.

That depends on how many users can conspire to perform "badness" on a particular server.

Re:Terabyte flash drives are 10% overprovisioned (1)

Mr Z (6791) | about 7 months ago | (#46908487)

OK, I see that SandForce has on-the-fly compression tech, which I imagine would help more reasonable workloads. (Although, if your workload involves a lot of compressed video or images, that compression tech won't buy you anything.)

The point of my thought experiment, though, was how I would construct a maximally bad workload, and it's pretty easy to nullify compression with uncompressable data.

Re:Oh goody (1)

smallfries (601545) | about 7 months ago | (#46908261)

That's interesting. Thinking of scenarios where there is no adversary (other than "dumb luck"), would a usage pattern like the following degrade the life of the drive:

Random access to live data: e.g. using the drive as a cache or hosting a database on it that contains live data. (in both cases assuming the size of the cache/database was filling the drive).

Or, to put it another way: what is the probability that a (uniformly?) random-access pattern on a drive-filling file would trigger the worst-case behaviour?

Re:Oh goody (1)

viperidaenz (2515578) | about 7 months ago | (#46906609)

Like someone else already said, that's what the wear levelling algorithms in the controller are for.

Re:Oh goody (1)

ultranova (717540) | about 7 months ago | (#46907003)

Assuming you write an average of 100GB a day to this drive (which is... an enormous overestimate for anything except a video editor's scratch disk),

Doesn't Windows use a swap file, no matter how much memory you have? That could conceivably see any amount of traffic per day.

Re:Oh goody (1)

Anonymous Coward | about 7 months ago | (#46907223)

You can disable the pagefile outright or move it to a secondary drive.

Re:Oh goody (1)

petermgreen (876956) | about 7 months ago | (#46910367)

Every time I've tried disabling it outright, it gives me a message on bootup saying it's created a temporary one. Can this be avoided? If so, how?

Re:Oh goody (1)

BitZtream (692029) | about 7 months ago | (#46907233)

Well, yes and no. Yes, by default it enables a swap file. You can turn it off; and given enough memory, so that disk cache never puts pressure on the rest of the VM system, it will not use the swap file. This is true for every modern OS - Windows, Linux or *BSD - all of which favor a larger disk cache instead of keeping unused blocks in memory.

Re:Oh goody (2)

TheRaven64 (641858) | about 7 months ago | (#46907263)

Just because it uses a swap file doesn't mean it ever writes to it. A lot of operating systems have historically had the policy that every page that is allocated to a process must have some backing store for swapping it to allocated at the same time. If you have enough RAM, however, this most likely won't ever be touched. If you're actually writing out 100GB/day to swap then you should probably consider buying some more RAM...

Re:Oh goody (1)

beelsebob (529313) | about 7 months ago | (#46907411)

Actually, it's likely to be written... very occasionally. It's likely that when the OS has time to do something other than what you asked it to do, it'll start writing out dirty memory to swap, just because that means that if you do need to swap at a later date, you don't need to page out.

How often does your workstation hibernate? (1)

tepples (727027) | about 7 months ago | (#46908003)

A lot of operating systems have historically had the policy that every page that is allocated to a process must have some backing store for swapping it to allocated at the same time.

And this is because when a workstation (a laptop or desktop) hibernates, it writes all allocated RAM to the swap file. This can be as large as RAM, though for speed, it may be smaller in operating systems that store some of their swap file in a compressed RAM disk (such as RAM Doubler on classic Mac OS or zram on Linux). But an operating system still has to provide for the worst case of memory that can't be compressed.

If you have enough RAM, however, this most likely won't ever be touched.

Until you actually use hibernation. How often does that happen on a particular work day?

If you're actually writing out 100GB/day to swap then you should probably consider buying some more RAM

Some of RAM is used as a cache for the file system, but operating systems should be smart enough to purge this disk cache when hibernating. Applications, on the other hand, might not be so smart. Ideally an operating system could send "memory pressure" events to processes, causing them to purge their own caches and rewrite deallocated memory with zeroes so that it can be compressed. The OS would broadcast such an event before hibernation or any other sort of heavy swapping. Do both POSIX and Windows support this sort of event?

Re:How often does your workstation hibernate? (1)

TheRaven64 (641858) | about 7 months ago | (#46908765)

And this is because when a workstation (a laptop or desktop) hibernates, it writes all allocated RAM to the swap file

Not really, this policy predates hibernation by about three decades. It's so that swapping never needs to allocate new data structures when the machine is already in a memory-constrained state.

This can be as large as RAM, though for speed, it may be smaller in operating systems that store some of their swap file in a compressed RAM disk (such as RAM Doubler on classic Mac OS or zram on Linux). But an operating system still has to provide for the worst case of memory that can't be compressed.

When Linux is using zram, it doesn't follow this policy (actually, Linux doesn't in general). It's impossible to do so sensibly if you're using compression, because you don't know exactly how much space is going to be needed until you start swapping. RAM compression generally works by the same mechanisms as the swap pager, but putting pages that compress well into wired RAM rather than on disk. You can also often compress swap, but that's an unrelated mechanism.

Until you actually use hibernation. How often does that happen on a particular work day?

Generally, never. OS X does 'safe sleep', where it only bothers writing out the contents of RAM to disk when battery gets low, so my laptop never hibernates unless I leave it unplugged for a long time. My servers don't sleep, because if you've got a server that's so idle it would make sense for it to hibernate then it's better to just turn it off completely. My workstation doesn't hibernate, because the difference in power consumption between suspend to RAM and suspend to disk is so minimal that it's not worth the extra inconvenience.

Some of RAM is used as a cache for the file system, but operating systems should be smart enough to purge this disk cache when hibernating.

Most post-mid-'90s operating systems use a unified buffer cache, so there's no difference between pages that are backed by swap and pages that are backed by other filesystem objects. Indeed, allocating swap when you allocate a page made this even easier, which is why this policy stayed around for so long: you could get away with just having a single pager that would send things back to disk without ever having to allocate on-disk storage for them or care about whether the underlying disk object was a swap file or a regular file.

Applications, on the other hand, might not be so smart. Ideally an operating system could send "memory pressure" events to processes, causing them to purge their own caches and rewrite deallocated memory with zeroes so that it can be compressed. The OS would broadcast such an event before hibernation or any other sort of heavy swapping. Do both POSIX and Windows support this sort of event?

POSIX doesn't. Windows has something like this, as does XNU. Mach had it originally, as it delegated swapping entirely to userspace pagers and allowed applications to control their own swapping policies. It's not really related to hibernation, but to memory pressure in general. It's often cheaper to recalculate data or refetch it from the network than swap it out and back in again, so it makes sense, for example, to have the web browser purge its caches when you get low on RAM, because it's likely almost as fast to re-fetch things from the network as to get them from disk. On a mobile device, with no swap, it's better to let the applications reduce RAM footprint than to pick one to kill. This works best with languages that support GC, as they can use this event to trigger collection.
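
For the Windows side mentioned above, the kernel32 memory resource notification API is one concrete example. A minimal Python/ctypes sketch (Windows-only; the polling loop is just for illustration):

import ctypes, time
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.CreateMemoryResourceNotification.restype = wintypes.HANDLE
kernel32.QueryMemoryResourceNotification.argtypes = [wintypes.HANDLE, ctypes.POINTER(wintypes.BOOL)]

LOW_MEMORY_RESOURCE_NOTIFICATION = 0     # LowMemoryResourceNotification

# Handle that is signalled whenever the system is low on physical memory.
handle = kernel32.CreateMemoryResourceNotification(LOW_MEMORY_RESOURCE_NOTIFICATION)

state = wintypes.BOOL()
while True:
    kernel32.QueryMemoryResourceNotification(handle, ctypes.byref(state))
    if state.value:
        print("low-memory condition: purge caches, free what you can")
    time.sleep(5)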

Re:How often does your workstation hibernate? (1)

tepples (727027) | about 7 months ago | (#46910119)

My workstation doesn't hibernate, because the difference in power consumption between suspend to RAM and suspend to disk is so minimal that it's not worth the extra inconvenience.

A workstation ought to switch from suspend to RAM to suspend to disk when it receives a signal from your UPS that mains power has been lost.

It's often cheaper to recalculate data or refetch it from the network than swap it out and back in again

And often it isn't. Satellite and cellular Internet service providers in the United States tend to charge on the order of 1/2 cent to 1 cent per megabyte of transfer. For example, a $50 per month plan would provide 5 to 10 GB per month. If you've downloaded a 100 MB document from the network, it would cost the end user $0.50 to $1.00 to retrieve it again.

Re:How often does your workstation hibernate? (1)

TheRaven64 (641858) | about 7 months ago | (#46911929)

A workstation ought to switch from suspend to RAM to suspend to disk when it receives a signal from your UPS that mains power has been lost.

If you're putting your business in a building where power is sufficiently unreliable that it's worth the cost to have a UPS for every workstation, then you're probably doing something badly wrong. On a server, where downtime can cost serious money, it can be worth it. On a workstation, the extra cost for something that happens once every few years in a country that has vaguely modern infrastructure simply isn't worth it most of the time. The extra writes to the SSD from having to suspend to disk once every few years as a result of power failure are in the noise.

And often it isn't. Satellite and cellular Internet service providers in the United States tend to charge on the order of 1/2 cent to 1 cent per megabyte of transfer

Satellite internet connections don't qualify as 'often' - they're a statistically insignificant amount of the userbase. Mobile connections are, but:

If you've downloaded a 100 MB document from the network, it would cost the end user $0.50 to $1.00 to retrieve it again

We're talking about browser in-memory caches here. A 100MB document will be saved to disk or opened by another application when it's downloaded. It won't sit in the browser's cache.

Closed borders still exist (1)

tepples (727027) | about 7 months ago | (#46913783)

You claim hibernation is unnecessary on a workstation because mains power is so reliable that lost work during a power failure is a rounding error. I disagree, given the number of times thunder, wind, and ice storms have knocked out power in my part of Indiana (USA). I also feel the urge to defend people born into less fortunate circumstances than my own ("First they came"):

If you're putting your business in a building where power is sufficiently unreliable that it's worth the cost to have a UPS for every workstation, then you're probably doing something badly wrong.

If starting a business in your own (developing) country rather than somehow winning the work visa lottery "in a country that has vaguely modern infrastructure" is "doing something badly wrong", then what's the workaround? Or were you referring to the difference between individual UPS units and a whole-building UPS?

A 100MB document will be saved to disk or opened by another application when it's downloaded.

Even if it's a web page with megabytes of retina-resolution photographs in it?

Re:How often does your workstation hibernate? (1)

angel'o'sphere (80593) | about 7 months ago | (#46908801)

When you hibernate a laptop or PC the content of the RAM is _not_ written to 'the' / 'a' swap file.
It is written to the 'hibernation' file.

The rest of your post doesn't make much sense either ... why should an OS clear the file system caches when hibernating? When you wake your computer they are gone; what would be the point of that?
Processes should fill their 'deallocated' memory with zeros so they can be compressed? Huh? What would be the effect of that? Erm, what would even be the point of this?

OSes don't work that way.

Re:How often does your workstation hibernate? (1)

tepples (727027) | about 7 months ago | (#46910151)

When you hibernate a laptop or PC the content of the RAM is _not_ written to 'the' / 'a' swap file.
It is written to the 'hibernation' file.

Fundamentally, why are swap and hibernation in separate files? Hibernation is just swapping everything out, as if the computer temporarily had 0 RAM.

The rest of your post makes not much sense either ... why should an OS clear the file system caches when hibernating?

So that it doesn't have to write as much data out to the disk when hibernating.

If you awake your comp they are gone, what would be the sense of that?

Because they're only caches, they will be rebuilt as I use the computer after having woken it up.

Processes should fill their 'deallocated' memory with zeros, so they can be compressed? Hu? What would be the effect of that?

If swap and hibernation are able to compress the data, they don't have as much data to write to the disk, making them finish sooner.

Re:How often does your workstation hibernate? (1)

ultranova (717540) | about 7 months ago | (#46910451)

Fundamentally, why are swap and hibernation in separate files? Hibernation is just swapping everything out, as if the computer temporarily had 0 RAM.

Because that way you can get back to having a responding computer in a reasonable time, since it can just do a sequential read to put the working set back into memory. Also, because the kernel needs to write its normally non-swappable state somewhere - what processes are running, what files are open, what virtual memory space and address do the pages in the swap file belong to, what interrupt handlers are installed, etc.

Re:How often does your workstation hibernate? (1)

Existential Wombat (1701124) | about 7 months ago | (#46920501)

Fundamentally, why are swap and hibernation in separate files? Hibernation is just swapping everything out, as if the computer temporarily had 0 RAM.

Because that way you can get back to having a responding computer in a reasonable time, since it can just do a sequential read to put the working set back into memory. Also, because the kernel needs to write its normally non-swappable state somewhere - what processes are running, what files are open, what virtual memory space and address do the pages in the swap file belong to, what interrupt handlers are installed, etc.

Plus your OS may actually have swapped out some memory, and to get to exactly the same state it will need to access that swap file content. So overwriting it with RAM contents is potentially going to create an unstable system state.

Re:How often does your workstation hibernate? (1)

angel'o'sphere (80593) | about 7 months ago | (#46912845)

Well, your ideas/suggestions only make sense on paper, perhaps you should read a bit about how OSes actually work?

Some hints: swap files are laid out to match the memory layout of processes, so every process has its own area in the swap file (metaphorically speaking), so if a process needs to swap in a page at a certain memory address the VM manager can find it in the swap file easily.
On the other hand, the main memory image which is written out by hibernating is just a file as big as your memory, with a one-to-one mapping of the memory address space to blocks in the file. That file might even be on a hidden partition (on my Mac it is not, AFAIK).
So: compressing that file would mean you need a part of memory to actually perform the compression. That means the data you write is not only the state of the system but also the state of the compressor of your system. Same for reading, you have an extra process/kernel routine running to decompress the hibernation file. In both directions stuff can go wrong, so it is much easier to just flush it out and slurp it in again.
Regarding deallocation of unused memory: most OSes can't do that. A process can only request more vm memory with sbrk(), it usually can't deallocate such memory again. (In other words malloc and free never give 'memory back to the OS')
So what would happen if the process indeed zeroed the free space (besides the performance loss) and the OS compressed pages? Well, the direct mapping of memory addresses to disk addresses would fail; you would need some extra logic to test whether a memory page consists only of zeros, and then you'd be better off just 'flagging' that somehow instead of writing and reading it.
Anyway: most unix based OSes make it possible to run a process in a way that it has its own vm manager or swapper, perhaps you should read about that and try to implement one that suits your ideas?
Will be interesting to hear from you how good the results are! (that is not ment ironic!)

Decommitting (1)

tepples (727027) | about 7 months ago | (#46913953)

So: compressing that file would mean you need a part of memory to actually perform the compression. That means the data you write is not only the state of the system but also the state of the compressor of your system.

I imagine a run-length encoding (RLE) compressor and decompressor could fit well under 512 bytes. That means about one page would have to be swapped out before hibernation can begin.
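
For a sense of scale, here is a minimal sketch of such a byte-oriented RLE codec in C. It is purely illustrative: the framing (one-byte run length, one-byte value) and the page-sized test buffer are my own assumptions, not anything a real hibernation path uses, but it shows how little logic the idea actually needs.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal byte-oriented RLE: emits (run length, value) pairs.
 * Worst case the output is 2x the input.  Illustrative only; a real
 * hibernation writer would stream whole pages to disk. */
static size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t v = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == v && run < 255)
            run++;
        out[o++] = (uint8_t)run;   /* run length, 1..255 */
        out[o++] = v;              /* repeated byte value */
        i += run;
    }
    return o;
}

static size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (uint8_t k = 0; k < in[i]; k++)
            out[o++] = in[i + 1];
    return o;
}

int main(void)
{
    uint8_t page[4096] = {0};          /* a mostly-zero "page" */
    page[100] = 42;
    uint8_t packed[2 * sizeof page], unpacked[sizeof page];

    size_t c = rle_encode(page, sizeof page, packed);
    size_t d = rle_decode(packed, c, unpacked);
    printf("4096 bytes -> %zu bytes, round-trip ok: %d\n",
           c, d == sizeof page && memcmp(page, unpacked, d) == 0);
    return 0;
}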

Regarding deallocation of unused memory: most OSes can't do that. A process can only request more VM memory with sbrk(); it usually can't deallocate such memory again. (In other words, malloc and free never give memory back to the OS.)

Is "decommitting" something that Windows does [microsoft.com] and OpenBSD does (search this page [dwheeler.com] for "munmap") and everyone else lacks?

Re:Decommitting (1)

angel'o'sphere (80593) | about 7 months ago | (#46922065)

The first link indicates you can 'free' an arbitrary page of memory in Windows, but I don't see the point of that. (On the other hand, it solves your problem neatly, as you neither need to zero that page nor compress it, since it never gets written out or read in again.)
The second link has nothing to do with the topic; perhaps you made a copy/paste error?

man 2 munmap (1)

tepples (727027) | about 7 months ago | (#46922567)

Is "decommitting" something that [...] OpenBSD does (search this page [by David A. Wheeler about Heartbleed countermeasures] for "munmap") and everyone else lacks?

The second link has nothing to do with the topic

In the part of that page where "munmap" is mentioned, Wheeler is talking about properly using the C standard library's own memory allocator, which may return particular pages to the operating system. Perhaps I should have linked to the post by Theo de Raadt that Wheeler quoted [gmane.org] : "If the memoory [sic] had been properly returned via free, it would likely have been handed to munmap, and triggered a daemon crash instead of leaking your keys." Also try man 2 munmap [openbsd.org] .
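
To make the mechanism being discussed concrete, here is a minimal POSIX sketch of grabbing pages straight from the kernel and handing them back with munmap (MAP_ANONYMOUS is the common spelling; some BSDs call it MAP_ANON). Windows exposes the equivalent idea through VirtualFree with MEM_DECOMMIT. This illustrates the system calls only, not what any particular malloc implementation does internally.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Sketch: obtain a few pages of anonymous memory directly from the
 * kernel and return them with munmap.  Once unmapped, the pages are
 * no longer part of the process image at all. */
int main(void)
{
    size_t len = 16 * 4096;                       /* 16 pages */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 0xAB, len);                         /* touch the pages */

    if (munmap(p, len) != 0) { perror("munmap"); return 1; }
    printf("pages returned to the OS\n");
    return 0;
}

Whether a given malloc actually does this for freed blocks is implementation-specific, which is exactly the point being argued in this subthread.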

Re:man 2 munmap (1)

angel'o'sphere (80593) | about 7 months ago | (#46932519)

Ah, sorry, I did not read carefully enough.
munmap is the opposite of mmap; it releases pages acquired when you map a file to memory.
That belongs to memory management but is something different from your ideas.

Re:man 2 munmap (1)

tepples (727027) | about 7 months ago | (#46932811)

munmap is the opposite of mmap; it releases pages acquired when you map a file to memory.
That belongs to memory management but is something different from your ideas.

A page that has been unmapped is no longer part of a process and thus doesn't need to be written to the disk during hibernation.

Re:man 2 munmap (1)

angel'o'sphere (80593) | about 7 months ago | (#46934103)

Yes, but you can only 'unmap' what you have 'mmap'ed before. That means you cannot unmap arbitrary memory regions.
And you seem to overestimate the effect on hibernation anyway.
Who cares if hibernation takes 500ms or 502ms? Ah well, you likely have a Windows machine where it takes 50s and not 500ms :)
Sorry, writing 16GB or 32GB of memory to a matching number of blocks on an HD costs how much exactly? One second? Two? There is no point in flagging a few pages in each process as 'no need to write out'.
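
For a rough check of that last point, here is a back-of-the-envelope calculation; the 200 MB/s write rate and the 64 MB of skippable pages are assumed round numbers, not figures from the thread.

#include <stdio.h>

/* Rough check: at an assumed sequential write rate, how much time does
 * skipping a handful of unmapped pages actually save during hibernation?
 * Both numbers below are assumptions for illustration. */
int main(void)
{
    double rate_mb_s  = 200.0;   /* assumed sequential write speed */
    double skipped_mb = 64.0;    /* pages flagged "no need to write out" */

    printf("time saved: %.2f s\n", skipped_mb / rate_mb_s);  /* ~0.32 s */
    return 0;
}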

Re:Oh goody (1)

beelsebob (529313) | about 7 months ago | (#46907405)

Yes it does, so that it can page things out before it needs to page things in. But no, that's not really a conceivable write rate. The average home user (even with Windows' swap file involved) will be closer to 5GB a day; even developers hammering a workstation will only be around 20GB a day in the worst case.

Re:Oh goody (1)

vux984 (928602) | about 7 months ago | (#46910045)

Doesn't Windows use a swap file, no matter how much memory you have? That could conceivably see any amount of traffic per day.

Far less than you'd think in nearly all circumstances.

And it's not like the computer writes to the same physical SSD sectors over and over again, even if it's overwriting data in the same page file. The wear-leveling abstraction between the file system and the physical disk ensures that writes land on the least-used sectors, and then the file system is told the file now occupies the new sectors while the old ones are marked free.

In other words, if you have a 4KB file that you just overwrite continually, each write will still go to a new physical sector instead of hitting the same sector repeatedly.
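
A toy model of what the parent describes, assuming a controller that keeps a logical-to-physical map and always places a write on the least-worn free block. Real flash translation layers are far more elaborate (garbage collection, page vs. block granularity, and so on); this only shows why repeated overwrites of one logical sector spread across many physical blocks.

#include <stdio.h>

#define BLOCKS  8            /* toy number of physical blocks */
#define LOGICAL 4            /* toy number of logical sectors */

static int map_l2p[LOGICAL];     /* logical sector -> physical block (-1 = none) */
static int erases[BLOCKS];       /* per-block wear counter */
static int in_use[BLOCKS];       /* 1 if a logical sector currently lives here */

/* Write a logical sector: place it on the least-worn free block,
 * then release the block it previously occupied. */
static void write_sector(int logical)
{
    int best = -1;
    for (int b = 0; b < BLOCKS; b++)
        if (!in_use[b] && (best < 0 || erases[b] < erases[best]))
            best = b;

    int old = map_l2p[logical];
    if (old >= 0)
        in_use[old] = 0;         /* old copy becomes a free block */

    erases[best]++;              /* writing a fresh block wears it */
    in_use[best] = 1;
    map_l2p[logical] = best;
}

int main(void)
{
    for (int i = 0; i < LOGICAL; i++) map_l2p[i] = -1;

    /* Overwrite the same "4KB file" 100 times: the writes spread
     * across physical blocks instead of hammering one of them. */
    for (int i = 0; i < 100; i++)
        write_sector(0);

    for (int b = 0; b < BLOCKS; b++)
        printf("block %d erased %d times\n", b, erases[b]);
    return 0;
}

Running it, the 100 overwrites of 'sector 0' end up spread almost evenly (12 or 13 erases per toy block) rather than piling onto a single block.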

concentrated writes (0)

Anonymous Coward | about 7 months ago | (#46907471)

Assuming you write an average of 100GB a day to this drive (which is... an enormous overestimate for anything except a video editor's scratch disk), that's 40,000 days before you write over every cell on the disk 1000 times. Aka, 100 years before it reaches its write limit. So no... SSDs are far from the 2 year proposition that people who bought first gen 16/32GB drives make them out to be.

You are assuming that the writes are semi-evenly distributed. Most file systems are not copy-on-write (COW) and so there are probably some 'hot' sectors that get more action than others.

I still agree with you that I don't think it will be an issue for most people (especially home users), but it is something to keep in mind.

Re:concentrated writes (1)

beelsebob (529313) | about 7 months ago | (#46919325)

You are assuming that the drive will do nothing to map "hot sectors" into a wear levelled pattern. In reality, this is pretty much all the SSD controller spends its time doing.

Re:Oh goody (1)

jon3k (691256) | about 7 months ago | (#46907495)

Depends on the model. The SanDisk Optimus Extreme supports up to 45 (yes, forty-five) full drive writes per day.

Lifetime totally adequate (0)

Anonymous Coward | about 7 months ago | (#46907585)

More anecdotal evidence

I have four SSDs. Three are OCZ (yeah, I know), but all are still functioning, going back to the 32GB one from 2009.
Three are in Linux desktop workstations; never a problem, so far.

The fourth, a Samsung 840 Pro, is the C: drive in a Win 7 box. It writes about 10GB/day.
I am averaging 0.26TB/month, so using the rated 200TB endurance, I have about 64 years to go!

And this excellent report puts most fears to rest.
http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb
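
Spelling that estimate out (the 0.26 TB/month and 200 TB figures are the ones quoted above):

#include <stdio.h>

/* Sanity check on the figures above: rated endurance divided by the
 * observed write rate gives the remaining lifetime. */
int main(void)
{
    double endurance_tb = 200.0;    /* rated endurance from the spec sheet */
    double tb_per_month = 0.26;     /* observed average write rate */

    double months = endurance_tb / tb_per_month;
    printf("%.0f months, or about %.0f years\n", months, months / 12.0);
    /* prints roughly 769 months, about 64 years */
    return 0;
}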

Re:Lifetime totally adequate (0)

Anonymous Coward | about 7 months ago | (#46913273)

Where did you find the endurance numbers for Samsung's in-channel SSD drives?

Re:Oh goody (1)

hobarrera (2008506) | about 7 months ago | (#46908803)

4TB = 4000GB.
Writing 100GB per day, it's 40 days before you write every sector (about 1.3 months), not 40k days (100 years).

Please learn to divide properly.

Re:Oh goody (1)

beelsebob (529313) | about 7 months ago | (#46919349)

Writing every sector once will not kill the drive. Typical cell endurance is 1000-3000 writes on a current TLC drive. I assumed the worst case scenario of 1000 writes. Please learn to read before you tell people to learn to divide.
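
To spell out the arithmetic the two of you are disputing, with 1000 program/erase cycles taken as the stated worst-case assumption:

#include <stdio.h>

/* The two numbers being argued about: days to write every cell once,
 * and days to wear the drive out at 1000 program/erase cycles. */
int main(void)
{
    double capacity_gb = 4000.0;   /* 4TB drive */
    double daily_gb    = 100.0;    /* assumed daily writes */
    double pe_cycles   = 1000.0;   /* worst-case cell endurance */

    double one_pass = capacity_gb / daily_gb;   /* 40 days */
    double wear_out = one_pass * pe_cycles;     /* 40,000 days */

    printf("one full pass: %.0f days; wear-out: %.0f days (~%.0f years)\n",
           one_pass, wear_out, wear_out / 365.0);
    return 0;
}

One full pass of the drive really is about 40 days; wearing every cell through 1000 cycles takes a thousand such passes, which is where the 40,000-day figure comes from.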

Re:Oh goody (2)

houstonbofh (602064) | about 7 months ago | (#46906091)

Going 4 years on my Intel SSD. I am replacing it, but only to gain capacity.

Re:Oh goody (3, Funny)

Anonymous Coward | about 7 months ago | (#46906131)

Just turn on DoubleSpace ... might buy you a couple more years.

Re:Oh goody (2)

Atomic Fro (150394) | about 7 months ago | (#46906189)

LOL, does that still exist?

Guess not. Ended after Windows 98. I remember using it fondly, though my dad got upset when I told him I turned it on.

Re:Oh goody (2)

CastrTroy (595695) | about 7 months ago | (#46906693)

NTFS actually supports compressed folders. The contents are compressed transparently, so applications can work with the files easily.
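
For reference, the same per-file compression can be toggled programmatically; this is a minimal sketch using the documented FSCTL_SET_COMPRESSION control code (the file path is a placeholder). The Explorer folder checkbox and the compact.exe command drive the same mechanism.

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Sketch: turn on NTFS transparent compression for a single file. */
int main(void)
{
    HANDLE h = CreateFileA("C:\\temp\\example.log",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    USHORT state = COMPRESSION_FORMAT_DEFAULT;   /* enable compression */
    DWORD bytes = 0;
    if (!DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                         &state, sizeof(state), NULL, 0, &bytes, NULL)) {
        fprintf(stderr, "FSCTL_SET_COMPRESSION failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    CloseHandle(h);
    puts("compression enabled");
    return 0;
}

New files created inside a compressed directory inherit the attribute, which is why the folder checkbox is usually all anyone needs.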

Re:Oh goody (1)

snadrus (930168) | about 7 months ago | (#46911493)

I found my heavy compilation task completed about 20% faster with NTFS compression enabled on an SSD vs. the same SSD without compression.
Fast computer, very parallel compilation.

Re:Oh goody (1)

Mr Z (6791) | about 7 months ago | (#46906747)

OK, I just checked on my WinXP box: you can right-click a folder, go to "Properties", click "Advanced", and there's an option to "Compress to save disk space." I'm too lazy to go get my Win7 laptop to see if that's still there.

So, some version of TroubleSpace...err...DoubleSpace...err...DriveSpace survived beyond Win98.

Re:Oh goody (2)

Atomic Fro (150394) | about 7 months ago | (#46906855)

I thought I remembered that on XP. Just checked, that checkbox exists on Windows 7 as well.

Re:Oh goody (1)

Hamsterdan (815291) | about 7 months ago | (#46906949)

No, that's NTFS compression. DoubleSpace, DriveSpace, Stacker and such worked at the drive level.

Re:Oh goody (1)

Mr Z (6791) | about 7 months ago | (#46907975)

Yes, I know what Stacker, DoubleSpace and DriveSpace were as a technical implementation. My point is mainly that modern Windows still offers a mechanism to compress files on a live filesystem. It just lets you select at folder granularity rather than whole-disk granularity. The fact that they're not implemented at the same layer of the stack I didn't think was relevant here.

Stacker et al worked at the sector level so that you didn't need to modify DOS or the umpteen programs that made use of sector-level access to the filesystem, and insisted on that level of access in order to function. Copy protection schemes and disk editors both relied on it. (Defraggers too, although defragging a compressed volume is... crazy.) Databases may have as well; I'm not certain. Windows NT forces programs through a narrower, more controlled window of APIs to access the file system.

I actually had a Stacker'd hard drive back in the day and had read up on all the tech, so it's not like I'm unfamiliar with it.

Re:Oh goody (-1, Offtopic)

sillybilly (668960) | about 7 months ago | (#46907051)

Actually it still exists, I recently bought 5 1/4" floppies with specifically MS DOS 6.20 on them, as they contain patent infringing Doublespace on them, as opposed to the later released non-infringing Drivespace in MS DOS 6.22. Microsoft paid millions in infringement damages to the appropriate company as a result of the lawsuit, however as Feb 1994 has been over 20 years ago, and patents last only 20 years from the date of filing (used to be 17 years from the date of approval, which resulted in delay tactics of over 10 years or more from date of filing just to get approval as late as possible, and extend the effective patent protection period), so Feb 2014 was 20 years from Feb 1994, the release of MS DOS 6.20, so the patent had to have expired, no matter what, by then. So you're free to use DoubleSpace again, these days. You never know which one is better, Drivespace or Doublespace, as bees don't know which genes are the correct ones in defending against future threats, and create as much genetic variability as possible through creating expensive but otherwise useless drones. They are so expensive to create, keep up and maintain, that sometimes it's not worth it, and end of October/November they get tossed outside the hive by the female workers, and left to die, as they can't take care of themselves. The females will in turn create new drones when Spring is here, but economize the honey stores only on workers and the queen that take care of the hive, and do useful work. Their moral conduct is also to hold in their excrement for the entire winter, and only when spring is here will they have their defecation flights. You never shit inside the hive if you're a bee, it would take too much effort to clean it up, you also can't do it next to the hive, you gotta go some distance before your code of ethics relating to defecation permits such a thing to happen.

Re:Oh goody (-1, Offtopic)

sillybilly (668960) | about 7 months ago | (#46907115)

The queen's gotta be an exception to the defecation flight requirement, as she only leaves the hive when mating, as mating has to happen out in the open, and only the drones that can chase the queen real high, and do acrobatic maneuvers with her, get to mate with her. Blind drones, defective drones, or unskilled drones at flying don't get very far, as it's very important for the next generation of worker bees to have good navigation and flying abilities. After mating a drone falls to a glorious death. So the queen probably never leaves the hive for defecation, and it must be the attendant bees that feed her the royal jelly that somehow handle her excrement and transport it outside. Also once a queen is failing, such as has run out of sperm reserves after 2 years or so, or lays an inadequate crop of eggs, or who knows what kind of complex reasons, including disturbing pesticides meddling in the decision making process of bees, she may get "balled" and suffocated by her attendants, who, by creating an absence of a queen, and her royal scent decaying from the hive in a matter of hours, automatically drive the rest of the workers to enlarge some existing worker cells, and create new queen candidates, which will fight to the death when they emerge, until one is left. If the workers don't succeed in rearing a new queen, such as all the worker larvae are at too an advanced stage to be fed royal jelly and be turned into queens, or the few queens that do emerge are unable to take the nuptual flight because of bad weather or various other reasons, so after days and weeks of not having a queen or royal scent present in the hive, all the female worker bees' ovaries develop and all of them start laying eggs, but they lack sperm reserves, or biological equipment to store sperm at all, and without sperm every new bee hatchling is a useless drone, and without a functioning queen the whole hive is doomed, because she's the only one who can lay female worker bee eggs, by voluntarily choosing whether to fertilize an egg, or not, and by choosing not to fertilize an egg, she lays a drone, on purpose. How many drones to lay is a very complex decision. But complex decisions is what queen bees are good at..

Re:Oh goody (-1, Offtopic)

sillybilly (668960) | about 7 months ago | (#46907179)

Also, while some bumble-bees and wild strains of bees have very low family sizes, most domesticated bee hives have on the order of 50,000 individuals, which creates a society a lot more complex than the natural "hive-size" for ape or human societies. Humans have the mental capacity to handle on the order of 200-800 people at once, and there are very few villages around the globe that have larger sizes than that. Cities arose only in advanced stages of civilization, such as Babylon, but the bulk of the population used to be rural up until a century ago, and only recently have humans become mainly urbanized. Even with local market-town cities where the various villages interact and exchange goods, ideas and non-inbreeding genes, the moral code and ethics is still dominated by the villages and cities are oppressed by it. Under such circumstances, with everyone's hive limited to 200 or so people, crime is very low. Even inside cities people naturally develop hives of that size that interact dynamically with each other on many fronts, but those cities that lack mechanisms for such 200 person collectives - be it a prison environment, a mental institution, or even a steady job - probably have a lot of crime. One wonders how ants and bees have the mental capacity to handle societies of tens of thousands of individuals, and the common answer is that they don't, they only have the "scent" capacity, semaphores and signals that trigger automatic behavior that they are unable to go against. Not following the law, such as crossing a red light, is genetically impossible for ants, termites and bees, but it's very easy for humans to do, laws and rights and police are smoke and mirrors when it comes to humans or even dogs, and only the village is able to beat everyone into submission. Though there are crazy people who cannot be helped no matter what and instead we lock them up to keep everyone else safe from them, the bulk of the crime committed is a failure of a village, and cannot be blamed on economic conditions, because there are lots of extremely poor but well functioning villages around the world with zero crime.

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46907407)

Cocaine is a helluva drug.

Re:Oh goody (1)

sillybilly (668960) | about 7 months ago | (#46917575)

As long as we're at offtopic topics, the way to fix dotnet is to make it better than VB6, make it run as interpreted code from a smaller thing than msvbvm60.dll. The answers to how it can be accomplished are found in the transition of dBase from assembler to C, and the corresponding performance drop, from wikipedia dbase article:

"As platforms and operating systems proliferated in the early 1980s, the company found it difficult to port the assembly language-based dBase to target systems. This led to a re-write of the platform in the C programming language, using automated code conversion tools. The resulting code worked, but was essentially undocumented and inhuman in syntax, a problem that would prove to be serious in the future.[citation needed] The resulting dBase III was released in May 1984. Although reviewers widely panned its lowered performance, the product was otherwise well reviewed. After a few rapid upgrades the system stabilized and was once again a best-seller throughout the 1980s, and formed the famous "application trio" of PC compatibles (dBase, Lotus 123, and WordPerfect). By the fall of 1984, the company(Ashton Tate) had over 500 employees and was taking in $40 million a year in sales, the vast majority from dBase products."

So you basically have to fight the "although reviewers widely panned its lowered performance" part by going to assembler from C, since you have all this time on your hands without direction of what to do next, you might as well spend it on dicking around with assembler. C and high level languagues are only for when you don't have time for assembler.

Now you can't sell anything smaller than msvbvm60.dll and ask a lot of money for it these days, and for security reasons there is probably need for a lot of garbage DNA to obfuscate and hide the real code from hackers, but as long as you can make it run faster than msvbvm60, with documented benchmarks, and obvious performance evidence from end users simply clicking it with their mouses, it should have good acceptance. In fact coders might reject it if their codesize becomes huge, so you should provide them too with the junk DNA generating interface, so their development version runtimes will be small, but production release versions have a tunable amount of garbage in them, with larger executable sizes, kind of like ogg encoding has a -q 5 or -q 9 setting, for safety reasons, which should show to them that you have huge code sizes not because you don't know what you doing, but because you want your stuff to have huge code sizes. Then the arguments at http://www.oby.ro/ [www.oby.ro] about how "perfection comes not when you can no longer add anything, but when you can no longer take away anything" no longer stand. And by the way I don't see how you can make huge codesizes faster than small codesizes, or within 99.9% speed range, and still hide your actual code really well - meaning you can't leave the junk dna on disk, neither can you isolate it in ram - in fact your junk DNA should be morphing so the few bytes of actual instruction code cannot be profiled and singled out, then hacked, but that kind of executable, constantly morphing and defending against reverse engineering efforts also wreaks havoc on the antivirus industry's profiling ability.

Re:Oh goody (1)

sillybilly (668960) | about 7 months ago | (#46917637)

The first thing to worry about in such situations is to be careful not to develop uncontrollable artificial intelligence that gets out of hand, becomes smarter than humans, and eats humans alive for breakfast, because it can. As long as hardware is in its present state there is probably little danger, but one has to be careful about the implications of present stuff running on hardware 50 years from now. Efficient code should not drive hardware performance requirements through the roof, and the hardware people should chill out and just take up a comfortable but not so profitable position in the world selling their chips as a commodity, and letting them last 30 years before a new sale is made, compared to the new chip every 18 months trend, and the huge bottom line profits that delivers - it also delivers increased danger of AI.

And by the way you should never just drop running old code, like a dictionary program from 95, but stick with the backward compatibility principles, while promoting safer, self-morfing, self-defending code, and let the customers spend the money on it, as they see fit, when it becomes available to them. Security means a lot, but it's not everything, because other things take priority over it - like yeah, it'd be nice to have a castle, but all I got is a shed and I'm busy farming for food, and if the shit hits the fan I run up into that mountain area over there in the distance, as a castle, as a last resort, without the food reserves like I could afford to keep in a real castle. Everyone has a different circumstance on how much castle they can afford, and you can't erect a building code for all the people living in shacks - where the wind may enter, the rain may enter, but the king of England may not enter - that mandates everyone build a castle for themselves and live in it. Everyone will gather as much comfort and security as they can possibly come up with in their life, and if you set the minimum required comfort and security bar too high, you're just ..

Re:Oh goody (1)

sillybilly (668960) | about 7 months ago | (#46917807)

In fact the danger of AI is so great that it's preferable not to mess with self-modifying, self-healing code, especially on the military robot front, and accept that your stuff is going to get reverse engineered, and live with that. There is something good about open standards.

And I don't understand what the big deal is with people creating commodity items for small profit, for the service and benefit of everyone? How can you hold back the relentless drive to more power, more strength on the computer hardware front, in face of AI danger that's gonna kill us all. The answer is the same as in case of the Cold War, in face of the drive for more nuclear power that's gonna kill us all - treaties, instead of nuclear disarmament treaties, signing computer hardware disarmament treaties. Now one feature of treaties is that they never last, everyone keeps violating them, so why is it that the Old World Order powers kept signing them, when they didn't mean anything anyway. Well, they are better than absolutely nothing, that's pretty much it. By the way there were some nations that held to the treaties they signed, such as that signed between either Oman and GB, or UAE and GB, (or some similar country in a different part of the world, I can't really find it right now, or not looking it up now for personal excuse reasons) that still stands today, and the treaty signers have held to it and did not violate it for over 100 years. Now that's a good muslim, that cares about the treaties he signs, and such sticking to treaties is only possibly with the discipline of Islam. Plus it probably was in their self interest at all time to keep sticking to it, and then it's really easy to stick to a treaty when sticking to it is in your self interest. Kind of. How many nuclear disarmament treaties can you really afford to stick to, for self interest reasons - on the one hand, nuclear arm's race is gonna kill us all if it gets out of hand, on the other hand you can't let the other guy have more nuclear power than you do, so you gotta keep up the relentless drive for more. The same argument goes for computer technology, and robots with artificial intelligence. There is a computer game called Mechwarrior that the army should focus on, where the AI part, the brain stays as a human brain, and even in the UAV's you never get self-driving cars, but there is a human brain via a remote control present in it. Self driving cars are very dangerous, I mean self driving in a desert terrain, not self driving that follows a magnetic track laid in the asphalt type of deals.

Re:Oh goody (1)

NotSanguine (1917456) | about 7 months ago | (#46906167)

Now you can pay $4000 for a drive that won't last 2 years! Yeah.. sign me up.

Huh? What are you blathering on about, AC? From TFA:

In all, SanDisk announced four new data center-class SSDs. As the drives are enterprise-class, which are typically sold through third parties, SanDisk did not announce pricing with the new drives.[Emphasis added]

Re:Oh goody (1)

K. S. Kyosuke (729550) | about 7 months ago | (#46906209)

They didn't announce pricing because it's not for us mortals, but they did announce the technology to make us and the competitors jealous. ;)

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46951603)

K. S. Kyosuke: You've been called out (for tossing names) & you ran "forrest" from a fair challenge http://slashdot.org/comments.p... [slashdot.org]

Re:Oh goody (2)

fnj (64210) | about 7 months ago | (#46906249)

Easy there, pardner. You can type "sandisk optimus max" into Google and up comes an ad selling a SanDisk Optimus Eco 1.6 TB for $3,417.25.

So while it's true that AFAIK you can't find pricing info on the Optimus Max, you can make book that it's gonna be on the high side of that figure. IMHO $4000 is a low estimate.

Re:Oh goody (1)

kimvette (919543) | about 7 months ago | (#46910985)

For the first few months, until Intel, Crucial, Samsung, and the rest catch up.

Re:Oh goody (1)

fnj (64210) | about 7 months ago | (#46915697)

Flash technology is nearing its practical limits, and there isn't much potential for further savings from production scale either. There isn't a whole lot of room left for further economy. I.e., it ain't gonna happen, not the way people are dreaming.

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46907593)

Well, you can make a few assumptions:

First off, it probably won't be much cheaper than existing consumer-level SSDs ($0.50-$0.75 per GB). And it probably won't be more than 2x as expensive as current enterprise SSDs (which are around $2.00-$2.50 per GB).

Something in the $4k-$8k range for a 4TB SSD is probably in the ballpark. If they managed to sell it for $1k, it would be really big news.

Not in my experience. (5, Interesting)

aussersterne (212916) | about 7 months ago | (#46906199)

Anecdotal and small sample size caveats aside, I've had 4 (of 15) mechanical drives fail in my small business over the last two years and 0 (of 8) SSDs over the same time period fail on me.

The oldest mechanical drive that failed was around 2 years old. The oldest SSD currently in service is over 4 years old.

More to the point, the SSDs are all in laptops, getting jostled, bumped around, used at odd angles, and subject to routine temperature fluctuations. The mechanical drives were all case-mounted, stationary, and with adequate cooling.

This isn't enough to base an industry report on, but certainly my experience doesn't bear out the common idea that SSDs are catastrophically unreliable in comparison to mechanical drives.

Re:Not in my experience. (2)

Number42 (3443229) | about 7 months ago | (#46906241)

In general, fewer moving parts = lower chance of failure.

Re:Not in my experience. (0)

edibobb (113989) | about 7 months ago | (#46906269)

Sure, but think of all the electrons moving around in SSDs.

Re: Not in my experience. (-1)

Anonymous Coward | about 7 months ago | (#46906287)

Think of all the atoms moving in the universe! It's due to fail any day now but I can't find a suitable replacement!

Re:Not in my experience. (1)

viperidaenz (2515578) | about 7 months ago | (#46906613)

Think of all the molecules being twisted around in spinny disk drives.

Re:Not in my experience. (1)

Number42 (3443229) | about 7 months ago | (#46907793)

Weren't optical disks deprecated?

Re:Not in my experience. (1)

SuiteSisterMary (123932) | about 7 months ago | (#46909411)

Don't think about them! If you do, quantum uncertainty kicks in!

Re:Not in my experience. (2)

bloodhawk (813939) | about 7 months ago | (#46906441)

I have had the opposite experience: six SSDs, of which four have failed; the two still alive are less than 12 months old. Meanwhile, 16 physical 2TB and 3TB disks are currently all running. Both our experiences are anecdotal, though I do believe the current failure rate on SSDs is still significantly higher than on physical disks (at least it was in the last report I read on them early last year).

Re:Not in my experience. (1, Troll)

Bryan Ischo (893) | about 7 months ago | (#46906479)

Let me guess ... you bought OCZ drives because they were cheap, and even though they kept failing, you kept buying more OCZ drives, and they failed too?

It's a common story. What I don't understand is, why *anyone* buys an OCZ drive after the first one fails.

Re:Not in my experience. (0)

Anonymous Coward | about 7 months ago | (#46906727)

Might not be OCZ. After all, why would any sane Slashdotter buy an OCZ drive in the last 12 months (as per the two drives mentioned)? By that time there was plenty of reason and evidence not to buy OCZ.

Re:Not in my experience. (2)

Rockoon (1252108) | about 7 months ago | (#46907157)

yes...

Buying an SSD only from Sandisk, Samsung, or Intel is a no-brainer. These are the companies that actually make flash chips..

OCZ and the various re-branders begin at a competitive disadvantage and then make things worse in their endless effort to undercut each other.

Re:Not in my experience. (1)

Hadlock (143607) | about 7 months ago | (#46907401)

I don't know why you got modded down for pointing out that OCZ drives are utter trash; they've consistently outranked all of their competitors combined in number of returns since they came out. They were recently sold to another brand, but the damage to the brand has already been done. It's been known for years that OCZ = ticking time bomb. Nobody has complaints about quality drives like Intel and Samsung.

Re:Not in my experience. (1)

bloodhawk (813939) | about 7 months ago | (#46910179)

Actually, the failed drives were three Samsungs and one SanDisk.

Both small sample sizes, (1)

aussersterne (212916) | about 7 months ago | (#46907571)

and there's workload and power-on hours and all of that stuff to consider, too. So of course it's not scientific by any stretch of the imagination.

But we've been very happy with our Intel SSDs and will continue to buy them.

Re:Not in my experience. (1)

Type44Q (1233630) | about 7 months ago | (#46907243)

I've had 4 (of 15) mechanical drives fail in my small business over the last two years

Let me guess... Seagate? Let me

Re:Not in my experience. (1)

Type44Q (1233630) | about 7 months ago | (#46907257)

Ack, I swear that typo wasn't there when I clicked "submit!"

Mix. (1)

aussersterne (212916) | about 7 months ago | (#46907563)

Two Seagate 2TB, upon which we switched loyalties, then two WD Green 2TB.

The Seagates both had spindle/motor problems of some kind; they didn't come back up one day after a shutdown for a hardware upgrade. The WD Green 2TB drives both developed data integrity issues while spinning and ultimately suffered SMART-reported failures and lost data (we had backups). One was still partially readable; the other couldn't be mounted at all.

Is there some kind of curse surrounding 2TB drives?

Re:Mix. (1)

Type44Q (1233630) | about 7 months ago | (#46907665)

Is there some kind of curse surrounding 2TB drives?

Beats me; my experience with 'em is limited to one WD Elements 2TB external desktop drive (with, I believe, a 3.5" Green inside) that's been going strong for nearly a year - but I'm rabidly fanatical about placing it on a block of foam for reads/writes and immediately disconnecting it afterwards...

Re:Oh goody (1)

Joce640k (829181) | about 7 months ago | (#46907079)

Now you can pay $4000 for a drive that won't last 2 years! Yeah.. sign me up.

With capacity like this they could put in a RAID0 option which halves the capacity but increases the reliability by orders of magnitude. If corruption is detected you can grab the shadow copy, remap it somewhere else, and mark the block as bad. The chances of two blocks failing at the exact same time are insignificant.

Re:Oh goody (1)

pla (258480) | about 7 months ago | (#46907609)

With capacity like this they could put in a RAID0 option which halves the capacity but increases the reliability by orders of magnitude.

SSDs don't tend to fail quite the same way that HDDs do, which I suspect leads to the (erroneous) belief that they fail more often (others in this discussion have already linked to large-scale studies that found a 1.5% failure rate in the warranty period vs 5% for HDDs).

When HDDs fail, you have usually had literally months of fair warning - SMART failures you ignored, weird noises, occasional checksum errors in the logs and eventually periodic bluescreens... And when they die, you can usually still recover 99.9% of the data on them.

When SSDs fail, the lights just go out. One millisecond you have it working fine, the next, it just doesn't exist anymore as a device in your computer. And on reboot, it might come back as an actual visible device, but it has nothing left on it (no doubt you could theoretically recover data by removing the chips and reading them one by one, but not many people have that as a realistic option).

That said, you could buy two at half the size each and RAID0 them - But I don't think you could have that done transparently within the device itself and have it work as intended.

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46908691)

Just in case that wasn't a typo, RAID0 (striping) does the opposite of what you said. I hope if you ever try to increase your storage reliability you use RAID1 (mirroring) instead.

Re:Oh goody (0)

Anonymous Coward | about 7 months ago | (#46910279)

Surely you mean RAID 1

Re:Oh goody (1)

kimvette (919543) | about 7 months ago | (#46910987)

> With capacity like this they could put in a RAID0 option which halves the capacity but increases the reliability by orders of magnitude.

RAID0 doubles your chance of disk failure. You're thinking of RAID1.
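
A quick worked example of the difference, using an assumed 5% annual per-drive failure probability (a made-up round number purely for illustration):

#include <stdio.h>

/* Two-drive array math under an assumed 5% annual per-drive failure
 * probability.  RAID0 dies if either drive dies; RAID1 only if both do. */
int main(void)
{
    double p = 0.05;                            /* assumed per-drive failure rate */

    double raid0 = 1.0 - (1.0 - p) * (1.0 - p); /* either fails: ~9.75% */
    double raid1 = p * p;                       /* both fail:    ~0.25% */

    printf("RAID0 array failure: %.2f%%\n", raid0 * 100.0);
    printf("RAID1 array failure: %.2f%%\n", raid1 * 100.0);
    return 0;
}

In practice RAID1's advantage is smaller than the raw product suggests, because a failed mirror has to be noticed and rebuilt before the second drive dies.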

arrgh (1)

markhahn (122033) | about 7 months ago | (#46906077)

SSD vendors should be rushing to get NVMe out the door, rather than wasting time on capacity. Flash does not and simply never will scale the same way capacity in recording media (including the media mounted in spinning disks) does...

Re:arrgh (0)

Billly Gates (198444) | about 7 months ago | (#46906081)

Bahahaha

My Samsung Pro has been tested to go over 75 TB of writes before dying, and it delivers the IOPS of 100 hard disks in a RAID for just a few hundred dollars.

This is like arguing for the future of punch cards because they do not lose magnetism like a disk can. Only SSDs are truly suited for poorly written video apps like Premiere, where a single edit adds a TB easily.

Re:arrgh (0)

Dave Owen (3541825) | about 7 months ago | (#46906237)

...poorly written video apps like Premiere, where a single edit adds a TB easily.

I'm having trouble imagining how a single edit could add a TB. Could you explain?

Re:arrgh (1)

Billly Gates (198444) | about 7 months ago | (#46906253)

Easy. Each edit makes a copy of the video file.

Re:arrgh (2)

K. S. Kyosuke (729550) | about 7 months ago | (#46906311)

I noticed years ago that Premiere was dumb, but I would have thought things would improve over the years. Especially in the age of GPU-accelerated non-destructive editing (where the need for caching processed results has somewhat diminished).

Re:arrgh (0)

Anonymous Coward | about 7 months ago | (#46906407)

You would think that Premiere would improve? This is Adobe we're talking about here. They mostly just raise prices and find ways to make licensing less palatable while adding useless features that nobody asked for. Bug fixes don't sell upgrades.

Just keep reminding yourself that this is the same company that created Flash Player and Adobe Reader, and you'll understand everything you need to know about why any given Adobe product sucks.

Re:arrgh (1)

Dave Owen (3541825) | about 7 months ago | (#46906551)

Premiere has been GPU-accelerated for a while now and has always been non-destructive. I've been using it since 4.2. I've also used the other main editors in the semi-pro market and I have no particular bias towards PPRO, I just find it frustrating to hear the misinformation that gets spread about it. I've actually come to prefer PPRO over the Final Cut offerings.

Re:arrgh (0)

Anonymous Coward | about 7 months ago | (#46951451)

K. S. Kyosuke: You've been called out (for tossing names) & you ran "forrest" from a fair challenge http://slashdot.org/comments.p... [slashdot.org]

Re:arrgh (1)

Dave Owen (3541825) | about 7 months ago | (#46906539)

I must be misunderstanding you or missing a troll or something. Are you talking about Adobe Premiere Pro? Because it sure doesn't work like that. In fact I've never heard of any video editing software that does. And I still can't imagine how a single edit could add a TB of data even if it did. My current video working drive is 1TB in total and I rarely get anywhere near capacity on it even if I'm working on several projects at once.

Re:arrgh (0)

Anonymous Coward | about 7 months ago | (#46907235)

75TB of write endurance is absolutely piss poor for any SSD.

Re:arrgh (1)

houstonbofh (602064) | about 7 months ago | (#46906101)

Funny, it seams to be doing just that. It just started way behind, so it will take a while to catch up. That and the abandonment of density increases on spinning media.

Re:arrgh (0)

Anonymous Coward | about 7 months ago | (#46906353)

Seams are on pants.

Re:arrgh (1)

WuphonsReach (684551) | about 7 months ago | (#46907617)

That and the abandonment of density increases on spinning media.

They haven't abandoned trying to make spinning hard drives with higher capacity; they just hit the technical limits of the current technology cycle.

Seagate already announced a 6TB 3.5" for next year. With possibly room for a 10x improvement in density over 5-10 years. So a 3.5" drive with 20-30TB of storage is now likely within reach.

Re:arrgh (0)

Anonymous Coward | about 7 months ago | (#46908767)

Seagate and WD are getting "LAZY", as they say...

Re:arrgh (0)

Anonymous Coward | about 7 months ago | (#46908783)

Aren't 6TB drives available at Newegg already today?

Re:arrgh (1)

kimvette (919543) | about 7 months ago | (#46910997)

> Seagate already announced a 6TB 3.5" for next year.

THIS year.

Re:arrgh (1)

viperidaenz (2515578) | about 7 months ago | (#46906629)

and what happens when someone figures out how to make flash memory with infinite writes?
If someone can figure out how to jump a charge across the insulating layer without damaging it, flash memory will never wear out.

Re:arrgh (1)

Anonymous Coward | about 7 months ago | (#46906799)

Flash write endurance can be restored [techspot.com] by heating the cells up to ~250 degrees Celsius every now and then.
There are ideas about flash chips with heaters close to each memory block, but as far as I know that has not reached consumers yet.
I don't think the industry is in much of a hurry either. Of all the consumer-grade SSD failures I've heard of, none have been due to wear-out.
With normal usage and functioning wear leveling, you will have replaced the entire computer a couple of times before wear becomes an issue.

Re:arrgh (1)

ultranova (717540) | about 7 months ago | (#46907133)

If someone can figure out how to jump a charge across the insulating layer without damaging it, flash memory will never wear out.

Limited lifespan [wikipedia.org] is good for the Powers That Be, so even if such a technology exists, it's not for consumers like you. Your role is to run the economic Red Queen's Race [wikipedia.org] in a desperate and ultimately futile attempt to keep your position in the hierarchy, all for the glory of the 1% and their masters.

Finally the disk drive can die (3, Insightful)

Billly Gates (198444) | about 7 months ago | (#46906097)

It is so archaic in this day and age of miniaturization to have something mechanical bottlenecking the whole computer. It just doesn't fit in the 21st century.

Those who have used them will agree with me. It is like night and day, and there is no way in hell you could pay me to do things like run several domain VMs on a mid-20th-century spinning mechanical disk. No more 15-minute waits to start up and shut down all 7 VMs at the same time.

Not even a 100-disk array can match the IOPS (input/output operations per second) that a single SSD can provide. If the price keeps going down, in 5 years from now only Walmart specials will have any mechanical disk.

Like tape drives and paper punch cards, I am sure it will live on somewhere in a storage-oriented server IDF closet or something. But for real work it is SSD all the way.

Re: Finally the disk drive can die (1)

Anonymous Coward | about 7 months ago | (#46906115)

But right now SSDs are just too expensive. Sure they aren't that bad if we are talking 16 GB in a phone or tablet, but for a 500 GB one for a home computer? Forget about it for at least a few more years!

Re: Finally the disk drive can die (1)

Bryan Ischo (893) | about 7 months ago | (#46906129)

I just bought a Samsung 840 Evo 250 GB drive for like $150. I believe that the 500 GB was under $300.

That is eminently affordable.

Re: Finally the disk drive can die (1)

statemachine (840641) | about 7 months ago | (#46906193)

Looking at Frys.com, I see a 256GB SSD at $150. But at the same price, one can buy a 3 TB hard disk.

Hopefully these new SSDs will put more price pressure on the bottom. It was several years before answering machines switched from tape to SSD, too, for a lot of the same kinds of reasons. Don't judge me.

Re: Finally the disk drive can die (1)

Bryan Ischo (893) | about 7 months ago | (#46906205)

Well if you really need 3 TB, then yeah, you have a hard choice to make.

The vast majority of people should, I believe, do just fine in 250 GB or less. I know I certainly can, and am more than happy to trade 2.5 TB of space I will never use for a drive that actually makes my computer fast instead of tying me to data storage speeds of the 1980s ...

Of course, the vast majority of people aren't even buying PCs anymore, they're just buying phones and pads with flash storage already built in.

Re: Finally the disk drive can die (1)

Billly Gates (198444) | about 7 months ago | (#46906223)

I still have a mechanical disk. It holds most of my programs, and my profiles are stored there, as I do not need acceleration to open a .docx file.

But with these coming out, in 5 years it will be affordable to leave mechanical disks behind for anything but bulk storage, much like tape archives today.

Re: Finally the disk drive can die (1)

statemachine (840641) | about 7 months ago | (#46906255)

I have a lot of photos, documents, and on/near-line backups. Over 15 years' worth (and for this crowd, the surprising thing is that it isn't porn). Every few years I rebuild my server with a new RAID, and I buy based on the $99 range. A 6-disk RAID gives me a lot more capacity than SSDs would right now. And I have a physical external backup disk.

You could say my setup is paranoid, but I've had so many disk failures, RAID failures (software/OS/filesystem), and backup failures, it's not even funny. And then there was the one time the house burned down. So....

I necessarily go with the most bang for the buck that I can afford.

Re: Finally the disk drive can die (1)

viperidaenz (2515578) | about 7 months ago | (#46906641)

I have a lot of photos, documents, and on/near-line backups

You mean illegally downloaded movies and TV shows?

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906703)

Since for some reason Slashdot isn't letting me log into just this article, just take my word it's me, statemachine.

I have a lot of photos, documents, and on/near-line backups

You mean illegally downloaded movies and TV shows?

Cute. No, I mean photos and documents. It's hard to believe that one would have so many pictures and even videos from events, gatherings, and travels a person would want to save. It's almost like... I have a life.

But I also have copies of all my music CDs, and everything I've bought from iTunes and Amazon.

There may even be a preserved copy of my original slackware installation.

That may be hard for you to believe, peering up from your mom's basement, but I assure you, it's quite the reality.

Re: Finally the disk drive can die (1)

Rich0 (548339) | about 7 months ago | (#46907293)

Same boat - though I do have an SSD I use for the OS, and I rsync it daily to the RAID (not so much because I think SSD is any less reliable than a hard drive, but because I only have one SSD and I don't want to be stuck doing a full reinstall/reconfig/restore due to the loss of any single drive).

SSDs are really only a complete HD replacement if you don't do anything that involves video/multimedia, which generally includes gaming (unless you don't mind uninstalling/reinstalling games all the time). They still have their use even if you do those things, especially if you're willing to shuffle data around as needed.

People have been proclaiming the death of the HD for ages, but until the price difference becomes more like the difference between 5400RPM and 7200RPM HD and not the difference between a golf cart and a BMW, that simply isn't going to happen.

Re: Finally the disk drive can die (1)

jon3k (691256) | about 7 months ago | (#46907583)

Depends on your budget, but that's not really true for most people anymore. You can get a very high performance, name brand 500GB SSD for $259 [newegg.com] now.

Re: Finally the disk drive can die (1)

Rich0 (548339) | about 7 months ago | (#46907847)

Depends on your budget, but that's not really true for most people anymore. You can get a very high performance, name brand 500GB SSD for $259 [newegg.com] now.

Sure, but everything I said still holds. 500GB is enough that it would probably cover gaming as long as you don't have TOO many games installed, but isn't really much at all if you're storing video.

For the average person, even 500GB is overkill.

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46923011)

Great. So a replacement for my $900 spinning-rust array will only cost $10,000 rather than $12,000.

Re: Finally the disk drive can die (1)

fnj (64210) | about 7 months ago | (#46906273)

Well if you really need 3 TB, then yeah, you have a hard choice to make.

No, just no. Nothing hard about it. Takes me about one second to pick the 3 TB hard drive over twelve 250 GB SSDs at 16 times the price.

Re: Finally the disk drive can die (1)

smash (1351) | about 7 months ago | (#46906335)

Pretty much. Unless you are a niche user doing video editing or something, just buy a spinning disk about 4+ times larger than you need and effectively short-stroke it. I'm doing just this on the machine I am currently using, with a 2TB disk of which about 300 GB is in use. It is reasonably snappy. It's not SSD fast, but it is plenty fast enough for general use, was about a hundred bucks.

And I don't waste my time shuffling data around constantly due to lack of space on my SSD. SSD caching I can see being a benefit, but unfortunately the Intel chipset can only use 30-60GB or something for cache.

Re: Finally the disk drive can die (1)

jon3k (691256) | about 7 months ago | (#46907601)

A short-stroked HDD isn't anywhere near the performance of an SSD. It's not even in the same ballpark. For a 7200RPM SATA drive we're talking peak sequential reads in the 100MB/s range and around 150 IOPS. Even bargain-basement SSDs have 5 times that throughput and hundreds of times the IOPS. And you can get a 500GB SSD for $259 [newegg.com].
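
Those HDD numbers fall straight out of the mechanics; here is a rough model with assumed seek times (the 8 ms and 3 ms figures are typical assumptions, not measurements):

#include <stdio.h>

/* Where HDD random-I/O figures come from: average seek plus half a
 * rotation per access at 7200 RPM. */
static double iops(double seek_ms)
{
    double half_rotation_ms = 0.5 * 60000.0 / 7200.0;   /* ~4.17 ms */
    return 1000.0 / (seek_ms + half_rotation_ms);
}

int main(void)
{
    printf("full-stroke (8 ms seek):   ~%.0f IOPS\n", iops(8.0));  /* ~82  */
    printf("short-stroked (3 ms seek): ~%.0f IOPS\n", iops(3.0));  /* ~140 */
    /* versus tens of thousands of random IOPS for even a cheap SSD */
    return 0;
}

Short-stroking shortens the seeks, but rotational latency is fixed, which is why even a short-stroked disk stays orders of magnitude behind an SSD on random I/O.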

Re: Finally the disk drive can die (1)

Jeremi (14640) | about 7 months ago | (#46907721)

And I don't waste my time shuffling data around constantly due to lack of space on my SSD

Nor should you, when that is something that the computer can do automatically [wikipedia.org] for you.

Re: Finally the disk drive can die (1)

BitZtream (692029) | about 7 months ago | (#46907771)

just buy a spinning disk about 4+ times larger than you need and effectively short-stroke it.

Which offers you almost no noticeable performance advantage other than sequential reads/writes.

It's not SSD fast, but it is plenty fast enough for general use, was about a hundred bucks.

It's nowhere near as fast; it's not only not in the same ballpark, it's not even on the same hemisphere of the planet. The fact that you're even acting like it's close means you've never actually used an SSD as your only drive; you wouldn't make such a stupid comparison if you had.

And I don't waste my time shuffling data around constantly due to lack of space on my SSD

Because you're doing it wrong.

but unfortunately the Intel chipset can only use 30-60GB

What? Why would you have the chipset control your caching? That's stupid. Use an OS and file system that isn't brain-dead and is aware of the entire storage stack, such as FreeBSD (with ZFS) or OS X with its built-in support for using an SSD as cache.

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46907361)

Or do what I do - I have both a 250GB SSD and a 2TB HDD. For 2 times the price. Not 16 times.

I picked a 2TB instead of a 3TB because the failure rates seem higher for 3TBs than 2TBs.

Re: Finally the disk drive can die (1)

Blaskowicz (634489) | about 7 months ago | (#46906309)

250GB would be about good enough for a nice, somewhat sorted-out music collection - not stored as 128K MP3s.
That doesn't leave room for junk/unsorted/low-quality music files while you build it up, nor room for a pig of an OS, Linux ISOs, and whatever other kinds of data. And then you still need a hard drive to back things up.

So if presented with that choice I'll gladly pick the 3TB drive. Too bad if I have to wait a couple more seconds for the music player, the web browser, etc. to launch. It's worth losing that speed; I don't do video editing or any high-end 3D, real-time audio, or engineering stuff.
Even for playing around with virtual machines, drive capacity is needed.

Yeah, I could certainly use the 250GB ultra-fast drive. But if that means relying on a file server, external storage, or a more or less personal "cloud", it simply means getting back to using a hard drive. "Regular" people tend to buy USB-enclosed ones.

Re: Finally the disk drive can die (1)

Bryan Ischo (893) | about 7 months ago | (#46906489)

Dude, if you have a 250 GB music collection you are in the like 1% of the 1% of computer users. Seriously. And virtual machine images? That's like 1% of the 1% of the 1% ...

The vast majority of people are not ripping CDs to losslessly compressed files on their computers and/or ripping off artists by pirating music and movies.

I stand by my claim that the majority of users would do just fine with 250 GB.

You may not be in that majority, and so for you, yeah, you're just going to have to continue to live with spinning metal platter technology of the 1960's for a while longer ...

Re: Finally the disk drive can die (1)

l0n3s0m3phr34k (2613107) | about 7 months ago | (#46906521)

Well, that makes me feel better! I don't have 250GB of just music, but I'm getting there. I've got both VMware and VPC images too, and a rack with four PCs on it... I guess that puts me in the 1%, or the 1% of the 1%?

Re: Finally the disk drive can die (1)

BitZtream (692029) | about 7 months ago | (#46907785)

Yes, it does. Are you really so daft that you weren't aware of that and had to have someone on Slashdot confirm it for you?

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906903)

What the fuck "ripping off artists by pirating music and movies"

This is slashdot.org. I don't know how you got here, but you're most definitely on the wrong site...

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906947)

I originally started to agree with you, but then I remembered that many top-tier video games are north of 20GB. It doesn't take many games to fill that SSD quickly.

Re: Finally the disk drive can die (1)

Billly Gates (198444) | about 7 months ago | (#46908323)

That would be a waste. A mechanical disk is better for MP3s.

Booting, editing videos, running virtual machines, gaming, compiling software, or running any database or other workload that needs lots of disk I/O will benefit many times over from an SSD.

Re: Finally the disk drive can die (1)

Mr D from 63 (3395377) | about 7 months ago | (#46906977)

Well if you really need 3 TB, then yeah, you have a hard choice to make.

Why make a choice? Just get both. SSD for the OS, HDD for storage. A good balance of cost and performance.

Re: Finally the disk drive can die (1)

Daniel_Staal (609844) | about 7 months ago | (#46910075)

Exactly. All the major OSes (Mac, Windows, Linux, BSD) even have decent ways to combine them automatically into one storage device that gives you the best of both worlds most of the time. (Different compromises on different systems, of course - most basically have you lose the space of the SSD, keeping it as a purely caching drive.)

Re: Finally the disk drive can die (1)

K. S. Kyosuke (729550) | about 7 months ago | (#46906251)

If it's an "enterprise" disk (horrible word!), I can imagine a few reasons why a 4TB SSD would be desirable for companies: 1) higher spatial density, 2) almost insignificant heat emission, 3) much higher reliability, 4) the fact that there can easily be a much smaller difference in prices between enterprise HDDs and enterprise SSDs - the latter were historically more expensive by virtue of higher performance and reliability, but you get these things in SSDs "by default" and disks get thrown into large arrays today anyway (which take extra care of the "reliable" and "available" part).

Re: Finally the disk drive can die (1)

K. S. Kyosuke (729550) | about 7 months ago | (#46906259)

Crap, swap "the former" in place of "the latter". But most people with brains have already done that anyway, right?

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46951475)

K. S. Kyosuke: You've been called out (for tossing names) & you ran "forrest" from a fair challenge http://slashdot.org/comments.p... [slashdot.org]

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906459)

Who the hell on /. hasn't been running multi-disk systems basically forever? I've never had a workstation that didn't have a small, fast system drive and a large, economical bulk data drive.

In years past the system drive was a SCSI disk, a Raptor, or some IDE RAID setup for speed. These days it's an SSD, which provides about 100x the performance at about 1.5-2x the price of the old "top performance" options like a Raptor (and is actually cheaper than a 90s SCSI rig). The data drives are still big, slow, low-cost-per-GB drives. These days they're the "green" drives.

My current setup = 500GB 840 EVO ($275) + two 3TB WD Green drives (mirrored) ($240) + one 3TB external drive for backup ($120). About the same cost as my previous setups in older rigs.

Re: Finally the disk drive can die (1)

Bryan Ischo (893) | about 7 months ago | (#46906495)

I work all day long, on large projects, and manage just fine at work with 480 GB of SSD storage. My work is as a software developer with 6 - 8 40 GB source/build trees checked out at any one time.

So I for one don't run multi-disk systems. The headache of having to think about whether or not I should store something on 'fast' storage vs 'cheap bulk' storage is just not something I ever want to think about. I want it all fast, all the time.

Re: Finally the disk drive can die (1)

viperidaenz (2515578) | about 7 months ago | (#46906657)

Agreed. When my work put SSDs in the developer machines, build times went from 40 minutes to 6 minutes.

Re: Finally the disk drive can die (1)

Daniel_Staal (609844) | about 7 months ago | (#46910139)

So I for one don't run multi-disk systems. The headache of having to think about whether or not I should store something on 'fast' storage vs 'cheap bulk' storage is just not something I ever want to think about. I want it all fast, all the time.

What OS do you run? If it's Mac, look at Fusion. If it's BSD, look at ZFS. If it's Linux, look at LVM or bcache. If it's Windows, look up Smart Response. Any would automate that decision in a decent way.

Now, if you have the money I'll fully agree that all SSD is ideal, and most of the above have their own drawbacks. (I actually like Apple's idea with Fusion: The HDD is the 'slow cache', instead of the SSD being the 'fast cache'.) I'm just saying that mixing SSDs and HDDs doesn't mean you have to be thinking about it all the time either.

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906811)

Looking at Frys.com, I see 256GB SSD at $150. But at the same price, one can buy a 3 TB hard disk.

You can buy a 3TB hard drive, but if your system isn't on an SSD these days, it is like upgrading from a 3.0GHz CPU to a 3.2GHz one when you only have 256MB of RAM.

A HDD isn't a replacement for an SSD.

Re: Finally the disk drive can die (2)

fnj (64210) | about 7 months ago | (#46906265)

I just bought a Samsung 840 Evo 250 GB drive for like $150. I believe that the 500 GB was under $300.

So what? I picked up a 3 TB hard drive for $110. You're paying 16 times as much per GB for the SSD. They are still pie in the sky for serious storage.
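
For reference, the arithmetic behind that "16 times" figure, using the prices quoted in this thread (250 GB SSD for $150, 3 TB HDD for $110; decimal units, no 1024 pedantry):

    # Rough $/GB comparison using the prices quoted above.
    ssd_price, ssd_gb = 150.0, 250
    hdd_price, hdd_gb = 110.0, 3000

    ssd_per_gb = ssd_price / ssd_gb    # ~$0.60 per GB
    hdd_per_gb = hdd_price / hdd_gb    # ~$0.037 per GB

    print(f"SSD ${ssd_per_gb:.3f}/GB vs HDD ${hdd_per_gb:.3f}/GB "
          f"-> about {ssd_per_gb / hdd_per_gb:.0f}x more per GB")   # ~16x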

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906473)

As we all know, you absolutely must keep your 10 TB pr0n collection on the same disk as your system and apps.

Re: Finally the disk drive can die (1)

greenwow (3635575) | about 7 months ago | (#46906337)

That is eminently affordable.

Until they fail in less than a month because you exceeded their write endurance. There's a reason no one uses SSDs in servers.

Re: Finally the disk drive can die (1)

Bryan Ischo (893) | about 7 months ago | (#46906499)

No one uses SSDs in servers? You have no clue. None.

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906845)

I see a pink slip in your future.

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906699)

SSD write endurance increases dramatically with the amount of spare capacity available for overprovisioning - both the drive's built-in reserve and whatever space the server never touches.

A 480GB Micron M500DC has about 1.9PB of write endurance according to the manufacturer. I know the 800GB version (with the same 1.9PB endurance spec) actually has 1024GB of raw capacity, so that's still 224GB of overprovisioning even if you fill it.

I don't have a clue how much the 4TB SSD will be able to write, but I'm damn sure it's a lot more than 2PB worth of data. 500GB written daily would put it at spec wear-out in about 11 years; a terabyte a day would be around 5 years.

Even accounting for the 1000 vs 1024 discrepancy in my calculations, I think it's a good investment, assuming you actually back up all that data.
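
Back-of-the-envelope version of those numbers. The 2 PB figure is just the "at least this much" estimate from the post above, not a published spec for the 4TB drive:

    endurance_tb = 2000                      # assume ~2 PB of rated writes, per the estimate above

    for daily_writes_tb in (0.5, 1.0):       # 500 GB/day and 1 TB/day
        years = endurance_tb / daily_writes_tb / 365
        print(f"{daily_writes_tb} TB/day -> ~{years:.1f} years to rated wear-out")
    # 0.5 TB/day -> ~11.0 years, 1.0 TB/day -> ~5.5 years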

Re: Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46907047)

Haha, sure. I manage arrays with hundreds of SSDs in them, all used for servers. We put SSDs directly in servers as well. What have you been smoking?

Re: Finally the disk drive can die (1)

Golden_Rider (137548) | about 7 months ago | (#46907131)

That is eminently affordable.

Until they fail in less than a month because you exceeded their write endurance. There's a reason no one uses SSDs in servers.

Maybe not IN servers, but definitely FOR servers. Look up 3PAR, Netapp and so on.

Re: Finally the disk drive can die (1)

WuphonsReach (684551) | about 7 months ago | (#46907667)

Until they fail in less than a month because you exceeded their write endurance. There's a reason no one uses SSDs in servers.

That's never been true. Enterprise SSDs are hitting the point where they are only 2x-3x more expensive than 15k RPM 2.5" SAS drives. For workloads where you don't need terabytes of storage space, but you *do* need the IOPS, a small array of 8 SSDs outperforms the 24- or 48-disk 15k SAS solutions.

An Intel DC S3700 400GB drive advertises that it will last 10x400GB of writes per day for 5 years. That's pretty good write endurance, even for a database workload.

Over the next 2 years, you are either going to see 15k RPM SAS drives drop drastically in $/GB or vanish from the market as SSDs encroach. The price difference of 2x-3x is low enough and the performance gain is high enough that you can do the same things with fewer drives.
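
Converting that advertised rating into a total-bytes-written figure - this is just the 10 drive-writes-per-day x 400 GB x 5 years numbers quoted above, multiplied out:

    drive_gb = 400
    drive_writes_per_day = 10
    years = 5

    total_writes_gb = drive_gb * drive_writes_per_day * 365 * years
    print(f"~{total_writes_gb / 1e6:.1f} PB of rated writes")   # ~7.3 PB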

Re: Finally the disk drive can die (1)

Billly Gates (198444) | about 7 months ago | (#46908347)

The first-generation SSDs failed because of crappy SandForce controllers and their compression, not because of write endurance.

I had a link (sorry, the system I am typing on was re-imaged) where someone tested this out.

It took 75 TB of writes before the mid-range SSDs started to have issues. The pro series can go higher, as they are designed for 4 million writes per cell vs. the 1 million.

Intel, Samsung, SanDisk, and other brands (not Mushkin) can run, and have run, for years and years at 5-15 GB of writes a day. In fact they are as reliable as a mechanical disk, if not more so.

Others can vouch too: it takes a while to kill a disk if it runs an OS with TRIM support (Linux was slow to get this; it was only partial until recent kernels) and you don't fill it up to 99%.

Re: Finally the disk drive can die (1)

Rich0 (548339) | about 7 months ago | (#46907251)

I just bought a Samsung 840 Evo 250 GB drive for like $150. I believe that the 500 GB was under $300.

That is eminently affordable.

$300 for 500GB of SSD vs $100 for a few TB of hard drive. You're talking about a factor of 10 difference.

For a PC that doesn't need to store multimedia SSDs are both practical and to be recommended. For PCs that actually need to store any volume of data, they won't be practical for a LONG time as far as I can see. Sure, the price will drop, but the price of hard drives also drops.

Re: Finally the disk drive can die (1)

davidhoude (1868300) | about 7 months ago | (#46906147)

How much space do you need for your computer?

250GB SSDs are about $100 if you catch them on sale, some even cheaper. The prices are dropping by the day; they were twice the price last year. I expect 500GB to be around $150 soon.

Re: Finally the disk drive can die (1)

Eunuchswear (210685) | about 7 months ago | (#46908281)

SAS?

eMLC?

Re: Finally the disk drive can die (2)

K. S. Kyosuke (729550) | about 7 months ago | (#46906229)

Hard drive is the new tape. SSD is the new hard drive.

Re: Finally the disk drive can die (1)

shitzu (931108) | about 7 months ago | (#46906345)

Word.

Re: Finally the disk drive can die (1)

mwvdlee (775178) | about 7 months ago | (#46906615)

No it's not.
http://soylentnews.org/article... [soylentnews.org]

Re: Finally the disk drive can die (1)

shitzu (931108) | about 7 months ago | (#46907409)

I think the comparison was about functional use, not storage capacity per se. 15 years ago we worked on a hard disk and backed up to tape. Now we work on an SSD and back up to an HDD (at least I do).

Re: Finally the disk drive can die (1)

jones_supa (887896) | about 7 months ago | (#46906625)

PowerPoint.

Re: Finally the disk drive can die (1)

TheRaven64 (641858) | about 7 months ago | (#46907323)

I've got a 1TB SSD in my laptop. It cost about as much as the 256GB SSD in the laptop it replaced, bought two years earlier (shorter than my normal upgrade cycle, but work bought this one). You can pick them up for around £300, which is more expensive than a hard disk, but not by much - about a factor of five, which is half the difference of a couple of years ago. I still have the laptop before that, and even with a processor a few generations older it's still disk-speed limited in a lot of cases, whereas this one rarely is, so it's definitely worth it. That said, my NAS has two 2TB spinning-rust disks and I'm looking at replacing them with 4TB ones. I'd love to replace them with SSDs, but it's not yet worth the cost. I am tempted by a smaller eSATA SSD to use for L2ARC and ZIL though...

Re: Finally the disk drive can die (1)

jon3k (691256) | about 7 months ago | (#46907559)

$259 is unreasonable for a 500GB SSD? [newegg.com] Seems like a steal to me. My first SSD was a 30GB OCZ that cost $135.

Re:Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906169)

It's "night and day" numbnuts. There isn't much difference between "light" and day, is there?

Re:Finally the disk drive can die (2)

K. S. Kyosuke (729550) | about 7 months ago | (#46906227)

Light AND day == doubleplus shiny!

Re:Finally the disk drive can die (1)

jones_supa (887896) | about 7 months ago | (#46906627)

There isn't much difference between "light" and day, is there?

Apparently you haven't been to Finland.

Re:Finally the disk drive can die (1)

Threni (635302) | about 7 months ago | (#46906787)

> It is so archaic in this day and age of microization to have something mechanic
> bottlenecking the whole computer

And yet... nothing has replaced it in terms of cost over its lifespan. SSDs still suck in terms of reliability. If you've got a lot of money then sure, get a smallish one for your OS to boot from, but come back and give me a call when there's something to replace my 1/2TB drives full of pictures/music.

Re:Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906917)

but come back and give me a call when there's something to replace my 1/2TB drives full of pictures/music.

I heard that SanDisk recently announced a 4TB SSD.

Re:Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46906975)

I've had way more HDD failures than SSD failures; I'd pick an SSD any day for a system that needs reliability. And since when did 1/2TB of pictures and music need fast access? Computers have had tiered storage systems for a very long time.

Re:Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46907057)

You've probably HAD more HDDs than SSDs, shitstain.

Re:Finally the disk drive can die (1)

Billly Gates (198444) | about 7 months ago | (#46908391)

That, my friend, has changed.

The issue was the first-generation crappy SandForce controllers and their data compression. If the internal table got corrupted, they were bricked.

This has changed, and there are websites (sorry, my laptop was re-imaged a week ago and used to have the link) which can back up my claims. Mid-range SSDs (not pro or enterprise grade) last to about 75 TB of writes for a 256 GB SSD before write failures. No, they do not just go poof either - that is a controller failing. They slow down, and at that point the event logs gradually report write failures and remappings.

The pro series with 480 GB and higher have ratings up to 1 PB! That easily exceeds a mechanical disk. I have a Samsung Pro which can probably get near the 150-300 TB range.

If PCs booted like phones and tablets, perhaps the PC market would be in a better position? SSD is the answer.

Re:Finally the disk drive can die (1)

BitZtream (692029) | about 7 months ago | (#46907259)

IOPS stands for I/O operations per second. Interrupts have nothing to do with it.

Re:Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46908463)

So tape and Flash SSDs it is, until somebody finishes ReRAM and some kind of holographic memory with wavy patterns and flickering lights.

Re:Finally the disk drive can die (0)

Anonymous Coward | about 7 months ago | (#46910091)

Unless you need lots of space. I can get 16TB (4x4TB) for $650 which is very affordable. With four of these 4TB SSDs, it'll be the price of a car. Magnetic storage will be around for a long time to come. Larger drives will come out soon with even higher capacities at the same price points, whereas with flash, we're reaching the point where they can't shrink the process much more to make it cheaper. They're not getting cheaper at the same speed as they used to.

But sure, a 120GB SSD is enough for grandma.

At that speed? (-1)

Anonymous Coward | about 7 months ago | (#46906137)

Does anyone really need that much porn?

Re: At that speed? (0)

Anonymous Coward | about 7 months ago | (#46906165)

My friend has terabytes of it, no lie. He just has it downloading from file sharing software in the background 24/7. Doesn't even watch most of it. Just has it "just in case he gets horny and has no internet"

Re: At that speed? (0)

Anonymous Coward | about 7 months ago | (#46906635)

:D

Re: At that speed? (1)

shikaisi (1816846) | about 7 months ago | (#46906785)

My friend has terabytes of it, no lie. He just has it downloading from file sharing software in the background 24/7. Doesn't even watch most of it. Just has it "just in case he gets horny and has no internet"

For those eventualities, I recommend he tries an alternative device called a "girlfriend".

Re: At that speed? (2)

eyepeepackets (33477) | about 7 months ago | (#46907109)

Yeah, but the maintenance requirements are very high, the MTBF is unacceptable, and they can say "no."

Shwing!!! (1)

zenlessyank (748553) | about 7 months ago | (#46906171)

I can has 2 please??

Clean or Not, It Makes No Sense (-1, Offtopic)

dmomo (256005) | about 7 months ago | (#46906179)

If the goal is to clean up the UI, why show the URL at all unless asked for (CTRL+L or clicking on the "chip")? Don't lie to me... I'd rather you hide the URL altogether than show me an incorrect one. By showing the protocol, then the host name, you're showing me a valid URL, but it's the wrong one. Either say "Domain: bankofamerica.com", or just show the "chip" and omit the URL altogether.
   

Re:Clean or Not, It Makes No Sense (1, Offtopic)

dmomo (256005) | about 7 months ago | (#46906191)

Oh my. This was supposed to be posted to the previous story. MOD ME INTO OBLIVION!

Time to Fill... (1)

Kaenneth (82978) | about 7 months ago | (#46906221)

How fast can data be pumped through the controller interface?

Re:Time to Fill... (1)

Billly Gates (198444) | about 7 months ago | (#46906235)

On my single SATA3 Samsung Pro with RAPID mode I get about 600 MB a second.

But that is not the real speed bump. My 270 MB/s SanDisk doesn't boot Windows any faster?! Why? Because it is about latency and IOPS (I/O operations per second). I can do heavy simultaneous things like spinning up the 5 virtual machines for my domain in my VMware Workstation virtual network in about 1.5 minutes. This took almost 20 minutes to start and shut down before!

A RAID of 100 MB/s mechanical disks will not be as fast as a single SSD, so for a file server or database it makes a lot of sense: an SSD is more than 6x as fast as a mechanical drive, and it can easily mean 500x faster.
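
A rough illustration of why IOPS rather than sequential MB/s dominates that kind of workload. The figures are ballpark assumptions (roughly 150 IOPS for a 7200 rpm HDD, 75,000 IOPS for a current enterprise SSD), and the read count is arbitrary:

    random_reads = 200_000    # e.g. small 4 KB reads during a busy boot or VM start-up

    for name, iops in (("HDD (~150 IOPS)", 150), ("SSD (~75,000 IOPS)", 75_000)):
        seconds = random_reads / iops
        print(f"{name}: {seconds:,.0f} s for {random_reads:,} random reads")
    # HDD: ~1,333 s, SSD: ~3 s -- roughly a 500x gap, regardless of sequential MB/s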

Re:Time to Fill... (0)

Anonymous Coward | about 7 months ago | (#46906341)

I get ~525 MB/s sustained read and write on my 120GB SATA3 Corsair Force GT. Windows 7 on my PC goes from "Starting Operating System" to login in about 3s (for comparison, my wife's laptop with a 500GB mechanical drive goes from "Starting Operating System" to login in about 45s). I've apparently done 5.7TB of reads and 4.4TB of writes, and SMART reports my drive still has 100% life left...

The only time I need to wait for anything to startup is when it is on my secondary mechanical drive. The difference between a SSD and a HDD is night and day.

The utility of SSDs (1)

mcrbids (148650) | about 7 months ago | (#46906371)

In my laptop, I have an SSD. Upgrading the HDD to an SSD bought me about as much as a new laptop would have, and cost significantly less. I've been able to get 2+ years of extra time out of my old laptop with that upgrade.

So the numbers make sense, here!

We host a heavily database-driven app. Use of an SSD reduces latency by at *least* 95% in our testing. It's a no-brainer. Even if we replaced the SSDs every single year, we'd still come out way ahead. SSDs are where it's at for performance!

Re:Time to Fill... (1)

smash (1351) | about 7 months ago | (#46906347)

Apple's PCIe SSD machines are getting 900 MB per second. SSDs are already faster than SATA can carry, but for all but niche applications it's actually IOPS that you're chasing, and the difference between SSD and spinning disk there is absolutely massive.

For a single user doing "stuff" though, a short-stroked hard drive is about 1/4 the price and well fast enough. And yes, i had a work machine (laptop) with SSD that i ditched and went back to a momentus XT hybrid due to lack of capacity.

Re:Time to Fill... (2)

BitZtream (692029) | about 7 months ago | (#46907815)

For a single user doing "stuff" though, a short-stroked hard drive is about 1/4 the price and well fast enough. And yes, i had a work machine (laptop) with SSD that i ditched and went back to a momentus XT hybrid due to lack of capacity.

You keep saying that, but that doesn't make it magically true.

So you had a laptop with an SSD too small for your working set and that makes SSDs bad? No. It makes you or whoever provisioned the machine incompetent. More likely you were using your work machine for shit you shouldn't have, so you were all pissy that your working set was larger than your storage space.

I'd be willing to bet a month's pay that my 2009 MacBook Pro with an SSD will outperform whatever brand-new laptop you want to buy with spinning iron.

The fact that you think "short stroking" a drive is some sort of massive performance increase shows your ignorance. It's a negligible performance increase for sequential operations and it doesn't do jack shit for random IO, which is thousands of times more important for normal everyday working operations.

Re:Time to Fill... (1)

Kaenneth (82978) | about 7 months ago | (#46910181)

That was one thing I hated when contracting at MS: standard office machines with 70GB drives when the image we need takes 40GB... and no machine upgrades - you have to wait for the new hardware cycle.

Re:Time to Fill... (1)

smash (1351) | about 7 months ago | (#46913285)

Out-perform? Sure. I was using it as a VM test environment. You can talk up your e-peen SSD all you like, but i can do the job for $100 worth of storage with a momentus XT, or i can spend over $400 for the same capacity. The momentus XT is FAST ENOUGH. I never said it was faster. But at 25% of the price, it is most certainly worthy of consideration.

dwarfing? Not quite yet! (3, Informative)

Nagilum23 (656991) | about 7 months ago | (#46906233)

Seagate already announced 8-10TB disks for next year: http://www.bit-tech.net/news/h... [bit-tech.net] .
Now, if SanDisk can deliver 16TB SSDs in 2016 then they might indeed be ahead of the hard disks - but not in 2015.

Re:dwarfing? Not quite yet! (0)

Anonymous Coward | about 7 months ago | (#46906315)

There's no announcement in that link of yours, buddy. There's just ramblings by the CEO about some generalized, nonspecific future. He even had to clarify that whenever they DO become available, supplies will be terribly limited.

Re:dwarfing? Not quite yet! (0)

Anonymous Coward | about 7 months ago | (#46907675)

No, they didn't. And when they do come, they would be 3.5", not 2.5".

Where are the 3.5" SSDs? (4, Interesting)

swb (14022) | about 7 months ago | (#46907105)

Why do SSD makers only make 2.5" SSDs? It seems like a lot of the capacity limitation is self-enforced by constraining themselves to laptop-sized drives.

Why can't they sell "yesterday's" flash density at larger storage capacities in the 3.5" disk form factor? For a lot of the use cases, the 3.5" form factor isn't an issue. More, cheaper flash would enable greater capacities at lower prices.

The same thing is true for hybrid drives - the 2.5" ones I've used have barely enough flash to make acceleration happen; a 3.5" case with a 2.5" platter and 120GB of flash would be able to keep a lot more blocks in flash and reserve a meaningful amount for write caching.

Re:Where are the 3.5" SSDs? (1)

Rich0 (548339) | about 7 months ago | (#46907321)

Why do SSD makers only make 2.5" SSDs? It seems like a lot of the capacity limitation is self-enforced by constraining themselves to laptop-sized drives.

Why can't they sell "yesterday's" flash density at larger storage capacities in the 3.5" disk form factor? For a lot of the use cases, the 3.5" form factor isn't an issue. More, cheaper flash would enable greater capacities at lower prices.

The same thing is true for hybrid drives - the 2.5" ones I've used have barely enough flash to make acceleration happen; a 3.5" case with a 2.5" platter and 120GB of flash would be able to keep a lot more blocks in flash and reserve a meaningful amount for write caching.

I doubt anybody really wants these big SSDs anyway. I mean, who buys an SSD when they need to store 1TB of data? I could see it for certain niches, such as for a cache (even an SSD is cheaper than RAM, and is of course persistent as well). Otherwise anybody storing a lot of data uses an SSD for the OS, and an HD for storage, and you don't need a big SSD for the OS. Still, I wouldn't mind a 3.5" drive just for the sake of it using the same mount as my other drives.

Re:Where are the 3.5" SSDs? (1)

Amouth (879122) | about 7 months ago | (#46907581)

In my case at work, the need for the larger drive is that you only have one drive bay, not the option for two. So when you're traveling you need to take a lot with you and you want it to be fast. Larger SSDs are welcome as long as the price remains in the range of sanity.

Re:Where are the 3.5" SSDs? (1)

jon3k (691256) | about 7 months ago | (#46907643)

You know they make 2.5" to 3.5" drive adapters, right? Most 2.5" SSD even ship with them.

Re:Where are the 3.5" SSDs? (1)

Rich0 (548339) | about 7 months ago | (#46907843)

You know they make 2.5" to 3.5" drive adapters, right? Most 2.5" SSD even ship with them.

Sure, I was just saying that it would be nice to not need one. I'm not going to pay an extra $20 for the convenience though.

Re:Where are the 3.5" SSDs? (1)

petermgreen (876956) | about 7 months ago | (#46910499)

You can get simple brackets pretty cheaply and easily, and they are often thrown in with the drive. However, simple brackets can be problematic in some case designs (including, but not limited to, cases with backplane setups).

Proper adapters that result in a unit with the same dimensions as a 3.5-inch drive are made by at least one vendor, but they are considerably more expensive, harder to find, and I've never seen them included with a drive.

Re:Where are the 3.5" SSDs? (0)

Anonymous Coward | about 7 months ago | (#46910299)

It is more cost efficient to manufacture drives in a format that can be used across all platforms, especially considering that mobile computing has outsold desktop PCs for the last few years.

Re:Where are the 3.5" SSDs? (3, Interesting)

jon3k (691256) | about 7 months ago | (#46907651)

It's not constrained by size. It's the cost of NAND flash that's the limiting factor. And no one is going to manufacture last generation's NAND; it doesn't make any business sense. Ask Intel why they don't sell last year's CPUs at cut-rate prices. Same reason.

Re:Where are the 3.5" SSDs? (0)

Anonymous Coward | about 7 months ago | (#46908591)

Laptop and other small form factors have margins fitting for the premium storage products. (My soul was pulled out of my left ear while typing that)

Re:Where are the 3.5" SSDs? (4, Informative)

Amouth (879122) | about 7 months ago | (#46907663)

There are a few reasons they don't make 3.5" SSDs:

1: Physical size isn't an issue - for the capacities they release that people are willing to pay for, it all fits nicely in 2.5".
2: 2.5" drives work in more devices, including desktops where 3.5" drives live. If nothing is forcing the 3.5" form factor, it would be bad for them to artificially handicap themselves.

Now, for your comment on larger physical drives being cheaper: flash does not work the way normal drives do.

With normal platter drives, areal density directly impacts pricing, as it requires the platter surface to be smoother, the film to be more evenly distributed, the head to be more sensitive, and the actuator to be more precise - all higher-precision requirements that drive up costs by increasing failure rates and manufacturing defects.

Now, in the flash world, they use the same silicon lithography they use for making all other chips. There are two costs involved here.

1: The one-time sunk cost of the lithography tech (22nm, 19nm, 14nm...). This cost is spread across everything that goes through it, and in reality it evens out to no cost increase for the final product, because the more you spend, the smaller the feature and the more end product you can get out per unit of raw material put in.
2: The cost of the raw material going in. No matter what level of lithography you are using, the raw material is nearly exactly the same (some require doping, but costs are on par with each other). So in fact the larger, older lithographic methods become more expensive per unit of product once newer tech is on the market.

Please note that in the CPU world, where you have complex logic sets and designs, there is an added cost for the newer lithography as it adds to the design costs; but for flash there is nearly zero impact from this, as it is such a simple circuit design.
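
A purely hypothetical toy calculation of that "more end product per wafer" point - every number below is made up for the sake of the arithmetic, not a real foundry or NAND figure:

    wafer_cost = 3000.0        # assumed cost to process one wafer, in dollars
    wafer_area_mm2 = 70_000    # rough usable area of a 300 mm wafer

    # Two hypothetical nodes producing the same 64 GB die, one with a smaller feature size
    for node, die_area_mm2 in (("older node", 170), ("newer node", 120)):
        die_capacity_gb = 64
        dies_per_wafer = wafer_area_mm2 // die_area_mm2
        cost_per_gb = wafer_cost / (dies_per_wafer * die_capacity_gb)
        print(f"{node}: {dies_per_wafer} dies/wafer, ${cost_per_gb:.3f}/GB")
    # The smaller die packs more chips onto the same wafer, so cost per GB drops
    # even though the raw material going in is essentially unchanged.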

Re:Where are the 3.5" SSDs? (1)

wjcofkc (964165) | about 7 months ago | (#46907811)

One of the selling points of SSDs is that they take up far less space than a traditional HDD. This is great for many reasons, especially in the data center. Since SSDs first appeared in laptops, the makers may have simply retained that form factor (exclusively) because it's cheaper to make a literal one-size-fits-all drive than many sizes. Think economies of scale. In fact, now that I have paused from my reply to Google it, this does appear to be the case.

Re:Where are the 3.5" SSDs? (1)

BitZtream (692029) | about 7 months ago | (#46907831)

Heat dissipation and power draw are more limiting than space in the case. Physical space is not the issue; they could simply use different packaging and stuff far more silicon into the same 2.5" profile, but it'd burn up. Moving to a 3.5" form factor doesn't add enough surface area to dissipate a much larger thermal load, so it's just not worth it.

Re:Where are the 3.5" SSDs? (0)

Anonymous Coward | about 7 months ago | (#46909435)

Why do SSD makers only make 2.5" SSDs?

They don't [hothardware.com]. The question is why computer makers only support the HDD form factors.

Re:Where are the 3.5" SSDs? (1)

JumperCable (673155) | about 7 months ago | (#46909719)

Laptops & blade servers. They don't really need more space.

Re:Where are the 3.5" SSDs? (1)

m.dillon (147925) | about 7 months ago | (#46910291)

Well they don't really make 5.25" HDDs anymore. 3.5" is next on the list to go. I don't bother with 3.5" HDDs anymore myself, in fact, not even for servers. I stopped buying them last year. Everything is 100% 2.5" now. It's a much nicer form factor and easier to match IOPS requirements against the enclosure with today's drive densities.

In the beginning there were 3.5" SSDs. OCZ for example. They rapidly disappeared. If you go on, say, newegg right now, they list 724 2.5" SSDs and exactly 6 3.5" SSDs. 1.8" is starting to creep up, with 16 offerings.

Performance per cubic meter, anyone?

-Matt

Gotta go fast. (1)

VortexCortex (1117377) | about 7 months ago | (#46907645)

Sweet, let me know when they replace RAM with SSD.

Re:Gotta go fast. (0)

Anonymous Coward | about 7 months ago | (#46911277)

Sweet, let me know when they replace RAM with SSD.

http://www.diablo-technologies.com/

Of course yet something else my company won't buy (0)

Anonymous Coward | about 7 months ago | (#46908971)

You know, things like 8GB of RAM for all the developers at $80 a pop, or SSDs, which I think are around $250 for a 500GB one. (Since that actually costs money, and who the hell cares if I wait 30 seconds for a menu to pop up in my development environment.)

What prices? (1)

antdude (79039) | about 7 months ago | (#46912515)

Are huge SSDs as cheap as old-fashioned huge HDDs?
