
Top Solid State Disks and TB Drives Reviewed

CmdrTaco posted more than 6 years ago | from the stupid-boring-holiday-week dept.

Data Storage

Lucas123 writes "Computerworld has reviewed six of the latest hard disk drives, including 32GB and 64GB solid state disks, a low-energy-consumption 'green' drive, and several terabyte-size drives. With the exception of capacity, the solid state disk drives appear to beat spinning disk in every category, from CPU utilization and energy consumption to read/write speed. The Samsung SSD drive was the most impressive, with a read speed of 100MB/sec and a write speed of 80MB/sec, compared to an average 59MB/sec read and 60MB/sec write speed for a traditional hard drive."


216 comments

Longevity of NAND flash (3, Insightful)

ASkGNet (695262) | more than 6 years ago | (#21821626)

NAND flash deteriorates with use. When used in high-I/O situations, like in a hard drive, how long will it be able to work correctly? If I recall correctly, NAND blocks are guaranteed on the order of 100,000 writes.

Re:Longevity of NAND flash (3, Insightful)

peragrin (659227) | more than 6 years ago | (#21821650)

Yes, but new on-disk topology mappings and newer tech give you roughly a million read/write cycles, and the mappings help distribute the load evenly.

Re:Longevity of NAND flash (3, Insightful)

plague3106 (71849) | more than 6 years ago | (#21821692)

This is always claimed as the solution, "evening out" writes. But I think the question of how long the drive will last is still relevant; all it takes is a mostly full disk with a high I/O load. Even with evening out, it seems that at least part of the disk can fail before the rest.

Do traditional drives fail if the same sector is written to over and over again as well?

Then don't fill the drive (4, Insightful)

tepples (727027) | more than 6 years ago | (#21822320)

This is always claimed as the solution, "evening out" writes. But I think the question of how long the drive will last is still relevant; all it takes is a mostly full disk with a high I/O load.
Easy: don't let the drive become mostly full. This means heavy-duty drives will be a 64 GB chip reformatted for 48 GB with the rest designated as spare sectors for wear leveling, but the power consumption and seeking speed benefits can still make it worthwhile.
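
A rough sketch of that trade-off in Python (the 64/48 GB split and the 100,000-cycle endurance are illustrative assumptions, not vendor numbers):

    # Over-provisioning: with ideal wear leveling, endurance scales with the RAW
    # flash on the drive, not the smaller formatted size the OS sees.
    raw_gb = 64            # physical flash (assumed)
    formatted_gb = 48      # capacity exposed to the OS (assumed)
    cycles_per_cell = 100_000

    spare_fraction = 1 - formatted_gb / raw_gb
    lifetime_writes_pb = raw_gb * cycles_per_cell / 1_000_000
    print(f"{spare_fraction:.0%} spare area, ~{lifetime_writes_pb:.1f} PB writable")
    # -> 25% spare area, ~6.4 PB writable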

Re:Longevity of NAND flash (3, Informative)

vertinox (846076) | more than 6 years ago | (#21822556)

Do traditional drives fail if the same sector is written to over and over again as well?

No, but over time they'll fail at reading or writing, regardless of whether you're writing or just reading, simply because the drive is moving. Even if you cool your standard drive, it could eventually fail just because it was left on for 10 years (since an active drive is constantly spinning).

Now it's not guaranteed to fail, but the chance of failure for a standard HDD that you only read from and never write to is far greater than for an SSD that you put files on once and never write to again.

I think SSDs shine for archival uses, where you don't plan on trashing and rewriting data often: image collections, movies, and MP3s. That said, swap disks, scratch disks, and cache directories would logically still perform better on your spinning-platter drives, and if such a drive goes belly up you haven't lost much.

Re:Longevity of NAND flash (3, Insightful)

s_p_oneil (795792) | more than 6 years ago | (#21822614)

"all it takes is a mostly full disk, which has a high I/O load"

It is a relevant question, but this wouldn't kill your hard drive; it would simply reduce the amount of free disk space. And it's not difficult to imagine a file system smart enough to move files around when this happens. When a sector gets written to too many times, it can simply find a really old file and move it onto that sector, freeing up one of the rarely used sectors of the drive. With the increased performance of SSD, you probably wouldn't even notice it.
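
A toy sketch of that relocation idea, assuming the controller keeps a per-sector write counter (purely illustrative; real drives don't document their algorithms):

    # Toy static wear leveling: park rarely-changed ("cold") data on the most-worn
    # sector, freeing a barely-worn sector for future heavy writing.
    write_counts = [95_000, 120, 88, 99_500]        # erase count per sector
    data = ["swap", "os-image", "photos", "cache"]  # payload per sector

    def rebalance():
        by_wear = sorted(range(len(write_counts)), key=write_counts.__getitem__)
        least_worn, most_worn = by_wear[0], by_wear[-1]
        data[least_worn], data[most_worn] = data[most_worn], data[least_worn]

    rebalance()
    print(data)  # the old 'photos' now sit on the worn sector; the fresh
                 # sector absorbs the churn of 'cache'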

Aside from the rewrite issue, flash memory drives should be WAY more reliable than a mechanical HD. It should never just completely die or start getting bad sectors so fast that you don't have time to retrieve your data. It should also be a lot easier to replace when it starts to degrade. It shouldn't be as susceptible to damage when you drop it from a height of 3-5 feet, or due to heat, cold, vibration, dust, humidity, etc. I'm not sure whether a magnetic field could erase it like a hard drive, but if not, that's another plus for SSD. I imagine SSDs are more susceptible to static electricity, but so is almost everything else plugged into your motherboard, so I'm not sure that counts as a minus.

I'm sure if you ever tried an SSD on a laptop, you'd never want to go back to an old HD. The improved performance and battery life would make going back to an old laptop HD seem like going from broadband back to an old 56K modem.

This has been proven. (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21821954)

Load distribution has long been an issue [contactlog.net] with solid state drives.

Parent (-1, Minicity) (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21822374)

MiniShitty sites are off-topic in the vast majority of Slashdot articles.

Re:Longevity of NAND flash (1, Informative)

Anonymous Coward | more than 6 years ago | (#21821670)

And on modern flash devices, writes are automatically distributed over the entire filesystem so that particular areas don't get hammered and worn out. With a large flash device and a reasonable amount of free space, the time taken to reach 100,000 writes to a particular bit will be pretty long: as long as or longer than a conventional hard drive's MTBF, according to various sources I've read (unfortunately, I don't have the links to hand).

The article (or the manufacturer?) is misleading though - figures of 100MB/s read and 80MB/s write are quoted, but the drive is benchmarked at about 25MB/s...

Re:Longevity of NAND flash (5, Informative)

goofy183 (451746) | more than 6 years ago | (#21821800)

Will this ever die? The write cycle counts in modern flash are in the millions now. Doing the math, you very easily get 20+ years before write-cycle wear is a concern: http://www.storagesearch.com/ssdmyths-endurance.html [storagesearch.com]

How many heavily used spinning drives do you know that last even 10+ years?

Re:Longevity of NAND flash (4, Informative)

baywulf (214371) | more than 6 years ago | (#21821882)

Actually, the endurance of NAND has been going down over the years as manufacturers switched to smaller cell geometry, larger capacity, and MLC technology. Some parts are as low as 5,000-cycle endurance. MLC (multi-level cell) NAND also tends to be much slower than SLC (single-level cell) NAND. Most SLC NAND has around 50K or 100K endurance.

Re:Longevity of NAND flash (2, Informative)

theoverlay (1208084) | more than 6 years ago | (#21822282)

With 3-bit and even 4-bit MLC NAND around the corner, we should see faster controllers that will make these drives more attractive and larger. There are even some hybrid controllers that allow multiple NAND types (MLC and SLC), and even NOR, in the same application. One of these is Samsung's Flex-OneNAND. A good site for more information is http://infiniteadmin.com/ [infiniteadmin.com]

Re:Longevity of NAND flash (1)

orclevegam (940336) | more than 6 years ago | (#21821936)

Thank you for posting this. This is a really great link and is going in my reference documents bookmarks. I'd call for others to mod you up, but they already have.

Re:Longevity of NAND flash (4, Interesting)

ComputerSlicer23 (516509) | more than 6 years ago | (#21822066)

You ever actually done this? I work on embedded systems that use flash drives... Even with wear leveling, we've had failures. It's lots of fun when your 512MB flash isn't 512MB and will suddenly lose ~41MB. As a workaround, we've had to start partitioning with extra space left unused at the end of the disk. This isn't even a heavy-workload system.

Some friends of mine at another company, who were using them in an I/O-laden system to replace laptop drives and make the machines lower-power and more reliable, can blow out a flash drive in about 4 weeks.

Kirby

Re:Longevity of NAND flash (1)

MyNymWasTaken (879908) | more than 6 years ago | (#21822126)

You've never had spinning platter hard drives fail on you?

Re:Longevity of NAND flash (5, Interesting)

ComputerSlicer23 (516509) | more than 6 years ago | (#21822392)

Yes I have. However, I've never had one magically get smaller on me in such a way that fsck decides you're done fixing the filesystem. With SSDs, YES, I've had exactly that happen to me.

In my life, I've lost a total of about 42KB to spinning media as completely unrecoverable (yes, I mean that number literally). I use RAID extensively; I was the DBA/SA/developer at a place that had ~10TB of disk online for 5 years. In all that time, 42KB is all I lost. Oh, and that was in the off-line, tertiary backup of the production database (it was one of 5 copies that could be used as a starting point for recovery; we also had the redo logs for 5 days, each DB being a snapshot from one of the previous 5 days). It was stored on bleeding-edge IDE drives put in a RAID 5 array. We used it as a cheap staging area before pushing the data over FireWire/USB to a removable drive that an officer of the company took home as part of the disaster recovery system (it had only the most recent DB and redo logs). The guy didn't RMA the hot spare, and we had two drives fail in about 3 days while the hot spare was waiting for the RMA paperwork to be filled out. In that one particular case, using ddrescue, I recovered all of the data off the RAID 5 array but 42KB (even though it was an ext3 filesystem on LVM, on a RAID 5 array, which made the recovery even more complex). Every other bit and byte of data in my life from spinning media that I cared about, I've recovered (I've had a number of drives die with data I didn't care about, but could have recovered from if need be). Trust me, I know about reliability, backups, and how to manage media to ensure that failure doesn't happen. I know about the failure modes of drives. I've hot-swapped my fair share of drives and done the RMA paperwork. I've been in charge of drives where losing any one of the ~200 of them would have cost 10 times as much as I made in a year if I couldn't reproduce the data on it within hours.

If it had been worth $10K, I'd have sent off the drive to get that 42KB of data recovered. But it wasn't. The failure modes of spinning media are well understood. People know exactly how to do things like erase drives securely. People know who to call that has a clean room and can remove the magnetic media and put it under a microscope to get the data recovered. SSD isn't nearly as mature in that sense.

All of that is really to say: yes, I know something about disks and drives. My point is that SSDs aren't magic pixie dust in terms of reliability. I've had exactly what he's saying I shouldn't worry about happen to me on a regular basis. Enough that our engineering department has developed specific procedures to deal with them in the field. We've changed our release procedures to account for them. If you're going to use an SSD or flash drive, go kick the crap out of it. Don't believe on faith anything you read on Slashdot (including this post, which is anecdotal). We order lots of 5,000 flash disks, and you can bet that at least 100 of them have serious flaws when fielded. The ones the developers and testers use regularly develop problems in terms of months, not years. The manufacturer tells us, essentially, that it's not worth it to them to find those, so deal with it.

The whole point of replacing the laptop drive was to make the silly thing more reliable. But making it uber-reliable for 4 weeks until the wear leveling crapped out wasn't the idea.

Kirby

Re:Longevity of NAND flash (2, Insightful)

zeet (70981) | more than 6 years ago | (#21822504)

So you're saying that you lost 42KB of data you did care about, and some other unnamed amount of data that you lost but didn't care about? That seems a bit disingenuous. Even if you could have recovered the other data, since you didn't try, it wasn't recovered.

Re:Longevity of NAND flash (3, Insightful)

TooMuchToDo (882796) | more than 6 years ago | (#21822534)

I can understand your reluctance to trust flash media. Indeed, it hasn't been proven like spinning media has. Let's take another example: an in-car radio. I want a 100GB hard drive in my car, solid state, that is for all intents and purposes write-once. I should be able to dump tens of GB of MP3s onto it, with the index stored on a replaceable CF card (as the index would be changed often). But why would I remove music from the drive? I can just add more music.

For the above example, a flash drive works very well. If you need the benefits of flash storage (versus spinning media), you should be prepared to engineer around the situation. Run temporary data out of RAM with battery backup, and only commit data to flash to survive reboots and power outages.

Re:Longevity of NAND flash (1)

dunkers (845588) | more than 6 years ago | (#21822094)

He makes the mistake of assuming each cycle is a complete erase of the full disk. In reality, that sort of runaway process would fill the disk and then start erasing small parts to make room for more data. If 1K of space is freed each time the disk is full, then the actual cycle would be 1 x 64GB plus 2M x 1K.

Plug /those/ figures in and it turns out the disk will be trash within 20 minutes.
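
Plugging the parent's own figures into a quick sanity check (the 80MB/sec write speed is the article's Samsung number; the 2-million-cycle block endurance is the "2M" assumed above):

    # Worst case with NO wear leveling: fill a 64GB disk once, then rewrite the
    # same 1KB hole until that one block's assumed endurance is exhausted.
    disk_mb = 64 * 1024
    write_mb_s = 80
    block_cycles = 2_000_000
    hole_mb = 1 / 1024          # the 1KB freed and rewritten each cycle

    seconds = disk_mb / write_mb_s + block_cycles * hole_mb / write_mb_s
    print(f"~{seconds / 60:.0f} minutes")  # ~14 minutes, the same ballpark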

Re:Longevity of NAND flash (1)

Lumpy (12016) | more than 6 years ago | (#21822104)


How many heavily used spinning drives do you know that last even 10+ years?


I have at least 15 of them doing that right now. My last employer changed out the SCSI arrays in a couple of PowerVaults in 2005. I picked the drives out of the trash and have been using them in a PowerVault I got off eBay for $25.00; the drives have been spinning for over 10 years now.

I have had only 1 drive fail out of the "untrustworthy" ones I got out of the trash.

SCSI U160 drives are incredibly robust, not like the crap they have made over the past 5 years. Yes, they are only 32 gig each, but they work, and work for over 10 years.

Re:Longevity of NAND flash (2, Interesting)

Amouth (879122) | more than 6 years ago | (#21822512)

I know what you mean. The desktop I am using right now has an IBM 36GB SCSI drive that is pushing 9 years as we speak. Wonderful drives; they truly just don't make them like they used to. On the other hand, just for the sake of it (and its age), I have a Seagate 9.1GB SCSI drive that takes up three 5.25" bays; it was one of the first 9GB drives on the market. Still running, on a dual Pentium Pro box running Slackware, it keeps right on chugging away and keeps spam out of my mailbox.

On the other hand, I have 4 WD 250GB IDE drives on my desk that give SMART errors and are just damn flaky... What can I say, you get what you pay for.

Swap partition/file (1)

Jamu (852752) | more than 6 years ago | (#21822118)

Is it worth getting a small one and using it for swap? In other words: Is it faster than a normal HDD? And how long would it last (with this usage)?

Re:Swap partition/file (1)

fm6 (162816) | more than 6 years ago | (#21822482)

I'd guess it'd be faster — but not as fast as increasing your RAM so you don't swap as much.

Re:Swap partition/file (2, Insightful)

johannesg (664142) | more than 6 years ago | (#21822522)

Why not just buy enough RAM? It is cheaper than using a solid-state disk, and if all you use it for is swap anyway, it really doesn't matter whether it's volatile or not...

Re:Longevity of NAND flash (0)

Anonymous Coward | more than 6 years ago | (#21822368)

The math on that page is wrong, because erasing data on NAND flash can only be done in blocks, which are relatively large (often several to hundreds of kB); if you want to reset one bit, the entire block must be erased. IIRC, this means that in the very worst case you can destroy the flash in days, not years as that link claims.
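
The amplification being described, in a couple of lines (the 128KB erase block is an assumed size; real parts range from a few kB to hundreds):

    # Erase granularity: resetting a single bit forces an erase of its whole
    # block, so tiny rewrites burn endurance far faster than the naive math says.
    erase_block_bytes = 128 * 1024   # assumed erase-block size
    rewrite_bytes = 1                # one changed byte
    print(f"worst-case wear amplification: {erase_block_bytes // rewrite_bytes:,}x")
    # -> worst-case wear amplification: 131,072x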

Re:Longevity of NAND flash (0)

Anonymous Coward | more than 6 years ago | (#21822520)

Wrong. The drive controller keeps track of how many times a block has been erased and swaps it with a low-usage page when the count gets too high. This is how they do wear leveling.

Re:Longevity of NAND flash (1)

Bassman59 (519820) | more than 6 years ago | (#21821934)

NAND flash deteriorates with use. When used in high-I/O situations, like in a hard drive, how long will it be able to work correctly? If I recall correctly, NAND blocks are guaranteed on the order of 100,000 writes.

Do a web search for "flash wear leveling."

-a

Re:Longevity of NAND flash (4, Interesting)

Khyber (864651) | more than 6 years ago | (#21822064)

And this is why we're moving away from NAND, so get that damned term out of your head already! OUM/OVM is coming; it uses a nearly identical manufacturing process to CMOS (it's the same material found in RW optical media, except you use electricity instead of a laser to change its state), and it has FAR more read/write cycles than anything NAND could ever hope to achieve: in the range of 10^8, as opposed to NAND's 10^5-10^6.

With low enough cost (2)

jafiwam (310805) | more than 6 years ago | (#21821632)

I could do with a 64 GB primary drive on my gaming machine.

Disk performance is the main roadblock to getting onto the server first, which gives a huge advantage over slower-loading players.

Yes, I am a LPB. Sue* me.

* By "sue" I mean attempt to frag.

Re:With low enough cost (0)

Anonymous Coward | more than 6 years ago | (#21821792)

Excuse me, but where do you live?
Sincerly,
Duke Nukem

Reliability (4, Insightful)

RonnyJ (651856) | more than 6 years ago | (#21821638)

It's not mentioned in the summary, but added reliability might make these types of disks more appealing too:

The no-moving-parts characteristic is, in part, what protects your data longer, since accidentally bumping your laptop won't scramble your stored files. Samsung says the drive can withstand an operating shock of 1,500Gs at .5 milliseconds (versus 300Gs at 2 milliseconds for a traditional hard drive). The drive is heartier in one other important way: mean time between failure is rated at over 2 million hours, versus under 500,000 hours for the company's other drives.

Re:Reliability (1)

ElizabethGreene (1185405) | more than 6 years ago | (#21821684)

They will sell these by the case to the Antarctic research station and mountain climbers. They go through normal hard drives like Pez because of the cold and low air density. How is the power consumption compared to rotating drives? -ellie

Re:Reliability (0)

Anonymous Coward | more than 6 years ago | (#21821712)

"How is the power consumption compared to rotating drives? -ellie"

Lower. RTFA, ellie...

Re:Reliability (2, Funny)

ColdWetDog (752185) | more than 6 years ago | (#21822092)

They will sell these by the case to the Antarctic research station and mountain climbers. They go through normal hard drives like Pez because of the cold and low air density. How is the power consumption compared to rotating drives?

I've been climbing mountains for some 30 years. I've never thought to bring a hard drive with me. I've dragged around quite a passel of other odd and heavy things, but I appear to be missing something again ...

and ... (0, Troll)

Billly Gates (198444) | more than 6 years ago | (#21821642)

... about 2 megs per second under Vista.

With 1 million writes before the RAM goes bad, I would be worried about reliability. I hope the firmware at least maps out bad sectors.

Hmm (4, Interesting)

orclevegam (940336) | more than 6 years ago | (#21821644)

I'm really interested in SSDs as high-performance replacements (particularly for holding OS images, where boot times should be nicely reduced), but I've got to wonder how the mean time to failure of one of these compares to a traditional magnetic disk. I know they use wear leveling, but that just means everything will have a tendency to fail around the same time later, rather than a spot or two now and then. Anyone have any actual reports on these? I can usually make it 2 or 3 years before I start to see errors crop up on magnetic disks (sometimes more or less, depending on how much thrashing the disk is subjected to). Might it be cheaper to simply buy a decent-sized CF or SD card and an IDE/SATA adapter rather than paying for an actual disk, or is there some inherent advantage to one of these you'd be missing out on?

Re:Hmm (1)

orclevegam (940336) | more than 6 years ago | (#21821672)

Before someone points it out, yes I know the article quotes some mean time till failure for the drive, but I'm wondering if anyone has any actual experience with these, not what the marketing department says the performance should be.

Re:Hmm (1)

Bandman (86149) | more than 6 years ago | (#21821734)

I don't think they've been around for the timespan they say they're good for yet, have they?

Re:Hmm (1)

Sibko (1036168) | more than 6 years ago | (#21821772)

If you had RTFA, you'd probably have noticed it said this:

"Samsung says the drive can withstand an operating shock of 1,500Gs at .5 miliseconds (versus 300Gs at 2 miliseconds for a traditional hard drive). The drive is heartier in one other important way: Mean time between failure is rated at over 2 million hours"

Re:Hmm (1)

Toveling (834894) | more than 6 years ago | (#21821826)

CompactFlash cards aren't going to be anywhere near as fast as this, even the high-quality cards. Most top out at 20MB/s.

Re:Hmm (1)

orclevegam (940336) | more than 6 years ago | (#21822078)

Most top out at 20MB/s.
Well, if the metrics quoted in the article are to be believed, the actual performance of the drive they tested averaged 25MB/s with burst speeds of 30MB/s, rather than the manufacturer-quoted 100MB/s; so if you can actually get 20MB/s out of a CF card, that's not much of a performance hit. Happen to know the actual mean and burst read speeds of a traditional HD? (I'm more concerned with reading than with writing; I'd be using the CF drive mostly for app/OS storage, with media and data on a traditional HD.)

It gets better (3, Interesting)

WindBourne (631190) | more than 6 years ago | (#21821854)

My home server has a terabyte of disk, but I added a CF-IDE adapter card along with a 4GB CF card. I loaded the Linux kernel on it and then mapped a few dirs to partitions on the HD. After about 6 months of this, I noticed that the temp in the case dropped; it appears to be about 5-10C lower (depending on load). The disks spend the bulk of their time sleeping. I have been pleased enough with this server that I am going to do the same to my small shoebox computer: rip out the HD, add CF for /, and then mount my home dir from the server.

Re:It gets better (0)

Anonymous Coward | more than 6 years ago | (#21821960)

Welcome to winter.

Re:It gets better (1)

orclevegam (940336) | more than 6 years ago | (#21822012)

That's almost the exact same setup I had been considering. If I can scrape together the money, I was also looking at maybe getting a small-form-factor box to set up next to one of my TVs: a CF card for the OS/apps, either an internal or external large HD for media storage, and MythTV loaded on it. I could then use the desktop as a client to the TV box and watch TV at my desk.

Is it just me? (3, Informative)

crymeph0 (682581) | more than 6 years ago | (#21821666)

Or does the linked article say nothing about TB-sized drives, only the flash drive?

Re:Is it just me? (1)

bcrowell (177657) | more than 6 years ago | (#21822308)

No, it's not just you. The /. summary seems to bear little resemblance to the actual article. There's also no mention of the pricing or availability of the SSD, but from a quick check on frys.com, it looks like it's not available yet; what is available is the 32GB size, which will set you back about $350.

Number of writes? (2, Interesting)

QuietLagoon (813062) | more than 6 years ago | (#21821714)

With the exception of capacity, the solid state disk drives appear to beat spinning disk in every category,

Why is the ultimate number of writes never taken into account in these comparison reviews? Why are solid state drives tested so that their weaknesses are not probed?

Re:Number of writes? (4, Informative)

Planesdragon (210349) | more than 6 years ago | (#21821814)

Why is the ultimate number of writes never taken into account in these comparison reviews? Why are solid state drives tested so that their weaknesses are not probed?
Because it's a measure best reflected by Baysean Data, and they don't have enough time to test them.

If you want, buy an HDD and a flash drive of the same cost, hook them up to a program that runs each at equal data-transfer rates, and see how much data you can read and write to each before they fail. Report back to us in the six months it'll take you.

Oh, and you need to do the trial over a wide sample, so get, oh, at least ten of each.

Bayesian or Monte Carlo? (1)

mosel-saar-ruwer (732341) | more than 6 years ago | (#21822352)


Because it's a measure best reflected by Bayesian data, and they don't have enough time to test them.

What's Bayesian Data? [And yes, I am too lazy to Google it.]

Did you mean Monte Carlo?

Or maybe Latin Squares?

MTBF/Write Cycles (5, Interesting)

Lookin4Trouble (1112649) | more than 6 years ago | (#21821724)

Since I've seen this plenty of times, I'll address it.

Write Cycles: Even at the lowest estimate, 100,000 write cycles to failure

Meaning on a 32GB drive, before you start seeing failures, you would have to (thanks to wear leveling) write 32*100,000 GB, or 3.2 petabytes.

At a 60MB/sec write speed, you would need to write (and never, ever read) for 3,200,000,000/60, or ~53 million seconds straight.

53 million divided by 86,400 means you would need to be writing (and never, ever reading) for ~617 days straight (that's roughly 20 months of just writing: no reading, no downtime, etc.).

So... the sky is not falling; these drives are slated to last longer than I've ever gotten a traditional drive to last in my laptop(s).

Almost forgot to mention: standard NAND of late has been more in the 500K-1M write-cycle range. 100K was earlier technology, so multiply the numbers accordingly.
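
The same napkin math in script form, using the parent's 100,000-cycle and 60MB/sec figures (decimal gigabytes, to match the numbers above):

    # Ideal wear leveling, write-only workload: time to wear out 32GB of flash.
    capacity_gb = 32
    cycles = 100_000
    write_mb_s = 60

    total_mb = capacity_gb * 1000 * cycles   # 3.2 billion MB, i.e. 3.2 petabytes
    seconds = total_mb / write_mb_s          # ~53 million seconds
    print(f"~{seconds / 86_400:.0f} days of non-stop writing")  # ~617 days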

Re:MTBF/Write Cycles (1)

jafiwam (310805) | more than 6 years ago | (#21821782)

What happens when you run the same napkin math on a drive that has Windows, Office, and two big games on it?

That leaves you about 10 GB of space to use for writes for swap, temp files, etc.

Re:MTBF/Write Cycles (3, Interesting)

goofy183 (451746) | more than 6 years ago | (#21821830)

Except most wear-leveling MOVES data around on the drive. Since random access is 'free', shuffling mainly read-only data around on the disk periodically is perfectly reasonable.

Re:MTBF/Write Cycles (1)

renoX (11677) | more than 6 years ago | (#21822050)

Please mod parent up! I'm sick of all these posts (modded up!!) whose authors think that writing to a mostly full disk removes the effectiveness of wear leveling; there is no reason why that should be the case.

Re:MTBF/Write Cycles (2, Insightful)

orclevegam (940336) | more than 6 years ago | (#21822564)

Well, it may not entirely negate the effectiveness of wear leveling, but it definitely makes the calculations a bit more complicated. Let's look at the theoretical example of a 32GB disk with 31GB used and a 512MB write about to happen. The drive decides that the free space already has too many writes, so it needs to write the data to a used section of the disk instead. It finds a 512MB chunk of data with the lowest write count and copies that to the free space (which has a high write count, further increasing it). It then writes the new data over the old chunk's location (once more increasing that 512MB block's write count). Now, there are only two issues here. First, even though you've moved data you hope is semi-permanent (since it had low write counts) to a high-count section, there's no guarantee the user won't turn around and delete those files in the next couple of minutes, making it pointless to have copied the data there. Second, to write 512MB of data, you've just had to perform two 512MB writes, using twice the cells to store the actual content. Yes, following this strategy should increase the time until you see a failure, but it also invalidates the simplistic calculation used to come up with the 50+-years-till-failure figures some people have proposed. Now, I'm not saying the lifetime won't still be long, just that it's not as cut and dried as it might seem.

Re:MTBF/Write Cycles (1)

Skapare (16644) | more than 6 years ago | (#21822590)

If a block on the disk has ever been written, the flash device has to keep it; it has no idea that no file inodes point to it anymore. When a write is done, it picks a block from the pool, writes there, and juggles its own mapping. But I am curious about a flash device that will, on its own, just juggle things around. That could avoid the data-stagnation problem, where any data that never gets rewritten just keeps the zones of writing that much smaller. But it can also increase the number of writes being done. It would have to know that certain blocks don't get written frequently for that to be effective.
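
A minimal sketch of the mapping layer being described (illustrative only; real controllers are far more elaborate and keep these tables in NAND metadata):

    # Toy flash translation layer: logical sectors map to physical blocks, and an
    # overwrite always lands on a fresh block from the pool, never in place.
    mapping = {}                 # logical sector -> physical block
    free_pool = list(range(8))   # unmapped physical blocks

    def write(sector, _data):
        new_block = free_pool.pop(0)
        old_block = mapping.get(sector)
        mapping[sector] = new_block
        if old_block is not None:
            free_pool.append(old_block)  # the stale copy is recycled; but blocks
            # holding deleted *files* never return, since the drive can't see
            # the filesystem (exactly the stagnation problem described above).

    write(0, b"kernel"); write(0, b"kernel-v2")
    print(mapping, free_pool)    # {0: 1} [2, 3, 4, 5, 6, 7, 0]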

Re:MTBF/Write Cycles (1)

TooMuchToDo (882796) | more than 6 years ago | (#21822576)

As long as the driver is smart enough to disable a paging file, not that much writing is done to the hard drive on a Windows box (at least by the OS). When you do updates, of course, writing is done, and when you save files, writing is done. But if you're just surfing the web and have 2-4GB of RAM, disable caching and the browser shouldn't write to the disk. If you're running Office or games, save your work or savefile over WebDAV to a remote provider, or use Amazon S3 to save those small amounts of data.

Re:MTBF/Write Cycles (2, Insightful)

everphilski (877346) | more than 6 years ago | (#21821796)

Meaning on a 32GB drive, before you start seeing failures, you would have to (thanks to wear leveling) write 32*100,000 GB, or 3.2 petabytes.

NOT true, unless the drive is completely empty! If you have 31 gigs of data on that drive which you were using as long-term storage, then you'd only have to write (32-31)*100,000 GB of data before failure. You obviously wouldn't be overwriting any data already stored on the drive ...

Re:MTBF/Write Cycles (1)

baywulf (214371) | more than 6 years ago | (#21821920)

Not true if the drive uses static wear leveling algorithms. These algorithms will swap data between low use and high use NAND regions periodically.

Re:MTBF/Write Cycles (0)

Anonymous Coward | more than 6 years ago | (#21821958)

This is obviously true only if your mind can't conceptualize a swap operation.

Re:MTBF/Write Cycles (1)

vadim_t (324782) | more than 6 years ago | (#21821980)

It's still not the same failure mode though.

On a magnetic hard disk, once you get a failure you can expect the thing to die completely soon, because failures tend to be mechanical. Once there's scraped magnetic material bouncing around on the inside it's only going to get worse, possibly very fast.

On an SSD, what should happen is that sectors die in a predictable fashion, and they die due to writes, so you can still read and recover your data.

Re:MTBF/Write Cycles (5, Informative)

NMerriam (15122) | more than 6 years ago | (#21821988)

You obviously wouldn't be overwriting any data already stored on the drive


No, but the wear-leveling routines in the drive will happily move around your existing data so that rarely written sectors are available for heavy writing operations.

Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.

Re:MTBF/Write Cycles (2, Interesting)

jafiwam (310805) | more than 6 years ago | (#21822200)

No, but the wear-leveling routines in the drive will happily move around your existing data so that rarely written sectors are available for heavy writing operations.

Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.
Actually, I think the issue is that there are differences among the drives that don't come up in the articles themselves, so that detail gets left out every time.

So, it's inevitable that someone who doesn't know this particular detail, but is already familiar with how platter based magnetic media work will come up with that issue in pretty much every discussion.

The problem is it's new. That's all. (Or, perhaps that techno-journalists write about stuff they don't know enough about.)

Re:MTBF/Write Cycles (0)

Anonymous Coward | more than 6 years ago | (#21822062)

That's exactly what I was thinking. Sadly, the wear-leveling mechanisms aren't documented in any meaningful way whatsoever. The most important question is how the drive can determine which parts of the disk are "empty". If it cannot, then after filling the drive just once to its maximum capacity, the number of write cycles left will be only (combined size of hidden spare/wear-leveling sectors)*100,000. Will that also happen if you use an "unsupported" filesystem, and, if there is such a thing, which filesystems are supported?

Re:MTBF/Write Cycles (1)

NormalVisual (565491) | more than 6 years ago | (#21821902)

There's a serious flaw in your analysis - you're assuming a totally empty drive. You're going to be wearing the drive more and more as it gets full, and the combination of an almost-full drive and a busy swap partition might get interesting very quickly.

I agree that on the whole, flash is a lot more durable now than it used to be, but I'm not quite convinced that these will be suitable as a general-purpose replacement for magnetic disks. Aside from the NAND longevity issue, I'd be concerned about the ability to recover data in case of a controller failure or other hardware-related issue. Mag disk is relatively easy to deal with in that regard.

Re:MTBF/Write Cycles (1)

Zerth (26112) | more than 6 years ago | (#21822120)

Smart wear leveling enables the drive to swap files on sectors with many writes left (i.e., read-only or rarely changed files) with those on sectors with few writes left (swap, savegames, etc.).

So performance isn't that far from a nearly empty drive.

Although I do agree, I'd be concerned about recovering from controller failure more than with a magnetic drive.

Re:MTBF/Write Cycles (1)

pushing-robot (1037830) | more than 6 years ago | (#21822506)

Why would anyone use flash for virtual memory? You can get 4GB of DDR2 SDRAM for seventy bucks [frys.com], or two gigs for less than half that [newegg.com]. Notebook SO-DIMM prices are about the same [frys.com].

With DDR2 prices so cheap, I don't see why anyone (with a modern enough system to use DDR2) is swapping data to disk regularly. Certainly not anyone who can afford a SSD.

What about real performance (3, Interesting)

B5_geek (638928) | more than 6 years ago | (#21821764)

How do these SSDs compare to a real high-end disk like a 15K rpm Ultra320 SCSI drive?
Of course SSD will beat an IDE disk hands down, but that is not why you buy IDE drives.
I have always used SCSI for my OS/system and IDE for my storage; this combination (in addition to SMP rigs when available) has allowed me to outlive 3 generations of processors, thereby saving me money on upgrades.

SSDs seem best marketed to 'gamers', so why are they always connected to a very limited I/O bus?

Re:What about real performance (1)

jd (1658) | more than 6 years ago | (#21822000)

I've seen SSDs used to cache access to a traditional hard drive, and have even seen traditional hard drives used to cache access to optical mass storage. So long as your disk usage is "typical" (lots of access to a limited range of programs and libraries, infrequent access to the full range), it makes sense to layer the access in this way. You don't then have to care about limited space on the SSDs, you don't have to worry about MTBF because it's very unlikely all layers will fail at the same time (caching means that you've got spare copies of everything you frequently use), and you're much less subject to the limitations of any of the devices.

As for the bus, I would have to agree that most flash-based and other popular SSD solutions are far slower than they need to be. (Bear in mind that PCI Express 2.0 can deliver 5 gigabits per second per lane, across 32 lanes. Yes, not many home PCs have PCI Express 2.0, but even conventional PCI technology is capable of delivering more than adequate bandwidth.) To make the SSD faster, it could also be cached using conventional RAM. Ideally, it wouldn't be a write-through cache, but a battery-backed standard cache that flushes as needed or on external power failure, so as to minimize writes.
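
A toy version of that battery-backed cache idea, just to show why it cuts flash writes (the flush policy and numbers are invented for the example):

    # Write-back cache in front of flash: absorb rewrites in RAM and flush on
    # demand, so N logical writes to a hot sector cost one physical flash write.
    cache, dirty, flash = {}, set(), {}

    def write(sector, data):
        cache[sector] = data
        dirty.add(sector)          # nothing touches flash yet

    def flush():                   # called as needed, or on external power loss
        for s in dirty:
            flash[s] = cache[s]
        dirty.clear()

    for i in range(1000):
        write(7, f"revision {i}")  # 1000 logical writes...
    flush()
    print(len(flash))              # ...one physical write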

As with most technologies, if there aren't good solutions it is not the fault of the technology, it is the lack of imagination by those implementing it.

Re:What about real performance (4, Insightful)

KonoWatakushi (910213) | more than 6 years ago | (#21822152)

No need to compare with 15k rpm drives; flash disks lose spectacularly to low rpm laptop drives for random write performance. For obvious reasons though, no one ever tests random write performance. Manufacturers also rarely report random write IOPS.

Flash is great, if your disk is basically read-only.

Not the jump I was hoping for (1, Interesting)

Badmovies (182275) | more than 6 years ago | (#21821802)

The new solid state drives did beat the older drives, but I can honestly say I was hoping for a bigger difference between the two in terms of performance. Not just "beating" the older technology, but beating it by an order of magnitude.

Looking at it, the biggest benefit I can see is that the solid state drives should be better at withstanding shock and vibration - which normal hard drives hate. If they cannot improve the performance (which will still be useful for gamers, servers, and other speed freak things) then reliability and security of data is the selling point. I can see rugged notebooks using these.

Re:Not the jump I was hoping for (1)

Ant P. (974313) | more than 6 years ago | (#21821948)

Nope, the biggest benefit is that they don't have a 5400rpm motor spinning up and down every time you want to get data out. That and the screen are what drain laptop batteries; the CPU these days is relatively efficient.

Re:Not the jump I was hoping for (1)

Badmovies (182275) | more than 6 years ago | (#21822214)

That's a very good point and definitely a factor for laptops. Still, one of the "golden bullets" I want now is a blazing fast way to retrieve data from the storage.

Well, that and size. Give me the power of a laptop in something the size of a cell phone, with a projected screen in midair, with a way of registering me typing commands (without a keyboard) and I will be happy with my computer - for a year or two.

Re:Not the jump I was hoping for (2, Informative)

TimothyJones (954047) | more than 6 years ago | (#21821962)

That's because those are not really performance SSDs. Random access time is much improved, but the transfer rate is way below a good HD's. MTron has some high-performance drives that pulverize everything else, but they cost an arm, a leg, and probably one of your kidneys. The only real benefits of these Samsung SSDs are much lower power consumption, no heat, and no noise. On a laptop this is still very good news.

Re:Not the jump I was hoping for (1)

baywulf (214371) | more than 6 years ago | (#21822060)

It would not be too hard to increase the sequential performance by striping data across more NAND chips. Random read performance is also not too hard. The hard part is always random write performance, because if you want to modify a sector of data, all the remaining data in its block must be moved to a new place. Copying the old data takes a lot of time; tricks can be used to optimize some of it away, but for true random writes the performance will never be that good with current NAND limitations.
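
Roughly the cost being described, with assumed sizes (NAND page and erase-block sizes vary by generation):

    # A random small write on NAND: the rest of the erase block must be copied
    # to a fresh block before the old one is erased, dwarfing the useful write.
    erase_block_kb = 512   # assumed erase-block size
    page_kb = 4            # the sector actually modified
    copied_kb = erase_block_kb - page_kb
    print(f"a {page_kb}KB write drags along {copied_kb}KB of internal copying")
    # -> a 4KB write drags along 508KB of internal copying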

Re:Not the jump I was hoping for (1)

Skapare (16644) | more than 6 years ago | (#21822632)

They should be able to run several flash chips in parallel to increase the speed. Or maybe these drives already do this?

what about number of reads (1, Interesting)

josepha48 (13953) | more than 6 years ago | (#21821888)

SSDs have a limited lifetime in terms of the number of reads and writes. I would imagine that a 10-year-old hard drive that was used all day would last longer than an SSD. I would also imagine that someone who does a lot of compiling and disk writes would wear down the SSD and then have to throw it out and replace it. I know some devices have technology that spreads this out, but still. I think having the main OS on an SSD would be ideal, with the swap and scratch parts on a regular disk. You could make the OS read-only, so a hacker would be less likely to install a virus into the OS directly, and the OS could refuse to run software in the read-only space. While this would be limiting, it would solve some of the issues we have today with viruses and security.

Re:what about number of reads (1)

stewbacca (1033764) | more than 6 years ago | (#21822224)

I would imagine that a 10-year-old hard drive that was used all day would last longer than an SSD.
Considering that in over 25 years of computing I've never seen a hard drive last longer than 5 or so years, I'm just going to go ahead and bet that an SSD would be a better choice for me.

Where can I buy one? (1)

Sibko (1036168) | more than 6 years ago | (#21821894)

So... yeah. I want one, I'm sure there's more than a few other slashdotters out there who also want one. But none of Samsung's links helped me find a store that sells the 2.5 inch 64GB drives. Does anyone know where these are being sold?

Re:Where can I buy one? (2, Informative)

ricky-road-flats (770129) | more than 6 years ago | (#21822232)

Near me, this place [scan.co.uk] has a handful of different ones.

Re:Where can I buy one? (0)

Anonymous Coward | more than 6 years ago | (#21822608)

Hahahahahah... nice "consumer-friendly" prices.

No one's going to buy these fucking things until they drop to an affordable amount -- affordable means: prices competing with existing non-solid-state drives.

Until then, it's a pipe dream. Even Juniper doesn't use this kind of media in their M20s and M40s -- they use ATA disks (and should be ashamed of it).

real review here (0)

Anonymous Coward | more than 6 years ago | (#21821924)

This review compares several MTron SSDs to the Samsung and a SanDisk:

http://www.tabletpcreview.com/default.asp?newsID=1037 [tabletpcreview.com]

[Spoiler]: They beat the crap out of the Samsung SSD, are available now, and are already several hundred bucks below a grand.

Waiting for low-end drives (4, Insightful)

alegrepublic (83799) | more than 6 years ago | (#21821964)

I am still waiting for a reasonably priced low-end drive. An 8GB USB drive [buy.com] can be found for about $50. Packing four of them together and replacing the USB circuitry with SATA would make for a 32GB drive for $200. Granted, it may not be the fastest drive around, but sometimes speed is not the most important factor. 32GB is enough for installing any current OS and still having some room for personal files to carry along on a trip. So I think the current trend of providing high-end drives only is just an attempt to milk users to the maximum without much concern for what we actually need.

Speed for a mech. HD is burst, not track-to-track? (3, Interesting)

Futurepower(R) (558542) | more than 6 years ago | (#21822044)

Quote from the Computerworld article and the Slashdot summary:

"Samsung rates the drive with a read speed of 100MB/sec and write speed of 80 MB/sec, compared to 59MB/sec and 60MB/sec (respectively) for a traditional 2.5" hard drive."

The speed quoted for a mechanical hard drive is a burst speed, accurate for reading only one track, and doesn't include the time it takes for a conventional rotating hard drive to change tracks. Isn't that correct?
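
One way to see how much track-to-track (seek) time pulls real throughput below the quoted rate, with assumed drive parameters:

    # Effective throughput when every chunk read pays a seek first.
    sequential_mb_s = 60   # the quoted sustained figure
    seek_ms = 8            # assumed average seek + rotational latency
    chunk_mb = 64 / 1024   # 64KB read per seek

    per_chunk_s = seek_ms / 1000 + chunk_mb / sequential_mb_s
    print(f"~{chunk_mb / per_chunk_s:.1f} MB/s effective")  # ~6.9 MB/s vs 60 quoted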

Re:Speed for a mech. HD is burst, not track-to-tra (1)

0123456 (636235) | more than 6 years ago | (#21822106)

"The speed quoted for a mechanical hard drive is a burst speed, accurate for reading only one track, and doesn't include the time it takes for a conventional rotating hard drive to change tracks. Isn't that correct?"

Depends. My IDE drives seem to sustain 60-ish MB/second on a large contiguous file even across multiple tracks... but suck if the file is heavily fragmented.

Re:Speed for a mech. HD is burst, not track-to-tra (0)

Anonymous Coward | more than 6 years ago | (#21822396)

No, burst would be reading from the cache in the drive, which is typically done at near interface speed (several hundred megabytes/s).

Here is a 461GB one ... (1)

foobsr (693224) | more than 6 years ago | (#21822122)

... not affordable [linuxdevices.com], of course.

But that is what you get in the not so distant future:

* Performance:
o Access time -- 30 to 100 microseconds
o Burst transfer rate -- 300 MB/sec.
o Sustained transfer rate -- up to 100 MB/sec.
o I/O operations per second -- Up to 20,000
* Environmental specifications:
o Operating temperature 0-70 degrees C ("commercial" range), -40 to +85 degrees C ("industrial" range)
o Shock (operating) -- 1,250 G
o Vibration (operating) -- 16.4 G rms
* Reliability:
o MTBF -- 1.9 million hours, minimum
o Error correction -- corrects up to 9 random bit errors per 528-byte block
o Data integrity -- up to ten years
o Read endurance -- unlimited
* Physical specifications:
o Form-factor -- 2.5-inch HDD
o Dimensions -- 2.75 x 3.95 x 0.93 inches (69.85 x 100.45 x 23.55 mm) maximum, storage capacities 64 GB and below are 0.33 inches thick
o Weight -- 2.9 to 7.8 ounces (83 to 221 grams)


CC.


With the Exception of What??? (1)

Nom du Keyboard (633989) | more than 6 years ago | (#21822572)

With the exception of capacity, the solid state disk drives appear to beat spinning disk in every category,

Well excuse me, BUT, capacity is the single largest factor in my disk drive purchase decisions. I'll give away speed, power consumption, size, heat, noise, and even cost (everything but reliability) in favor of capacity. Even "slow" hard drives are quite fast historically speaking, and none of those other factors make up for running out of drive space.

And don't SSDs cost a lot more too? Capacity and cost: the two biggest factors to consider.

RAID: speed & reliability (0)

Anonymous Coward | more than 6 years ago | (#21822610)

This site details an early RAID experiment using 4 thumb drives:
http://www.bigbruin.com/reviews05/thumbraid_1 [bigbruin.com]

Hmmmm...thumb drives (USB flash drives) are:
1) hot-swappable...plug-and-play
2) inexpensive...widely available...falling in price
3) daisy-chain USB hardware is expandable

Seems like there should be an idea for a product in all this.
RAID could solve the reliability issues...have the OS
pop open a warning box:
        Thumb drive #4 has now become 'unreliable'.
        Its data has been copied to other drives.
        Please replace #4 with a new device.
