
Samsung SSD 840 EVO MSATA Tested

samzenpus posted about 5 months ago | from the looking-at-the-numbers dept.

Hardware 76

MojoKid (1002251) writes "Shortly after 2.5-inch versions of Samsung's SSD 840 EVO drives hit the market, the company prepared an array of mSATA drives featuring the same controller and NAND flash. The Samsung SSD 840 EVO mSATA series is essentially identical to its 2.5" counterparts, save for the much smaller form factor. Like the 2.5" drives, Samsung's mSATA 840 EVO series features an updated, triple-core Samsung MEX controller, which operates at 400MHz. The MEX controller has also been updated to support the SATA 3.1 spec, which incorporates a few new features, like support for queued TRIM commands. Along with the MEX controller, all of the Samsung 840 EVO mSATA drives feature LPDDR2-1066 DRAM cache memory: the 120GB drive sports 256MB of cache, the 250GB and 500GB drives have 512MB, and the 750GB and 1TB drives have 1GB. Performance-wise, the SSD 840 EVO mSATA series performs extremely well, whether using synthetic benchmarks, trace-based tests like PCMark, or highly-compressible or incompressible data."


It's all about the IOPS... (1)

Midnight_Falcon (2432802) | about 5 months ago | (#46615309)

From TFA:

4KB Random Read (QD1): Max. 10,000 IOPS
4KB Random Write (QD1): Max. 33,000 IOPS
4KB Random Read (QD32): Max. 98,000 IOPS (500GB/750GB/1TB), 97,000 IOPS (250GB), 94,000 IOPS (120GB)
4KB Random Write (QD32): Max. 90,000 IOPS (500GB/750GB/1TB), 66,000 IOPS (250GB), 35,000 IOPS (120GB)

Judging by this, the speed is about the same as other comparable SATA III SSDs, with a little bit of a boost but nothing dramatic.
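As a back-of-the-envelope check (my own arithmetic, not from TFA), those 4KB IOPS figures convert to throughput like so:

```python
# Rough conversion of 4KB random IOPS to MB/s throughput.
# IOPS numbers are the QD32/QD1 figures quoted above; the formula is generic.
BLOCK_SIZE = 4 * 1024  # 4 KB per operation

def iops_to_mbps(iops, block_size=BLOCK_SIZE):
    """Return throughput in MB/s for a given IOPS rate at a fixed block size."""
    return iops * block_size / 1_000_000

print(iops_to_mbps(98_000))  # 401.408 -- QD32 random reads approach the SATA III ceiling
print(iops_to_mbps(10_000))  # 40.96   -- QD1 random reads are an order of magnitude lower
```

Which is why the QD1 numbers, not the headline QD32 ones, dominate the feel of a single-user machine.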

yup (2, Insightful)

Anonymous Coward | about 5 months ago | (#46615387)

But it fits in an mSATA port, which means more compact notebooks (no need for room for a 2.5" drive), faster notebooks with big storage (mSATA SSD + 2.5" drive), or use as a BIOS-level accelerator with certain BIOSes that can only use the mSATA port for this purpose.

Re:yup (2)

ArcadeMan (2766669) | about 5 months ago | (#46615441)

Aren't mSATA drives half the size of a 2.5" drive? Put two of these in a laptop, in RAID 0.

Re:yup (1)

garrettg84 (1826802) | about 5 months ago | (#46615607)

Just bought two of them off of Amazon. Impatiently awaiting delivery on Monday/Tuesday to do exactly this.

Re:yup (0)

Anonymous Coward | about 5 months ago | (#46615947)

RAID 0, are you mad?

Sure, you get a performance boost, but my experience with Samsung SSDs has been poor, very poor. Please weigh the advantages of RAID 0 against the fact that a single error can be fatal.

Re:yup (1)

ArcadeMan (2766669) | about 5 months ago | (#46616477)

RAID 0 will nearly double your speed. A single error WILL be fatal.

Re:yup (1)

Charliemopps (1157495) | about 5 months ago | (#46617797)

It doubles your chance of failure as well as your speed. But in practical use, the speed won't quite be doubled... and double the chance of failure is still very low if this is just a gaming laptop or something you remote into work with. My primary computer runs RAID 0 and I've never had a failure. But I only keep Windows and games on that drive. I store all my documents, photos, music and movies on a 10TB RAID 1 array.

I once spent $250 on a 20MB hard drive, so I have a hard time complaining about the price of disk space nowadays.
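The "doubles your chance of failure" claim can be made concrete. Assuming independent drive failures (a simplification; real failures can be correlated), a sketch:

```python
def raid0_failure_prob(p, n=2):
    """Probability that a RAID 0 array of n drives loses data,
    assuming each drive fails independently with probability p.
    The stripe is lost if ANY member drive fails."""
    return 1 - (1 - p) ** n

# With a hypothetical 2% annual failure rate per drive:
print(raid0_failure_prob(0.02))  # ~0.0396 -- just under double the single-drive risk
```

Note the combined risk is slightly less than 2p (3.96% rather than 4% here), and the gap grows as per-drive risk rises.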

Re:It's all about the IOPS... (2)

Joce640k (829181) | about 5 months ago | (#46616345)

Judging by this, the speed is about the same as other comparable SATA III SSDs, with a little bit of a boost but nothing dramatic.

Yada yada...

10% more, 10% less, who gives a damn? Only a drooling idiot would buy an SSD because it won this month's benchmark.

Aren't we at the point where we just ignore "speed" and look at what's inside them that's going to make them last a long time and keep our data safe?

Re:It's all about the IOPS... (1)

adolf (21054) | about 5 months ago | (#46617159)

Aren't we at the point where we just ignore "speed" and look at what's inside them that's going to make them last a long time and keep our data safe?

Only if you've got a verifiable test for durability, and enough time to implement it.

Over here in reality, we have zero real data about how each new generation of SSD actually ages as time marches on. We can postulate and we can theorize, but we're ultimately full of shit when it comes to finding a new SSD to "last a long time and keep our data safe."

So, yeah: I'll take speed benchmarks. Some data is always better than bullshit.

Re:It's all about the IOPS... (1)

Joce640k (829181) | about 5 months ago | (#46619199)

Really? You think the engineers who design them have no idea how long they're going to last?

Re:It's all about the IOPS... (1)

adolf (21054) | about 5 months ago | (#46628587)

I think the marketing people who sell them to us are full of shit, just as marketing people tend to be.

If I could converse with the engineers directly, my opinions may be different...but all I've got to work with is marketing fluff and speed benchmarks -- only the latter of which is useful.

SSDs haven't always been reliable. And brand name hasn't had much to do with it, either.

I mean, I expected the early SSD failures from OCZ. But Intel? Sheesh. (I presume from the 640k suffix on your moniker that you've been around plenty long enough to remember these debacles.)

In terms of Samsung in particular, they've somewhat recently crossed the bridge wherein a single flash cell can hold more than two states. It's awesome tech. It's also entirely unproven.

And anecdotally, if I were to trust a company to build durable goods, I would not have had to replace two power supply caps in my 52" Samsung LCD TV. Capacitors are not wear-items in well-designed circuits, and inherently unreliable caps are inexcusable. The new caps (all $2 worth of them) have now been working longer than the original caps lived for. (Last I checked, there are lawsuits involved between consumers and Samsung related to these particular capacitors.)

Nevermind the well-vetted and wildly popular Xbox 360, most of the early units of which suffered a terminal Red Ring of Death...which itself was due entirely to an unfamiliarity with the properties of lead-free solder and bad engineering. (Yes, a failure of that sort is bad engineering. Engineers are not infallible.)

We do ourselves a disservice if we blindly trust that all of these things have already been figured out on our behalf, especially if a marketer is the person telling us that it's a good and durable design. Because time and time again, we get lousy products which aren't durable at all -- no matter what any marketing fluff might say, or a user's expectation might be.

That said, if you have any citation of a Samsung engineer talking freely about their latest flash technology, I'm all ears.

Meanwhile, I'm standing firm: We don't know how durable it is. You don't know, and I don't know. We do not have enough data at this time to make such a determination.

Re:It's all about the IOPS... (1)

HappyPsycho (1724746) | about 4 months ago | (#46623729)

http://us.hardware.info/review... [hardware.info]

The 840 is one of the few SSDs we have this kind of public data on. As they conclude, this one will most likely outlive whatever it's put in, with the added benefit of being highly resistant to shock (normally from dropping).

Re:It's all about the IOPS... (1)

tlhIngan (30335) | about 5 months ago | (#46618607)

Judging by this, the speed is about the same as other comparable SATA III SSDs, with a little bit of a boost but nothing dramatic.

You know what the problem is? SATA3 is too damn slow. Yes, a modern SSD has hit the SATA3 bandwidth limit of 6Gbps.

The interface is now the bottleneck, something that hasn't happened in disk storage systems for a long time. It took SSDs to actually saturate a SATA3 link with a single drive, and SATA3 was created with SSDs in mind. And we've hit the limit again, well before SATA4 is even a draft.

That's why we're seeing PCIe SSDs that easily get 750MB/sec reads and writes.

IOPS is where we can improve.
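For context, 6Gbps is the raw line rate; after SATA's 8b/10b encoding the theoretical payload ceiling works out as follows (rough arithmetic, ignoring protocol overhead, which pushes real drives down to ~550MB/s):

```python
# SATA III raw line rate vs. usable payload bandwidth.
# 8b/10b encoding carries 8 data bits in every 10 line bits.
LINE_RATE_BPS = 6_000_000_000           # 6 Gbps raw line rate
payload_bits = LINE_RATE_BPS * 8 // 10  # strip the 8b/10b encoding overhead
usable_mb_s = payload_bits // 8 // 1_000_000  # bits -> bytes -> MB/s

print(usable_mb_s)  # 600 MB/s ceiling, before command/framing overhead
```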

Just saying.. (5, Interesting)

uofitorn (804157) | about 5 months ago | (#46615359)

Shouldn't this submission feature an orange "Ad" image similar to Google's paid results instead of the "Hardware" image?

Re:Just saying.. (0)

Anonymous Coward | about 5 months ago | (#46615809)

But... but... but... It'z teh Samsungz!!!!!

I would like to know (1)

fustakrakich (1673220) | about 5 months ago | (#46615389)

Does a solid state drive really need a cache?

Re:I would like to know (3, Informative)

Antique Geekmeister (740220) | about 5 months ago | (#46615427)

Yes. The SSD systems are not reliable enough to act as registers yet, and would impose a noticeable pre-processing penalty for those highly optimized, low-level operations that use memory registers.

Re:I would like to know (0)

Anonymous Coward | about 5 months ago | (#46615509)

We're talking about disk cache (as mentioned in TFS), not CPU cache. The answer is still yes (or they wouldn't have built it in, now would they?)

Re:I would like to know (0)

Anonymous Coward | about 5 months ago | (#46616017)

Everything needs a cache at the interface between discrete systems. That's usually built into the design, not something you could upgrade or hotrod.

Re:I would like to know (1)

bheading (467684) | about 5 months ago | (#46616089)

Act as registers ? Huh ?

Re:I would like to know (0)

Antique Geekmeister (740220) | about 5 months ago | (#46616139)

Registers are well defined high speed memory locations. Examples include http://en.wikipedia.org/wiki/M... [wikipedia.org] .

Managing these is left, by most modern computer languages, to the compiler and the kernel.

Re:I would like to know (1)

cheater512 (783349) | about 5 months ago | (#46616585)

Erm....those fast registers are *ONLY* in the CPU. You do know that right?

Re:I would like to know (1)

Antique Geekmeister (740220) | about 5 months ago | (#46617225)

On further thought, you're correct. The cache in the SSD drives is for other uses.

Re:I would like to know (1)

Rockoon (1252108) | about 5 months ago | (#46618161)

How does this shit get modded up as informative?

...."reliable enough to act as registers"

This clown is a technology tourist at best.

Re:I would like to know (1)

Antique Geekmeister (740220) | about 5 months ago | (#46627885)

I misspoke. I was thinking of various schemes I've been seeing to replace system _RAM_ with SSD's, and the problems with the approach.

Re:I would like to know (1)

ArcadeMan (2766669) | about 5 months ago | (#46615451)

RAM is still much faster than flash.

Re:I would like to know (2)

Rich0 (548339) | about 5 months ago | (#46615531)

RAM is still much faster than flash.

Sure, but the OS operating the drive has its own RAM cache.

There are only a few reasons to put a cache on a drive that I can think of:

1. If the RAM is battery-backed then writes to the drive cache can be treated as writes to the drive itself.
2. If the physical operation of the drive is abstracted from the OS, then it may only be possible to optimize out-of-order writes by utilizing a cache at the hardware level. For example, an OS might write a consecutive series of blocks, but perhaps one of those blocks was remapped by the drive to a different cylinder making it MUCH cheaper to write that one block later.
3. A variation of #2 is that on SSD the erase and write operations operate on different sized blocks, so there might be other optimizations the controller can make if it can perform operations out-of-order.

So, RAM being faster on its own isn't really a driver for putting a cache on a drive (just spend the money on more system RAM, where it can be used more flexibly). However, a RAM cache allows the drive to do other optimizations on writes not possible at the OS level.

Re:I would like to know (1)

ArcadeMan (2766669) | about 5 months ago | (#46615553)

Should the OS care how the drive works? Shouldn't it just ask it to read/write data?

Re:I would like to know (1)

lgw (121541) | about 5 months ago | (#46615599)

Caching is mostly the OS's job. There's no point in reading less than 32K in a single operation, even if userland is just requesting 1 byte (and usually reading 256K or so at a time makes sense). The OS, not the drive or controller, should be caching at that level.

That's why it's odd that an SSD would want a cache. A spinning disk has other concerns, and needs to do read-ahead caching and (for better drives) optimize the command queue, both of which require a place to stick read data until it's put on the bus. But SSDs don't care about read order or sequentiality, so it's curious.

As others have said, it must help or Samsung's engineers wouldn't have put it there, so what's it for?

Re:I would like to know (2)

Luckyo (1726890) | about 5 months ago | (#46615627)

SSDs do much better when mated to a highly optimized controller that does a lot of background work tailored specifically to flash memory. The OS has no idea about any of that; it just has some basic cache management and TRIM. It probably wouldn't want to know, either: this stuff is typically a trade secret, and it's the reason why SSD controllers are so important to the performance of the drive.

Re:I would like to know (0)

Anonymous Coward | about 5 months ago | (#46615669)

It's not a data cache; it's the block-mapping table and some housekeeping data.
A 4-byte physical block location for every 4KB logical sector = ~976MB of mapping table for a 1TB drive.
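That estimate checks out; here's the arithmetic spelled out:

```python
# FTL mapping-table size estimate from the comment above:
# one 4-byte physical-block pointer per 4KB logical sector.
DRIVE_BYTES = 1_000_000_000_000  # 1 TB (decimal)
SECTOR = 4096                    # 4 KB logical sector
ENTRY = 4                        # bytes per mapping entry

entries = DRIVE_BYTES // SECTOR
table_mb = entries * ENTRY / 1_000_000

print(entries)   # 244140625 logical sectors
print(table_mb)  # 976.5625 -- i.e. the ~976MB figure quoted above
```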

Re:I would like to know (0)

Anonymous Coward | about 5 months ago | (#46616217)

It's the job of the OS to do caching at the logical level. The SSD does caching at the physical level.

For example, for spinning disks, the application may request 1 byte to be read, while the OS makes a request to the disk to read 32k, and the disk may read the entire track -- or read as much of the track as it can until the next read request comes in. It is the same for SSDs, just the hardware is different and has different speed optimization opportunities, including write combining, or skip writing (ignoring one request to write a sector when another write to the same sector comes in immediately after, etc).

Re:I would like to know (1)

Christian Smith (3497) | about 5 months ago | (#46628925)

As other have said, it must help out or Samsung's engineers wouldn't have put it there, so what's it for?

The embedded controller is just a computer, like any other computer. In the case of the Samsung controller, it's got 3 ARM processors (citation needed, blah blah blah, google it), one for SATA IO, and one each for reading and writing to FLASH (I think). The controller needs firmware to make it do anything useful, and it needs RAM for its working data and firmware code.

So the firmware just acts like a regular, multi-threaded embedded program. It probably contains an embedded general purpose kernel, which manages and dispatches the threads, and the threads cooperate to manage all the translations to and from FLASH and the SATA bus.

All this translation requires large lookup tables, to map from SATA LBA to FLASH blocks. If it's mapping at a 4K sector level, a 1TB drive would require ~250 million lookup entries, which is a lot of data, requiring a lot of RAM. I'd imagine the lookup tables are trie based, with the table backed on FLASH, and demand loaded. Still, the working set of lookup data will be many 10s or 100s of megabytes.

Note, the early SATA SSD controllers were probably single-core, and suffered from stuttering as a result of writes hogging the CPU while erasing and writing to FLASH.

perhaps (0)

Anonymous Coward | about 5 months ago | (#46615635)

Not necessarily, but if you can optimize the IO between the filesystem and the disk then you do get much better performance, especially on reads. On a laptop or desktop it probably doesn't matter that much, depending on what your application is, I guess. It becomes more noticeable as the disk gets fuller, when the caching above the disk but below the filesystem has less time to just push a write down.

Reading from cache is much quicker than reading from the disk (obviously), but if that address isn't in cache then the disk has to go and fetch it. If that address is busy fetching and another write comes in to the same address, the write has to wait for the read to be fulfilled; being able to fetch efficiently means you get the request back quicker and the next write can proceed.

At the enterprise level this can matter; you'll find many documents on how best to optimise disk 'chunks' for different types of database applications (Exchange, for example), although that is out of my knowledge.

But fundamentally the OS does not need to care; if the IO path can be optimised for a specific (or expected) read/write pattern, it will perform significantly better. The problem with cache is of course when the power goes off! Then you have lost IO: the filesystem thinks the IO has completed, as the cache has sent the ack back for it, but it has yet to destage the IO to the actual 'disk'. It's in limbo, basically. That's the main problem with cache.

I personally think that while benchmarks are great, they rarely represent real life (though I also accept there is no better way to compare different products), and when it comes to laptops and the consumer-level market, they are frankly irrelevant. When I look for a desktop or laptop SSD I really only care about a couple of things: how much is it and how large is it (and, I guess, will it fit). I rarely give a damn about benchmark tests.

Re:I would like to know (1)

maccodemonkey (1438585) | about 5 months ago | (#46615931)

Should the OS care how the drive works? Shouldn't it just ask it to read/write data?

No.

Caching in RAM is far more efficient than anything a drive (any kind of drive except for a RAM disk) can do.

The end goal of a read/write operation is to get data into RAM. Whether that data is being turned around and sent to a GPU or network does not matter, it's going to show up in RAM first. This means RAM is already naturally a cache structure.

RAM is also going to be much faster than disk. Even if the disk itself is as fast as or faster than RAM, the data still has to be piped over some sort of bus to get to RAM, which causes latency. The cost of getting data from RAM to RAM is zero, or near zero, while the cost of getting data from a SATA disk via the PCI-E bus into RAM is high.

So unless computers get radically redesigned (which will probably eventually happen), any halfway decent OS should probably do RAM caching, and a disk definitely can't offer any feature set that could provide similar performance.

Mac OS X recently added very aggressive disk caching (it will use any free memory for disk caching), and it dramatically improves performance, even on machines with super fast SSDs.

There were a few reasons cited above on why a cache for an SSD could make sense... But... In general the performance of SSDs has already exceeded the speed at which the SATA bus can deliver information, so it's hard to think of any difference an SSD cache could make for performance, unless the SSD backing the cache was pretty slow.

Re:I would like to know (1)

Jeremi (14640) | about 5 months ago | (#46615981)

Mac OS X recently added very aggressive disk caching (it will use any free memory for disk caching), and it dramatically improves performance, even on machines with super fast SSDs.

Recently? I was under the impression that this was how MacOS/X (and indeed most non-ancient flavors of *nix) had always worked. Was I mistaken about that?

Re:I would like to know (1)

maccodemonkey (1438585) | about 5 months ago | (#46616025)

Mac OS X recently added very aggressive disk caching (it will use any free memory for disk caching), and it dramatically improves performance, even on machines with super fast SSDs.

Recently? I was under the impression that this was how MacOS/X (and indeed most non-ancient flavors of *nix) had always worked. Was I mistaken about that?

I'm actually looking for technical information now...

Long ago, definitely before Mac OS X, Mac OS had a hard-set disk cache size. You could go into your Memory control panel and change the disk cache size.

There's information out there stating that earlier versions of Mac OS X had some sort of disk cache (which seems reasonable), but it doesn't say how large the disk cache is.

The most recent change in caching and memory management in OS 10.9 is that before actually cycling out cache, OS X will actually compress that memory, and continue trying to hold it in memory. The idea is that compressing memory and sending it into deep storage is less expensive than having to get that information from disk later. So in that respect, 10.9 is much more aggressive about disk caching.

Re:I would like to know (1)

ArcadeMan (2766669) | about 5 months ago | (#46616501)

And my experience is that the memory compression is so slow on a Core 2 Duo that it should be disabled by default for those processors. Before disabling it, my system kept freezing for nearly five seconds every time I switched programs using ALT+TAB. I disabled the damn thing and now my system runs nearly as well as it did with Snow Leopard.

Re:I would like to know (1)

Rich0 (548339) | about 5 months ago | (#46616033)

Should the OS care how the drive works? Shouldn't it just ask it to read/write data?

In reality there is a place for both - at least until cache gets so cheap that we can afford to have gobs of it everywhere.

There are optimizations the OS can do due to having more information about what applications are up to and the filesystem design. There are optimizations the physical drive can do due to understanding how the data is physically stored on the disks. Until one or the other changes there is good reason to cache in both places.

Re:I would like to know (1)

AmiMoJo (196126) | about 4 months ago | (#46619867)

All modern SSDs do compression and most do encryption of data as well. They need a RAM cache to do that before each write and after each read. Compression reduces writes that wear out flash memory, and encryption is just a nice bonus.

Re:I would like to know (3, Informative)

beelsebob (529313) | about 5 months ago | (#46615513)

Yes. One of the best ways to avoid wearing cells is to cache writes aggressively, in the hope that either 1) another write will simply write over the top of that one, or 2) another write will fill in the rest of the cell, so that you don't have to erase a cell for a partial write.
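A toy sketch of that idea (illustrative only, nothing like real firmware): buffer writes in RAM keyed by logical block address, so rewrites of the same block collapse before anything is programmed to flash:

```python
# Toy write-combining cache: overlapping writes to the same logical
# block collapse in RAM, so only the final version reaches flash.
# Hypothetical structure for illustration, not Samsung's design.
class WriteCache:
    def __init__(self, capacity=4):
        self.pending = {}          # logical block address -> latest data
        self.capacity = capacity
        self.flash_writes = 0      # count of actual flash program operations

    def write(self, lba, data):
        self.pending[lba] = data   # overwriting a cached block costs nothing
        if len(self.pending) > self.capacity:
            self.flush()

    def flush(self):
        self.flash_writes += len(self.pending)
        self.pending.clear()

cache = WriteCache()
for _ in range(100):               # 100 rewrites of the same hot block...
    cache.write(lba=7, data=b"x")
cache.flush()
print(cache.flash_writes)          # 1 -- 99 cell-wearing programs avoided
```

The same structure also enables case 2 from the comment: partial writes to a cell can accumulate in `pending` until the cell is full.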

Re:I would like to know (3, Interesting)

Zontar The Mindless (9002) | about 5 months ago | (#46615539)

Mine has one*, and that certainly doesn't seem to slow it down.

-----

*Yes, I actually just set up a new machine that has a 2.5" Samsung 840 EVO-0 SSD, and it rocks. Off to the kdm login screen in about 2 seconds.

I believe that I've not enjoyed a new piece of tech so thoroughly since microwave ovens first came out.

Re:I would like to know (1)

TechyImmigrant (175943) | about 5 months ago | (#46615885)

>Does a solid state drive really need a cache?
Flash is slower than RAM.

However, from a computer architecture 101 point of view, putting silicon memory behind a disk interface is completely stupid. Putting the cache behind the disk interface is even stupider. So the cache is more there to patch up the ludicrous storage architecture than to address any particular shortcomings of flash memory.

 

Re:I would like to know (0)

Anonymous Coward | about 5 months ago | (#46616641)

YES.
Do you seriously want to go back to the days where every bit of hardware has its own "unique snowflake" drivers and software again?
We are only just managing to get rid of that shit methodology for hardware design by getting companies to agree on common interfaces so one generic driver can be used.

Leaving the OS to do caching will only result in more mess, more complications and even more pointless software bloat.
The OS should only ever need to care about as little as possible when it comes to backing storage: read, write, failure, properties
Anything else is pointless bloat that can be done in the hardware units.
Caching on the drive also allows much finer control of protecting the SSD cells from wear, by remembering and overwriting in cache rather than on the SSD. Doing that between the OS and backing storage is a bit slower, more so in write-heavy workloads oddly enough, despite the RAM being closer. Equally it can also be much faster; we basically have a hybrid RAM/SSD drive. If only we could control the cache better, that'd be great. Didn't one drive come with a program that let you set a caching method? I may be thinking of HDD/SSD hybrids.
Drives still have some room for improvement, but they are pretty much better than other hardware sectors in regards to standards and overall quality. (even Seagate)
I'd rather keep their stuff separate because it allows them to innovate without having to make sure their stuff works in the newest OSes. (Which is also one problem with SSD adoption: XP has only partial support for SSDs, on purpose, so Microsoft gets more money. It's suffering like DirectX did from Vista onwards, until just a few years ago when people started taking advantage of it in larger numbers.)

Re:I would like to know (1)

m.dillon (147925) | about 5 months ago | (#46617569)

No, not really. It's a waste of power and an unnecessary extra cost to throw a ton of DRAM into an SSD when the main system is likely going to have far more RAM available for caching purposes. Remember that OCZ already tried the large-cache approach and it was a complete failure. It is far, FAR more important for the SSD to have a large enough capacitor to be able to at least flush flash metadata on power loss, and you can't do that if you have a lot of power-hungry RAM, or a lot of dirty data in that power-hungry RAM that needs to be flushed.

More to the point, these large caches can't be used to buffer writes without risking serious data loss; they are really only useful for buffering reads. And beyond that, nearly all operating systems use asynchronous writes, do some manner of write combining themselves, and tend to be IOPS-limited for reads anyway. So there's no point having the SSD cache a large number of writes, and no real benefit from having the SSD cache a large amount of read data either.

Write pipelining is accomplished by limiting the number of tags one uses for write commands. The size of the cache available for writes is irrelevant, because most OS flushes are going to fill it up nearly instantly anyway. If you don't limit the number of tags you use for write commands, you will starve read commands no matter how small or large the SSD's cache is. Generally speaking, 4 tags (out of the 32 supported by SATA) is plenty for writes.

Larger write caches do not improve SSD write throughput at all, and generally won't improve read throughput either.

-Matt
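The tag-budgeting idea above can be sketched like this (a hypothetical scheduler for illustration, not any real driver): cap in-flight writes at 4 of the 32 NCQ tags so queued reads are never starved behind a write burst:

```python
# Sketch of tag-budgeted command queuing: of 32 NCQ tags, at most
# 4 may hold writes in one dispatch pass, so reads always get tags.
# Illustrative only; real drivers track tag completion, not one pass.
from collections import deque

TOTAL_TAGS, WRITE_TAGS = 32, 4

def issue(commands):
    """commands: list of ('r'|'w', id) tuples. Returns the commands
    issued in one pass, admitting at most WRITE_TAGS writes."""
    issued, writes_in_flight = [], 0
    q = deque(commands)
    while q and len(issued) < TOTAL_TAGS:
        kind, cid = q.popleft()
        if kind == 'w':
            if writes_in_flight >= WRITE_TAGS:
                continue           # defer excess writes; reads proceed
            writes_in_flight += 1
        issued.append((kind, cid))
    return issued

# A burst of 10 writes arriving ahead of 10 reads:
order = issue([('w', i) for i in range(10)] + [('r', i) for i in range(10)])
print(sum(1 for k, _ in order if k == 'w'))  # 4 -- write budget enforced
print(sum(1 for k, _ in order if k == 'r'))  # 10 -- no read is starved
```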

Re:I would like to know (1)

Bengie (1121981) | about 5 months ago | (#46618231)

All I know is that high-bandwidth, high-latency links tend to want decently sized caches. With 512MB of memory and an SSD that can read and write at about 500MB/s both ways at the same time, you may want a decent-sized cache for sudden bursts or stalls.

no capacitors (1, Redundant)

dshk (838175) | about 5 months ago | (#46615505)

"mSATA solid state drives performs extremely well" It has no power loss protection capacitors, so if it performs extremely well, then it also lose data extremely well. Maybe you can put it into a laptop, but I would not risk even that. This is another useless "customer" SSD drive.

Re:no capacitors (0)

Anonymous Coward | about 5 months ago | (#46615541)

it's a mSATA drive - can you think of an enterprise scenario where you'd be using mSATA drives?

Re:no capacitors (2)

dshk (838175) | about 5 months ago | (#46615625)

Actually, I cannot think of any scenario where I would use such a drive. We have lost three of our desktop drives that have no capacitors; one is still working. Meanwhile, there are no issues with the four drives that have capacitors.

Re:no capacitors (1)

Decker-Mage (782424) | about 5 months ago | (#46615919)

I have ten drives here: 2 x 60GB, 4 x 128GB, 3 x 160GB, all 'normal' SSDs, and one 240GB PCIe. All of them are backed by a UPS. Oh, I forgot the ones in the portables and tablets, which also count as battery-backed since they do an orderly shutdown when the batteries are nearly drained. Still, I do not expect any hard drive to operate without loss of data when the power is ripped out from under the device, whatever storage device is under scrutiny. Sorry, but many operating systems cache writes, and data loss happens even with journalling. You can turn that off at a cost to throughput. That's why I use the quick-disconnect option on systems with no UPS elsewhere. I imagine having capacitors is nice as an added level of protection; I almost certainly have them on the PCIe at least, as defense against power loss is a feature. Still, expecting total loss protection from just the drive, whether mechanical, solid-state, tape or optical disc, is not entirely rational.

Re:no capacitors (1)

K. S. Kyosuke (729550) | about 5 months ago | (#46616233)

Still I do not expect any hard drive to operate without loss of data when the power is ripped out from under the device, whatever storage device is under scrutiny. Sorry, but many operating systems cache writes and data loss happens even with journalling.

I was told once by a guy who effectively builds Firebird servers for a living that this is precisely what you should expect from a good machine: that the data has been physically written by the time the synchronous write call returns. Having a battery-backed RAID controller, or a similar contraption for an SSD that lets you buy some reliable caching for a little extra complexity, shouldn't make a difference. If your hardware and software don't work this way, then they're probably just unfit for corporate use.

Re:no capacitors (1)

Bengie (1121981) | about 5 months ago | (#46618279)

Most non-RAID hard drives will no-op the sync write and immediately return, even from top brands. No real way to know.

Re:no capacitors (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46620155)

That's what he said, too. But he also said that you should throw these out without mercy and go for drives that work reliably in this respect.

Re:no capacitors (1)

dshk (838175) | about 5 months ago | (#46616423)

I don't expect total power-loss protection. I do expect reasonable behaviour, like data already synced to the drive still being there after a power loss. SSD drives without capacitors fail spectacularly; they are much worse than HDDs. Just google for "SSD failure modes abstract", for example: https://www.usenix.org/system/... [usenix.org]

Re:no capacitors (1)

AmiMoJo (196126) | about 4 months ago | (#46619899)

Laptops with batteries. Unless something pretty severe happens to the machine it should never suffer from a sudden, uncontrolled loss of power. Even a severe crash followed by holding the power button would give the drive plenty of time to flush caches.

Re:no capacitors (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46620197)

They tend not to have ECC while also having some pretty crappy HW and drivers. I can't exclude the possibility that there are specialty laptops designed to work as reliable mobile servers, but those are definitely going to be specialty stuff. (Something like http://web.eurocom.com/ec/ec_m... [eurocom.com] would perhaps qualify?)

Re:no capacitors (1)

dshk (838175) | about 5 months ago | (#46615591)

To be fair, queued TRIM is actually useful. The old TRIM killed performance in a high IO environment.

Re:no capacitors (2)

trparky (846769) | about 5 months ago | (#46615613)

For mere consumers it's a great drive. If you need that level of data assurance you're looking at the wrong SSDs, go look at Intel SSDs but be prepared to pay an arm and a leg for it. For us mere mortals it's still a great drive.

Re:no capacitors (1)

WuphonsReach (684551) | about 5 months ago | (#46616809)

go look at Intel SSDs but be prepared to pay an arm and a leg for it.

Well, maybe not an arm and a leg, the 300GB Intel DC S3500 units are only $300. Or the 600GB unit for $600. So around $1/GB and they come with the large capacitors inside to deal with power loss.

The Intel DC S3700 units, OTOH, are $2.25-$2.50 per GB. Which isn't all that much either in the big view, even regular SSDs 3-4 years ago were $1.50-$2.00 per GB.

Re:no capacitors (1)

bill_mcgonigle (4333) | about 5 months ago | (#46616011)

"mSATA solid state drives performs extremely well" It has no power loss protection capacitors, so if it performs extremely well, then it also loses data extremely well.

So use one of the filesystems that can deal with this situation quite reliably. Or if your filesystem can't deal, then don't use these drives. Right tool for the job and all that.

Re:no capacitors (3, Informative)

dshk (838175) | about 5 months ago | (#46616109)

No file system can deal with the situation where data already synced to the drive is lost. And that is the better case for an SSD on power loss; they frequently lose completely unrelated data as well.

Re:no capacitors (2)

Ihunda (642056) | about 5 months ago | (#46616187)

^this, it's amazing how few people actually understand that most writes on modern systems are buffered either in the kernel cache or in the disk cache without any form of protection. Disable those volatile caches and your OS with it's resource hog apps will behave as slowly has a 386 on floppy disks. At least, the laptop battery can act has a giant capacitor and your exposition to data loss due to power failure on a laptop should be very very small.
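The performance cost being alluded to is easy to demonstrate: forcing every record to stable storage with fsync() is dramatically slower than letting the kernel buffer the writes. A small sketch (record count and sizes are arbitrary; the exact ratio depends on the device and file system):

```python
import os
import tempfile
import time

def write_records(path, n, sync_each):
    """Write n 512-byte records; optionally fsync() after every record."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(fd, b"x" * 512)
        if sync_each:
            os.fsync(fd)  # force this record to stable storage now
    size = os.fstat(fd).st_size
    os.close(fd)
    return time.perf_counter() - t0, size

tmp = tempfile.mkdtemp()
buffered_t, buffered_size = write_records(os.path.join(tmp, "buf.bin"), 200, sync_each=False)
synced_t, synced_size = write_records(os.path.join(tmp, "sync.bin"), 200, sync_each=True)
print(f"buffered: {buffered_t:.4f}s  fsync-per-record: {synced_t:.4f}s")
```

On a spinning disk the fsync-per-record version can be orders of magnitude slower, which is why operating systems buffer aggressively and why pulling the plug mid-write loses data.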

Re:no capacitors (1)

adolf (21054) | about 5 months ago | (#46617131)

^this, it's amazing how few people actually understand that most writes on modern systems are buffered either in the kernel cache or in the disk cache without any form of protection. Disable those volatile caches and your OS with it's resource hog apps will behave as slowly has a 386 on floppy disks. At least, the laptop battery can act has a giant capacitor and your exposition to data loss due to power failure on a laptop should be very very small.

^ !english, it's charming to watch people write such comparisons when it is plain that they've never run a 386 from floppy disks...none of which, generally, had any write caching at all. (And, no, BUFFERS in config.sys doesn't count.)

And batteries acting as capacitors? Why can't the battery just be the battery that it is? Why diminish the perfectly cromulent role of a battery to that of a capacitor? What are you gaining from writing such nonsense, other than perhaps a good feeling because you got to use more words instead of fewer words?

I do like your idea of a data loss exposition, though. Where can I buy tickets?

Also, I would like to subscribe to your newsletter.

Re:no capacitors (1)

Bengie (1121981) | about 5 months ago | (#46618423)

I know ZFS can "deal" with these situations, depending on what you mean by "deal". It won't mess up data that is already committed.

Re:no capacitors (1)

dshk (838175) | about 5 months ago | (#46618903)

It is not the file system that messes up the data (at least with journaling file systems), but the SSD drives without power loss protection capacitors. What can a journaling file system do if it successfully syncs the journal, but after a restart the journal is gone while the results of later partial writes are there? This is the so-called serialization error. It happens with consumer HDD drives too, but it is much worse with these SSD drives. Take a look at the publication I linked in my other comment above.
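The serialization error described here can be sketched with a toy model: a journaling file system depends on the journal record being durable before (or at least no less durable than) the data writes it describes. All the names below are illustrative, not any real file system's API; the "flaky" drive acknowledges the journal write and then drops it on power loss while the later data write survives:

```python
class FlakyDrive:
    """Toy block device; optionally loses an acknowledged journal write."""
    def __init__(self, drops_acked_journal):
        self.blocks = {}
        self.drops_acked_journal = drops_acked_journal

    def write(self, addr, payload):
        self.blocks[addr] = payload  # ack is returned to the caller

    def power_loss(self):
        if self.drops_acked_journal:
            self.blocks.pop("journal", None)  # the acked write vanishes

def commit(drive):
    drive.write("journal", "txn1: put data@A")  # journal synced first
    drive.write("A", "new data")                # then the data itself

def recover(drive):
    # Replay rule: a transaction is valid only if its journal record survived.
    if "journal" in drive.blocks:
        return "consistent: txn1 replayed"
    if "A" in drive.blocks:
        # Journal gone, yet its data was applied: the serialization error.
        return "corrupt: data present without its journal record"
    return "consistent: nothing applied"

good, bad = FlakyDrive(False), FlakyDrive(True)
for d in (good, bad):
    commit(d)
    d.power_loss()
print(recover(good))  # consistent: txn1 replayed
print(recover(bad))   # corrupt: data present without its journal record
```

The journaling protocol is sound; it is the drive reordering durability out from under it that produces a state the recovery code cannot distinguish from corruption.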

Any anti-Crash & Burn circuitry? (1)

bill_mcgonigle (4333) | about 5 months ago | (#46615985)

The best part of using SSD's? You learn to make your backups religiously, because they will die and they will die fast. I have some very long-lived SSD's in production (SLC) but each one that I've had fail (I have a stack of about 20 on my workbench which may or may not go back for 'lifetime warranty' claims - do I really want replacements of crappy SSD's?) has gone from perfect to unreadable in minutes.

2014 and they're still Hot & Crazy [codinghorror.com] .

Re:Any anti-Crash & Burn circuitry? (0)

Anonymous Coward | about 5 months ago | (#46616231)

Could you please list which models you experienced this on? Might help us keep our hair from going grey.

Re:Any anti-Crash & Burn circuitry? (0)

Anonymous Coward | about 5 months ago | (#46617053)

having deployed several thousand Intel X25-E drives, the failure rate over 3 years was lower than that of other hard drives. I don't have root-cause analysis for all of 'em, but if you just take the raw customer return rate, it was roughly 1%.

I have seen machines, or sets of machines, mimic the anecdote from codinghorror.com. Usually this is due to bad power, a bad power supply, or both.

Re:Any anti-Crash & Burn circuitry? (1)

tlhIngan (30335) | about 5 months ago | (#46618629)

The best part of using SSD's? You learn to make your backups religiously, because they will die and they will die fast. I have some very long-lived SSD's in production (SLC) but each one that I've had fail (I have a stack of about 20 on my workbench which may or may not go back for 'lifetime warranty' claims - do I really want replacements of crappy SSD's?) has gone from perfect to unreadable in minutes.

The main reason why SSDs fail is due to sudden power loss causing a massive corruption of the FTL tables. It's why some come with capacitors - so they can sync the on-media tables with what's in the RAM cache on sudden power loss. There are mitigation techniques that are possible as well that allow for sudden power down without losing data. In fact, the modern SSD is faster than the interface it's on, so compromising performance for data safety is doable.

After all, once you're around 500MB/sec, you can't go faster. If the flat out rate is 750MB/sec, no one will see it, so give up 33% of that speed for data safety so you'll still see 500MB/sec at the interface.

As for your pile of dead drives - chances are a good chunk of them, if they've still got life in them, can be used. Their tables are corrupt, so you should try an ATA Secure Erase (in anything but a Lenovo system - go figure, but Lenovos do strange things). We've used it to recover an SSD from a dropped laptop that shattered into a million bits (and was on and doing stuff at the time).

Most good SSDs respond to typical power down commands as a request to sync data - i.e., when a hard disk is issued the spindown command prior to system turn off, it syncs the cache to the platters, parks the heads, and shuts down. Doing so is far easier on the hard drive than a straight power down (less mechanical wear - a sudden powerdown switches all the platter spinning energy into the voice coils, which flings the heads to the parking area violently. It's why a soft spindown rating on a hard disk may be 50,000+ load/unload cycles, while an emergency spindown is only 10,000 or less).

Likewise, smart SSDs do the same thing - they see a spindown command and use it as an opportunity to sync the tables to media, and then report to the host that they're ready to be turned off.

We used the hdparm method of sending the ATA Secure Erase command to the drive; it works, takes about 5 minutes, and recovers an SSD to the condition it was in before failure. The only thing is that previous wear doesn't reset (of course), but the drive is still as reliable as it was brand new - just because the tables were corrupted once doesn't impact a thing after a secure erase - it's basically used to recreate brand new tables.
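For reference, the hdparm sequence being described looks roughly like the following. It is destructive (all data is lost), the drive must not be in the "frozen" security state, and `/dev/sdX` is a placeholder; to keep this sketch safe to run, it only prints the commands rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch of an ATA Secure Erase via hdparm. Destroys all data on
# the target device -- point DEV at a real drive only on purpose.
DEV="${DEV:-/dev/sdX}"

CHECK="hdparm -I $DEV"                                        # output must say 'not frozen'
SET_PASS="hdparm --user-master u --security-set-pass p $DEV"  # set a temporary password
ERASE="hdparm --user-master u --security-erase p $DEV"        # issue the Secure Erase

for step in "$CHECK" "$SET_PASS" "$ERASE"; do
    echo "would run: $step"
done
```

If the drive reports "frozen", a suspend/resume cycle often unfreezes it; some BIOSes (and, per the parent, some Lenovo machines) interfere with this sequence.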

Yeah! (1)

Decker-Mage (782424) | about 5 months ago | (#46616081)

I can totally see mounting one of these on my Intel Galileo so it has awesome storage for a serious drone AI package and a ton of capacity for recording video and sound. Whether by air or ground. Give it IR, radar (EMCON'ed of course) and LIDAR. Wrap it all up in some RAM (Radar Absorbing Material) and they'll never 'see' it coming. Yeah!

OK, so I'm not serious, still neat though! On second thought, except for the aerial vehicle (lowest price I've seen is $699.00) it really is doable.

Fix the INTERFACE!! (0)

Anonymous Coward | about 5 months ago | (#46616443)

GRAY text on GRAY background SUCKS!

Be Aware: Many mSATA slots don't support SATA-3 (1)

BBF_BBF (812493) | about 5 months ago | (#46616463)

Ensure you check the SATA protocol that is supported by your computer/tablet's mSATA slot first.

I have a 2012-13 era Thinkpad X230 and it has an mSATA slot, but it only supports SATA-1, so an SSD like the Samsung featured in this article is overkill, since the computer can never take advantage of its throughput. Even a cheap SSD with a Phison controller was fast enough to saturate the SATA-1 interface, so that's what I got.

Re:Be Aware: Many mSATA slots don't support SATA-3 (1)

Burz (138833) | about 5 months ago | (#46616543)

OTOH, the drive's IOPS are arguably much more important to how much better a system can perform; SATA-1 doesn't look so limited in this respect. Sequential throughput makes a noticeable difference for more specific applications.

Re:Be Aware: Many mSATA slots don't support SATA-3 (1)

Burz (138833) | about 5 months ago | (#46616613)

Also, your mSATA slot is SATA-2 @3Gbps, not SATA-1. I don't think you could notice any difference in IOPS between 3Gbps and 6Gbps SATA links.
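Back-of-the-envelope numbers support this. Using the IOPS figures from the article summary and the usable payload rate of each SATA generation (line rate times 8/10 for the 8b/10b encoding), random I/O at low queue depth is nowhere near any link's ceiling:

```python
# 4KiB random reads, figures from the article summary.
qd1_bytes_per_s = 10_000 * 4096   # QD1:  ~41 MB/s
qd32_bytes_per_s = 98_000 * 4096  # QD32: ~401 MB/s

# Usable payload rate = line rate * 8/10 (8b/10b encoding), in bytes/s.
sata1 = 1.5e9 * 0.8 / 8  # 150 MB/s
sata2 = 3.0e9 * 0.8 / 8  # 300 MB/s
sata3 = 6.0e9 * 0.8 / 8  # 600 MB/s

print(f"QD1 : {qd1_bytes_per_s/1e6:.0f} MB/s vs SATA-2 limit {sata2/1e6:.0f} MB/s")
print(f"QD32: {qd32_bytes_per_s/1e6:.0f} MB/s vs SATA-2 limit {sata2/1e6:.0f} MB/s")
```

At QD1 the drive uses well under a sixth of a SATA-2 link, which is why typical desktop workloads wouldn't feel the difference; only deep queue depths or long sequential transfers actually hit the 3Gbps ceiling.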

mSATA is on the way out (1)

Solandri (704621) | about 5 months ago | (#46617793)

Yes, I know it seems like mSATA SSDs just showed up yesterday. They were a shortcut using the existing PCIe mini format (used primarily for wifi cards), but connecting it to a SATA controller instead of / in addition to the PCIe bus.

It's being replaced by the M.2 form factor [wikipedia.org] , which supports multi-lane PCIe, SATA, SATA Express, as well as USB 3.0. It also gives manufacturers the choice of multiple standard lengths. A few of the M.2 SSDs already out operate in PCIe mode instead of SATA, and can top 1 GB/s [thessdreview.com] . The quick and dirty way to tell the SATA vs PCIe M.2 drives apart [thessdreview.com] is that the SATA ones have two notches (three rows of connectors). The PCIe ones have one notch (two rows of connectors).

M.2 is still in its early stages, and some systems with M.2 connectors can take the SATA-type SSDs but not the PCIe-type, or vice versa. But the drives and systems are out there. The PCIe SSD in the Macbook Pro is actually a modified (proprietary) version of Samsung's M.2 SSD (XP941).