
Six-Drive SATA III SSD Round-Up Shows Big Gains

Soulskill posted more than 2 years ago | from the johnny-five-needs-throughput dept.

Data Storage 129

MojoKid writes "Solid state drives have gone from essentially non-existent on the desktop to the preferred storage medium of enthusiasts and workstation professionals in less than three years. Three of the drives featured in this six-drive SATA III SSD round-up consistently offered 'best-of-class' performance throughout testing, with speeds in excess of 500MB/s for read and write throughput. OCZ's Vertex 3 Max IOPS, Corsair's Force GT, and the Patriot Wildfire all feature the same SandForce SF-2281 controller and synchronous NAND flash memory. These drives offered the highest transfer rates in the majority of tests, though performance does drop off as the data gets more incompressible."

129 comments

However, something important to keep in mind (3, Insightful)

Sycraft-fu (314770) | more than 2 years ago | (#36744466)

At this point, all SSDs are basically "fast enough" for desktop usage. You notice a major difference between an SSD and a HDD. You don't notice much, if any, difference between a lower and higher end SSD on the desktop.

The same is not true on servers, of course; the heavier random load makes IOPS a big deal in various servers (databases particularly).

While I'm certainly not saying don't get one, I'm saying don't dump your SATA II SSD if you have one for these, and don't pass up a SATA II SSD if it is on sale.

they still need to be a lot bigger now 500GB and u (1)

Joe_Dragon (2206452) | more than 2 years ago | (#36744502)

They still need to be a lot bigger. Right now, 500GB and up should be OK, or maybe a 128-256GB system SSD plus a 500GB+ data disk. But with games getting bigger, you may run out of room installing them on an SSD and need to install to a data disk instead.

Re:they still need to be a lot bigger now 500GB an (1)

Anonymous Coward | more than 2 years ago | (#36744700)

While I'm certainly not going to argue why it'd be nice if SSDs increased in capacity (given reasonable price points), I must inquire: why the need? 10 GB is easily enough for today's OS with most programs (3 GB or less with some sacrifices), but let's say bloat increases that requirement five fold, and that SSDs resetting the price point per gigabyte doesn't discourage that bloat. Googling indicates a modern game requires something like 15 GB to install (...seriously?), so what are you trying to do? Install every game you own? Never uninstall anything? Or store vast amounts of media (bitrate of 8 MB/sec continuous read) on a drive capable of 200 MB/sec throughput / 4 ms latency with the obvious tradeoff of price per gigabyte? Get a drive that bests everything in speed, latency, and capacity for the same price as the older technology?

Re:they still need to be a lot bigger now 500GB an (3, Interesting)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#36744810)

While, of course, I'd like all that and a pony, I recognize that ours is not exactly a perfect world. Big SSDs are expensive and big HDDs are (comparatively) slow.

What would be nice, though, and arguably rather more reasonable (it's only a matter of software, and across millions of users the unit cost should be approximately fuck all), would be seeing the tech for transparently dividing workloads across two or more disks with heterogeneous characteristics descend from its present position in expensive SANs and comparatively esoteric server FSes.

Sure, the manual "OS+applications on SSD, porn and torrents on HDD" tactic works more or less alright; but having humans wasting their time doing a (lousy) attempt at a machine's job seems like such a pity. Handling the messy details of physical storage location, in order to achieve best apparent performance with lowest burden on the operator, is exactly the sort of abstraction that our computers should be handling for us.

Re:they still need to be a lot bigger now 500GB an (1)

uvajed_ekil (914487) | more than 2 years ago | (#36745146)

Sure, the manual "OS+applications on SSD, porn and torrents on HDD" tactic works more or less alright; but having humans wasting their time doing a (lousy) attempt at a machine's job seems like such a pity. Handling the messy details of physical storage location, in order to achieve best apparent performance with lowest burden on the operator, is exactly the sort of abstraction that our computers should be handling for us.

Very well said! What it boils down to is that you shouldn't have to be a major geek or expend a lot of effort administering your system simply to have and use a fast, efficient computer. Modern operating systems have taken a lot of the thought and need for specialized knowledge out of the equation, but SSDs (largely due to their low capacity or capacity/price ratio) are confusing to some and require more work. Laptops with single drive bays are another consideration, and no one really wants to use an external drive.

You can have that, it just costs (1)

Sycraft-fu (314770) | more than 2 years ago | (#36745786)

What you are talking about is having an SSD function as cache for a HDD. Good idea, one I like myself quite a lot. Well it does exist, but not to the extent it should.

The only real cheap option I know of is Seagate's Momentus XT drives. They are laptop drives with 4GB of flash on them for cache. The net effect is that you get desktop-level performance out of a laptop drive. Quite effective. Unfortunately, that laptop drive is all they make.

At the high end Intel, LSI, Adaptec, and probably some others make SATA/SAS RAID controllers that can take SSDs as cache (in addition to their RAM cache). Quite effective, and can be rather flexible, but limited to high end RAID controllers so really expensive.

Intel just introduced a final option in the form of their Z68 chipset. It supports an SSD cache drive on the included SATA ports. Seems to work pretty well, though there are limits on it (the cache can only be so large, and they seem to really want SLC drives). That's fine if you are building a new Sandy Bridge system, but not the kind of thing you can add to existing hardware.

I'm not sure why there isn't more interest in this kind of thing. I would think that HDD companies would jump on it to make premium HDDs, but so far, not much.
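
The core idea behind all of those products can be sketched in a few lines. This is a deliberately simplified read cache with a plain LRU policy; actual hybrid drives and the Z68 caching feature use their own proprietary heuristics, so treat it purely as illustration:

    from collections import OrderedDict

    # Minimal sketch of an SSD acting as a read cache in front of an HDD:
    # recently-read blocks are served from the fast device, everything else
    # falls back to the slow one. Capacity is in blocks, purely illustrative.
    class SSDCache:
        def __init__(self, capacity_blocks, hdd):
            self.capacity = capacity_blocks
            self.hdd = hdd                    # any object with read_block(lba)
            self.blocks = OrderedDict()       # lba -> data, kept in LRU order

        def read_block(self, lba):
            if lba in self.blocks:            # cache hit: SSD-speed read
                self.blocks.move_to_end(lba)
                return self.blocks[lba]
            data = self.hdd.read_block(lba)   # cache miss: pay the HDD seek
            self.blocks[lba] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least-recently-used
            return data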

Re:they still need to be a lot bigger now 500GB an (0)

Anonymous Coward | more than 2 years ago | (#36746208)

Or you can just use a real OS that actually allows you to treat all your storage locations as a unified whole instead of forcing you to compartmentalize your storage.

Microsoft has finally seen the light, too: dynamic disks and ntfs junctions. Sadly, it'll probably be another three Redmond releases before the installer will allow you to do The Right Thing (tm).

SSD+HDD in a laptop, or media production (1)

tepples (727027) | more than 2 years ago | (#36744872)

so what are you trying to do? Install every game you own? Never uninstall anything? Or store vast amounts of media (bitrate of 8 MB/sec continuous read) on a drive capable of 200 MB/sec throughput / 4 ms latency with the obvious tradeoff of price per gigabyte?

How about not having to carry around a USB hard drive? As I understand it, a lot of laptop computers have only one internal SATA port, to be occupied by an SSD or a hard drive. Or how about media production as opposed to passive viewing? Production with non-linear video editing software often needs multiple simultaneous mixed sources and multiple seeks to read all the simultaneous streams.

Re:SSD+HDD in a laptop, or media production (0)

Anonymous Coward | more than 2 years ago | (#36745076)

This really isn't a task for laptops yet. Nonlinear video editing, ideally with multiple disks in a RAID, is best done on a desktop machine still. A 120gb ssd is adequate for most laptop uses, and a 2.5" USB drive for data is not that much of a burden if you're already carrying around a laptop + ac adapter anyway.

Re:they still need to be a lot bigger now 500GB an (1)

billcopc (196330) | more than 2 years ago | (#36745188)

There are some of us who enjoy variety in our selection of games. The most common games are between 10 and 15 GB each. My World of Warcraft install is 25gb and growing with every new patch. Portal 2, a relatively simple game, is 11 gigs. It adds up very quickly. Of the 120 or so games in my Steam list, I can only keep around 20 installed on my relatively large 300GB SSD. It's no coincidence that the largest games benefit the most from SSD access times and read speeds. In my case, I've had to set up a 4-way RAID-0 of conventional hard drives, to have cheap and decently fast secondary storage. I would not dream of running a PC with just the 300GB drive, at least not for a multipurpose work and entertainment PC.

There's a lot of push from manufacturers for faster SSDs, but what I'd like to see is a super inexpensive "fast enough" product. Sure, I like my hyperfast PCI-E SSD with its 900mb transfer rates, but for gaming I would probably be happy with a device 1/5th as fast, because I now spend a lot of time just past the loading screen, waiting for the other players to synchronize. A cheap SSD would still be faster than any spinning platter, and I could keep this superfast one for more time-sensitive, seek-heavy duties.

By cheap, I mean less than $0.75 per GB. Right now, something like my Velodrive costs over $5.00 per gig, while consumer-grade SSDs hover near $2.00 per gig. This makes a comfortable 500GB device absolutely unaffordable for all but the most dedicated hardware nuts, but bring it down to the price of a decent GPU, and people will budget for it as part of their next system build. The popularity of Seagate's "hybrid" drives is proof of this, though their usefulness is sorely limited to OS and application launch patterns. Modern games overwhelm the Seagate's 4GB SSD cache, unless you RAID a few of them together.

Re:they still need to be a lot bigger now 500GB an (1)

DarkXale (1771414) | more than 2 years ago | (#36746464)

Most of the games, when I check, use at most 6GB, not 10-15. Sure, there are a few titles in that range, but certainly not the majority.

Re:they still need to be a lot bigger now 500GB an (1)

WuphonsReach (684551) | more than 2 years ago | (#36747444)

a) Stop being a data pack rat.

b) Prioritize. Do you really need game XYZ, which you play about once a quarter, installed? Why not 7Zip the install folder to another drive, then restore it when you actually do want to play. Or if you have Steam, 7Zip it for posterity, offload it to an external drive, then uninstall/reinstall using Steam. Or put the less frequently used data on an older, slower, larger disk.

Consumer-grade SSDs are finally a bit below $2.00/GB. And WD still makes those 10k RPM SATA drives (which are still pretty good for a lot less than an SSD). I have an SSD in the laptop and the 10k RPM drive in the desktop; both are about equal in terms of feel (although it's possible to bury the 10k RPM SATA).

Re:they still need to be a lot bigger now 500GB an (0)

Anonymous Coward | more than 2 years ago | (#36745310)

10 GB is easily enough for today's OS with most programs

Are you serious? Windows 7's listed minimum requirements are 16 GB for 32-bit and 20 GB for 64-bit. True, a completely bare install won't actually use 100% of that, but once you get a couple of service packs and Office on there, I guarantee you'll be over 10 gigs. My first-gen iMac, circa 1998 and running OS 8.6, came with a 10 GB drive which I had no problem filling with just Kid Pix.

On my current box, I'm using up 120 GB on my system drive. All the "data storage"-type heavyweight items (music, movies, games, unsorted torrents) I have on another disk. Now, some of that 120 GB is probably stuff that could be moved, but there's no way it could be cut down to 10 or even 20 gigs without some serious deletion of apps I use. I wouldn't even consider anything under 40 GB for my system drive, and that's if I didn't care about my games. Add in the games and that means I'll be requiring more like 80 gigs out of an SSD.

I don't consider myself a heavy user, and in this age of 1080p digital camcorders and 8 megapixel point-and-shoots, 10 GB won't even cut it for Grandma.

Re:they still need to be a lot bigger now 500GB an (1)

ShakaUVM (157947) | more than 2 years ago | (#36745654)

>>While I'm certainly not going to argue why it'd be nice if SSDs increased in capacity (given reasonable price points), I must inquire, why the need?

It's a pain in the ass spanning two different drives, and having to do a lot of this stuff manually. Moving your user directory from your fast C:\ SSD to your large D:\ HDD, and making symlinks so that everything works correctly is easy enough. Moving your Steam folder to D:\, and then individually moving certain games over to C:\ and symlinking everything to work - EVERY TIME YOU INSTALL A NEW GAME - is a total pain in the ass.
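
That per-game chore can at least be scripted. A minimal sketch in Python, with made-up example paths; on Windows, creating the directory symlink needs administrator rights (or Developer Mode), and Steam should be closed while the folder moves:

    import os
    import shutil

    # Move one game folder from the HDD to the SSD and leave a symlink behind
    # at the old location so Steam still finds it. Paths are examples only.
    def move_and_link(old_path, new_path):
        shutil.move(old_path, new_path)                            # relocate the game
        os.symlink(new_path, old_path, target_is_directory=True)   # old path -> new home

    move_and_link(r"D:\Steam\steamapps\common\Portal 2",
                  r"C:\Games\steamapps\common\Portal 2")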

I keep a fairly tight lid on what games I have installed on Steam - uninstalling them once it becomes clear to me that I have no desire to play a certain game again (plus I can always reinstall from Steam later), but even still I have 83.4GB in my steamapps directory.

A lot of stuff installs to C:\ whether you ask it to or not, and moving a lot of things Windows expects to find in C:\ to D:\ tends to break Windows, even if you symlink C:\Program Files to D:\Program Files. So on my 60GB SSD C:\ drive, 50GB is already used.

So, *at a minimum* I'd need a 130GB SSD, with a 240GB SSD being the obvious purchase choice so that I have room for new apps. This will run between $400-$500 at current prices, which I feel is a bit too high.

Re:they still need to be a lot bigger now 500GB an (1)

WuphonsReach (684551) | more than 2 years ago | (#36747464)

Consider a 10k RPM SATA or 10k RPM SAS drive instead. Access times are much better than a 7200 RPM drive's, without prices being unreasonable.

Just make sure you have good airflow...

Re:they still need to be a lot bigger now 500GB an (0)

Anonymous Coward | more than 2 years ago | (#36745706)

10 GB for an OS with most programs?

I know you lot like your Linux, but once I have installed Win7 together with my favorite assortment of CAD and Office tools, I use about 30 GB. Add a system-managed swap file and a decent overhead for wear leveling, and 60 GB is the smallest appreciable OS drive for me.

Re:they still need to be a lot bigger now 500GB an (0)

Anonymous Coward | more than 2 years ago | (#36746162)

While I'm certainly not going to argue why it'd be nice if SSDs increased in capacity (given reasonable price points), I must inquire: why the need? 10 GB is easily enough for today's OS with most programs (3 GB or less with some sacrifices), but let's say bloat increases that requirement five fold, and that SSDs resetting the price point per gigabyte doesn't discourage that bloat. Googling indicates a modern game requires something like 15 GB to install (...seriously?), so what are you trying to do? Install every game you own? Never uninstall anything? Or store vast amounts of media (bitrate of 8 MB/sec continuous read) on a drive capable of 200 MB/sec throughput / 4 ms latency with the obvious tradeoff of price per gigabyte? Get a drive that bests everything in speed, latency, and capacity for the same price as the older technology?

Seriously? My Windows 7 Professional 64-bit directory alone is a shade under 20gb in size. My Program Files {,(x86)} and Program Data directories are 37.3gb in size and I only have a dozen or so games installed (Steam directory is 22gb :\). For my uses, a 128gb ssd with a 1-3tb spinning drive (for Users directory and all music/dvds/data) would be great and if anyone asked me to suggest drives for them, I would be suggesting that for them too.

Larger SSDs would be nice to have if they drove down the price points for the smaller drives. Eventually, though, we will all be using solid state drives for PCs, although they will probably not be based on NAND memory.

Re:they still need to be a lot bigger now 500GB an (2)

Joce640k (829181) | more than 2 years ago | (#36746610)

10 GB is easily enough for today's OS with most programs (3 GB or less with some sacrifices),

You haven't tried Windows 7 then? I don't think you can even install it in 10GB. I'd say 25GB is a bare minimum for Windows 7 plus a few applications.

Re:they still need to be a lot bigger now 500GB an (1)

dgatwood (11270) | more than 2 years ago | (#36748144)

I don't know about the other folks, but...

C. Store vast amounts of media.

I like to have my collection of photos with me on my laptop. It's about 80 gigabytes currently, and growing at 10-20 gigabytes per year.

Also, my OS only takes 7 GB or so (including all the various third-party graft-ins that I use), but the applications I use add a whopping 34 gigabytes of additional storage requirements, and that's without a lot of optional stuff installed. So just to carry the software I have installed on my laptop plus my photos would fill 121 gigabytes with no additional files at all. Add to that my current Photoshop projects at almost 4 GB, musical compositions at 10 GB, etc. and you can quickly see why the commonly available 128 GB sizes just don't cut it.

I'd love to move to an SSD for the added reliability, but I'm currently using a 500 GB hard drive, and am rapidly getting annoyed waiting for terabyte capacity to become thin enough to fit in a laptop so I can upgrade. Call me when SSDs can hold a terabyte for under a grand, and we'll talk. Until then, they're toys as far as I'm concerned.

Re:they still need to be a lot bigger now 500GB an (1)

Mashiki (184564) | more than 2 years ago | (#36744710)

OCZ sells 500GB to 1TB SSDs; the problem I seem to keep hearing is that once you pass the 256GB range, the stability of the drives falls through the floor. Now that might be chance, or a pile of 'first adopter issues' for the new types of drives with larger space. But for the most part, 60GB is more than enough for the average person. One problem with SSDs, though, is that some games simply don't play nice with the super-high speed of SSDs, even more recent games like FONV and DA2. They'll actually run slower in some cases.

Though I have considered picking up a nice 120GB drive to replace my 60GB drive, simply because I like the damn near instant load times for some of my games. Plus, mine is from the tail end of the first-generation SSDs, and even with over 2 years on it, it's still working great.

Re:they still need to be a lot bigger now 500GB an (0)

Anonymous Coward | more than 2 years ago | (#36744814)

That has to be first adopter issues. I remember being told CD drives faster than 8x were prone to spinning out and having the CD explode.

Re:they still need to be a lot bigger now 500GB an (2)

SenseiLeNoir (699164) | more than 2 years ago | (#36747768)

I recently built a system with a 120GB SSD and a 1 terabyte HDD. I installed Win 7 and some important applications on the SSD, and installed all other applications, as well as the user folders and swap, on the HDD. I get pretty much no difference in performance compared to a full SSD system, whilst still having capacity at a price I can afford.

A friend of mine has done a similar thing with Linux and he is getting pretty good results in the performance stakes too.

Re:However, something important to keep in mind (1)

bertok (226922) | more than 2 years ago | (#36744636)

The same is not true on servers, of course, the heavier random load makes IOPs a big deal in various servers (databases particularly).

Did you actually read the article and look at the numbers?

SSDs stomp all over mechanical drives for random IO throughput, by 3 orders of magnitude or more!

My old SATA II SSD outperformed a 48-spindle enterprise SAN volume for real-world database performance, and my current SSD is at least 3x faster!

To perform complex schema changes I would often make a copy of the production database onto my laptop, perform the IO intensive operations like re-indexing on there, and then copy the result back to production, because that way it only took a few tens of minutes instead of a day or more!

Re:However, something important to keep in mind (0)

Anonymous Coward | more than 2 years ago | (#36744690)

Did you actually read the comment and respond to what GP was saying?

You don't notice much, if any, difference between a lower and higher end SSD on the desktop.

The same is not true on servers, of course, the heavier random load makes IOPs a big deal in various servers (databases particularly).

Re:However, something important to keep in mind (1)

gman003 (1693318) | more than 2 years ago | (#36744712)

You seem to have misunderstood. What he was saying was that for desktops, the difference in speed between different SSDs is negligible, not that the difference in speed between an SSD and a HDD is negligible.

Re:However, something important to keep in mind (0)

Anonymous Coward | more than 2 years ago | (#36745100)

Absolutely not true. Lower-end SSDs have issues with latency and reliably that make them worse than a HDD.

Re:However, something important to keep in mind (0)

Anonymous Coward | more than 2 years ago | (#36745114)

Err, reliability. Sorry, that was a typo.

Re:However, something important to keep in mind (5, Interesting)

Solandri (704621) | more than 2 years ago | (#36745502)

At this point, all SSDs are basically "fast enough" for desktop usage. You notice a major difference between an SSD and a HDD. You don't notice much, if any, difference between a lower and higher end SSD on the desktop.

A large part of the reason SSDs are faster than HDDs is their low latency (which has a big impact on small file reads/writes). However, there's another big reason you don't notice much difference between lower and higher end SSDs: we're using the wrong metric.

SSD and HDD benchmarks are almost universally given in MB/s. The problem is, people don't perceive speed in MB. They perceive it in seconds. The computing tasks you need to get done are almost never "I can wait 1 second. How much data can my computer crunch?" They're of the type "I need to crunch 1 GB of data. How many seconds will that take?" So the correct metric we should be using is s/MB.

But it's the same number! Why should this make a difference? Because when you invert a metric, the big numbers become small numbers, and the small numbers become big numbers. E.g., say you have an HDD which can read 100 MB/s, a cheap SSD which can read 250 MB/s, and an expensive SSD which can read 500 MB/s. So in 1 second, the HDD reads 100 MB, the cSSD 250 MB, and the eSSD 500 MB. Expressed in MB/s, you gain 150 MB/s switching from HDD->cSSD, and a whopping 250 MB/s switching from cSSD->eSSD. Switching from cSSD->eSSD gives you well over 1.5x the benefit of switching from HDD->cSSD! So the extra money for the expensive SSD is definitely worth it! Right?

Hold on. Invert to s/MB and say you need to read 1 GB. The HDD takes 10 sec, the cSSD 4 sec, and the eSSD 2 sec. Switching from HDD->cSSD saves you 6 seconds. Switching from cSSD->eSSD only saves you 2 sec. So in terms of time you spend waiting, the HDD->cSSD switch saves you 3x as much time as the cSSD->eSSD switch. The vast majority of your time saved can actually be obtained from the switch to the cheaper SSD. The next step switching to the expensive SSD only gives you a marginal improvement. (Even if you insist on using relative measures of time, the cheap SSD still wins. 10 sec to 4 sec is a 60% reduction in time. 4 sec to 2 sec is only a 50% reduction in time.)
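
A quick sanity check of that arithmetic in Python, using the same made-up drive speeds from the example above:

    # Same example figures as above: 100, 250, and 500 MB/s, reading 1 GB.
    drives = {"HDD": 100, "cheap SSD": 250, "expensive SSD": 500}  # MB/s
    data_mb = 1000

    seconds = {name: data_mb / rate for name, rate in drives.items()}
    print(seconds)  # {'HDD': 10.0, 'cheap SSD': 4.0, 'expensive SSD': 2.0}

    print(seconds["HDD"] - seconds["cheap SSD"])            # 6.0 s saved
    print(seconds["cheap SSD"] - seconds["expensive SSD"])  # 2.0 s saved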

Anandtech basically stated as much [anandtech.com] in a recent SSD review. They admitted that in real world use (i.e. benchmarks measured in seconds), there really isn't much difference between the different SSDs. But reviews with benchmarks showing all products having nearly the same result don't get people coming back to read more reviews. So to hype people up, reviewers invert the scale and measure in MB/s to exaggerate small differences. Those differences are, for the vast majority of people, so small as to be nearly meaningless in their real-world computer use.

The same thing crops up with fuel mileage in cars. Fuel consumption is actually gallons per mile. But because the U.S. measures it in miles per gallon, it exaggerates the benefit of high mileage vehicles. If you ask a dozen people which saves more gas, switching from a 14 MPG SUV to a 25 MPG sedan, or switching from a 25 MPG sedan to a 50 MPG hybrid, I will bet nearly all of them will say switching to the hybrid saves more gas. After all, 50-25 = 25 MPG improvement, while 25-14 = only an 11 MPG improvement. But if you drive 100 miles:

14 MPG SUV = 7.1 gallons used
25 MPG sedan = 4 gallons used
50 MPG hybrid = 2 gallons used

Surprise. The 11 MPG improvement switching from the SUV to sedan saves you 3.1 gallons per 100 miles driven, while the 25 MPG improvement switching from sedan to hybrid only saves you 2 gallons. The metric we should be using is GPM, not MPG. The rest of the world measures fuel consumption in liters per 100 km for this reason. (A consequence of this is that if we as a nation wish to lower our fuel consumption, we should actually be concentrating on discouraging people from buying SUVs they don't need, rather than encouraging them to buy hybrids. Getting 2 people to switch from SUVs to sedans is roughly worth getting 3 people to switch from sedans to hybrids.)

Re:However, something important to keep in mind (1)

hahn (101816) | more than 2 years ago | (#36745748)

Good post and interesting point. I wish I had mod points, but I hope someone else mods you up.

I understand (1)

Sycraft-fu (314770) | more than 2 years ago | (#36745878)

That is part of the reason I wanted to note this for people. Right now, any SSD on the market is fast enough for the desktop, and little is gained from faster ones.

Also, there are three other issues at play you didn't consider:

1) For speed, access time can be more important than raw transfer rate, and that really doesn't improve much with these higher-end drives. They all tend to be in the couple hundreds of microseconds. That's great and way better than HDDs, but since it does not improve, you don't see a user-experience improvement even for large differences in throughput. You can take a 200MB/sec drive and compare it to a 500MB/sec one and not find the improvement you'd expect on most things, because the access time has not improved (there's a rough back-of-the-envelope model of this after this comment).

2) They are fast enough that frequently, the disk isn't what you are waiting on anymore. You end up waiting on things like the processor to deal with the data it has loaded. This is particularly true since many apps aren't that optimized for IO. They'll be single-threaded on their IO tasks, because why not? The disk was always the bottleneck, no need to optimize. So no matter how fast you made things on the SSD, you'd see no improvement.

3) For many things, you are at the limits of human perception. On my system, many of my apps launch "immediately." Now by that I don't mean literally with no delay; I'm sure there is some delay, I just can't perceive it. I click the button and the app is there. Further increases would be meaningless since it has already exceeded my perception. If the app is launching in 50 milliseconds, making it launch in 1 wouldn't matter. Sure, 50 times as fast, but I can't tell.

Hence why I say to people: slow SSDs are fine for the desktop. Mine actually are slow SSDs; they are WD SiliconEdge drives which are around the bottom in most benchmarks for SATA 2 drives and nothing compared to the new SATA 3 ones. Yet they are still fast enough that they're no longer the problem. I don't wait on my disks anymore, which is really what it is all about.
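
The access-time point in item 1 can be put into a crude model: total load time is roughly (number of files x access latency) + (total bytes / throughput). The latency and file sizes below are illustrative guesses, not measured values, but they show why a big jump in throughput barely moves the needle on small-file workloads:

    # Crude small-file load model: per-file latency plus streaming time.
    def load_time(files, file_kb, latency_s, throughput_mb_s):
        total_mb = files * file_kb / 1024
        return files * latency_s + total_mb / throughput_mb_s

    for rate in (200, 500):  # MB/s, same 0.2 ms access latency for both drives
        t = load_time(files=10_000, file_kb=8, latency_s=0.0002, throughput_mb_s=rate)
        print(f"{rate} MB/s drive: {t:.2f} s")
    # 200 MB/s drive: 2.39 s
    # 500 MB/s drive: 2.16 s  (2.5x the throughput, ~10% faster overall)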

Re:However, something important to keep in mind (1)

hotrodent (1017236) | more than 2 years ago | (#36746120)

What a brilliantly written post. I've always had a problem with the [distance] / [fuel used] metric but couldn't explain why. Now I understand why, and you saved me a fairly useless upgrade cost to my SSD! Now if only /. would increase the [modpoints] / [user] ratio.....

Re:However, something important to keep in mind (2, Insightful)

Anonymous Coward | more than 2 years ago | (#36746184)

Surprise. The 11 MPG improvement switching from the SUV to sedan saves you 3.1 gallons per 100 miles driven, while the 25 MPG improvement switching from sedan to hybrid only saves you 2 gallons

But (gah!) you're just making the same error you're complaining about - looking at absolute changes rather than relative ones! Switching to the hybrid does save you more gas, when considered as a percentage of how much gas you were using before.

Maybe I'm just strange, but I've never considered such figures in the terms you're going on about, so it just looks like a strawman argument to me.

Besides, all that number waffle is entirely academic, since no-one ever faces the choice of (Switching from A to B) OR (Switching from B to C) - they face the choice of (Switching from A to (B OR C))

Re:However, something important to keep in mind (1)

kanweg (771128) | more than 2 years ago | (#36746310)

My primary reason for having a car is to cover distance, not to burn fuel. So, to me the proper metric IS distance per volume of fuel. When you want to focus on cost advantage, your way is fine. When you want to calculate how to extend the number of years gas can be used for combustion vehicles before we run out of oil, MPG is the right metric.

GPM is shrouding the inefficiency of gas guzzlers. With MPG you can quickly see that twice the mileage is twice as efficient.

Bert

Re:However, something important to keep in mind (1)

dargaud (518470) | more than 2 years ago | (#36746782)

GPM is shrouding the inefficiency of gas guzzlers. With MPG you can quickly see that twice the mileage is twice as efficient.

The distance to your work doesn't change when you change your car. So what really interests you is: how much money will you save by the end of the trip if you have a thriftier car? And that's a direct measure of GPM. MPG is useless.

Re:However, something important to keep in mind (1)

sifi (170630) | more than 2 years ago | (#36746332)

This is one of my pet hates too - it should really be GPM (Gallons per Mile); probably some conspiracy with the motor manufacturers. On another slightly related note: My car gives me an average MPG for each trip. If I do a regular trip (say to work), and I want to figure out the 'cheapest' route, I want to know the 'total fuel used' per trip. If I go for the route with the lowest average MPG figure, it might be a longer route with less stopping and starting, which actually uses more fuel overall. Of course, it gives me the distance too so I can work it out, but why not give me the 'total fuel used' information too?

Re:However, something important to keep in mind (1)

DarkXale (1771414) | more than 2 years ago | (#36746516)

I don't know about conspiracy. Seems standard to measure in liters per 100km on European cars.

Re:However, something important to keep in mind (4, Interesting)

Rennt (582550) | more than 2 years ago | (#36746342)

The 11 MPG improvement switching from the SUV to sedan saves you 3.1 gallons per 100 miles driven, while the 25 MPG improvement switching from sedan to hybrid only saves you 2 gallons.

Classic mistake. You can't make a comparison without a baseline.

So let's use GPM.
4 / 7.1 = 0.56
2 / 7.1 = 0.28

OK, what about MPG?
25 / 50 = 0.5
14 / 50 = 0.28

Doesn't make a lick of difference if you use X/Y or Y/X. Switching to a hybrid will save you nearly twice as much fuel as switching to a sedan would. The numbers don't lie.

Re:However, something important to keep in mind (2)

Ant P. (974313) | more than 2 years ago | (#36746690)

The metric we should be using is GPM, not MPG.

You mean l/km.

Re:However, something important to keep in mind (1)

misexistentialist (1537887) | more than 2 years ago | (#36748106)

The difference would probably be noticeable when manipulating large video files, just like the best mpg would be significant if you drove 1000s of miles or operated a fleet of vehicles.

Re:However, something important to keep in mind (2)

hairyfeet (841228) | more than 2 years ago | (#36746430)

The problem with SSDs is that the tech isn't really ready for primetime IMHO, unless you just have some money burning a hole and don't care about the data they'll have on them. Atwood at Coding Horror even says SSDs should be judged on a hot/crazy scale [codinghorror.com] since you are dealing with a device that gives serious performance at an INSANE failure rate. From the looks of things they may be full of shit on those MTBF numbers.

All I know is I have a couple of gamer customers for whom the benchmark is God and both went SSD, not cheap shit either, the biggest most expensive they could find. With BOTH drives what happened is one day they flipped the switch and....nothing. That's it, just dead. No warning, no SMART, just tits up DOA bye bye data. Needless to say they weren't happy.

So I think I'll let others blow the crazy money on a drive that may only last a year, since with the big fat caches on the new HDDs, combined with plenty of RAM for Windows 7 to Superfetch everything, things seem to me more than fast enough for my customers. I just can't see myself spending that kind of money and then having to be drawn up in a knot or do backup after backup just to keep from having to worry about flipping the switch and watching my data go poof. Faster is nice, but not when it is going really fast right off a cliff.

Re:However, something important to keep in mind (1)

after.fallout.34t98e (1908288) | more than 2 years ago | (#36747538)

Take notice that Atwood says they are worth it even with the failure rates he is experiencing. Also consider that he may not be the norm when it comes to failure rates.

I purchased my X-25M G2 the day it became available on Newegg (mid-2009 sometime). Several of my friends also have various drives (all still running with no indications of problems):
X-25M G2 160GB
Vertex 2 120GB
another Vertex 2 120GB
several others that I don't know the names of offhand. All were purchased immediately when they first came to market.

The point is personal experiences are not indicative of statistical means.

Re:However, something important to keep in mind (3, Insightful)

GooberToo (74388) | more than 2 years ago | (#36747250)

You notice a major difference between an SSD and a HDD.

I completely agree. Most report total drive failure in 6-18 months with SSDs, with absolutely no warning of impending catastrophic failure. So ask yourself: are you ready for complete data loss every 6-18 months, while paying an extreme premium for the privilege? Most recommend NOT using these as your primary device, but rather using a smart caching setup, which uses the SSD as an accelerator while still using the HDD as your primary, dramatically more reliable, long-term storage medium. Of course, in either case, a sane, complementary backup strategy should be employed.

Why no testing with pci-e SDD cards next to the (3, Interesting)

Joe_Dragon (2206452) | more than 2 years ago | (#36744468)

Why no testing with pci-e SDD cards next to the Sata-3 disks?

also should test with a HIGH END SAS / SATA raid card.

Re:Why no testing with pci-e SDD cards next to the (0)

Anonymous Coward | more than 2 years ago | (#36744566)

pci-e SDD cards

Can I point out what's wrong with this post or do you think the internet will berate him enough for it?

Re:Why no testing with pci-e SDD cards next to the (1)

gman003 (1693318) | more than 2 years ago | (#36744730)

Other than a few minor mistypings (SDD instead of SSD, pci-e instead of PCIe), I see nothing [newegg.com] wrong [storagesearch.com] .

Re:Why no testing with pci-e SDD cards next to the (1)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#36744904)

Just while we are on the subject, might as well point out that PCIe SSDs come in several rather different species:

The crudest (and generally cheapest) are essentially just a PCIe RAID controller, with the guts of two or more SATA SSDs soldered to the same board, or to daughtercards. Other than the SATA signals being routed over traces rather than cables, these are basically indistinguishable from discrete SSDs. If you are lucky, the vendor at least picked a RAID chipset nicer than whatever you were planning on using; if you are less lucky, not so much. At least you can boot off of them, and their capacities tend to be high.

Next up are architecturally similar to the previous; but with greater customization of the RAID firmware(possibly some degree of TRIM awareness/other SSD relevant considerations). Still basically the same; but not quite as rushed out the door.

Then you get the ones that are not, and don't bother to pretend to be, HDD controllers. These tend to be Very Expensive and not bootable; but they run like a bat with an expense account whose escape from hell depends on the speed of his Big Serious Database.

Re:Why no testing with pci-e SDD cards next to the (3, Informative)

ub3r n3u7r4l1st (1388939) | more than 2 years ago | (#36744626)

Right now the TRIM command doesn't pass through RAID chips. So unless these SSDs come with robust garbage collection algorithms, I wouldn't put them in RAID.

Re:Why no testing with pci-e SDD cards next to the (2)

DigiShaman (671371) | more than 2 years ago | (#36745172)

I was about to correct you regarding Intel's support for TRIM using RAID. But when I went to double-check, Intel corrected their previous statement.

Intel® Rapid Storage Technology 9.6 supports TRIM in AHCI and RAID modes for drives not part of a RAID volume. A correction was filed to update the information in the Help file, which stated TRIM was supported on RAID volumes.

Solution ID: CS-031491
Date Created: 24-Mar-2010
Last Modified: 05-May-2011

http://www.intel.com/support/chipsets/imsm/sb/CS-031491.htm [intel.com]

Re:Why no testing with pci-e SDD cards next to the (4, Interesting)

billcopc (196330) | more than 2 years ago | (#36745286)

TRIM is nowhere near as big an issue as people think. GC works fine on both Indilinx and Sandforce units. Realistically, once you start talking about RAID, the modest and highly situational performance gains from TRIM become irrelevant. Even with TRIM support, the block erase still happens in the background, so there is some delay before write performance is restored. If you're in a heavy rewrite scenario, you gain nothing as the drive doesn't idle long enough to handle those deferred erases anyway. And if you're not a heavy rewriter, the GC will take care of it sooner or later. It's all so moot.

My Revodrive X2 hits 650mb/sec with ease. My new Velodrive goes up to about 1000mb/sec, despite having a gazillion apps and games installed and my careless use of my "desktop" folder as temp space. The bottleneck is the shitty Silicon Image RAID chip, not the lack of TRIM. A very interesting product on the radar is the new Angelbird Wings, another PCI-E SSD with some cool features not found in the other brands, like mounting ISOs as virtual CDs at boot time, and a built-in Linux/XFCE distro for management/partitioning. They're claiming 900mb to 1000mb/sec speeds on the 4-channel model.

Still think TRIM is the deal-breaker ?

Re:Why no testing with pci-e SDD cards next to the (0)

Anonymous Coward | more than 2 years ago | (#36744944)

Why no testing with pci-e SDD cards next to the Sata-3 disks?

also should test with a HIGH END SAS / SATA raid card.

Comparing these against PCIe cards is useless, not only because the PCIe cards continue to destroy SATA SSDs on performance, but also because they cost anywhere from 10x - 100x as much.

Re:Why no testing with pci-e SDD cards next to the (0)

Anonymous Coward | more than 2 years ago | (#36745298)

I get the cost argument, but am missing how PCIe would destroy SSD drives. You have the PCIe bus connected to NAND flash chips via some controllers/interface logic, and the latter is what will have the task of policing the traffic to & from the memory. And the access times of a NAND flash - assuming 100ns i.e. 10MHz, would take

What is PCI now used for? Graphics is more often built-in. Ethernet controllers and an RJ-45 port are built into the motherboard. Sound is normally built in, unless one wants a high-end multimedia sound card. One could put in a Wi-Fi controller for PCs, depending on some situations. But other than that, I hardly see why PCI buses exist. Looks like the ideal place to put an SSD, thereby freeing up SATA ports for things like a DVD/Blu-Ray drive.

Re:Why no testing with pci-e SDD cards next to the (0)

Anonymous Coward | more than 2 years ago | (#36745366)

Oops, And the access times of a NAND flash - assuming 100ns i.e. 10MHz, would take a few PCIe cycles for a read operation, but any such operation can be buffered by the interface

SATAIII is great, but unstable (5, Informative)

Anonymous Coward | more than 2 years ago | (#36744590)

Before getting excited and rushing off to buy a SATAIII SSD, bear in mind that there are currently some serious stability issues in some of the drives. I recently bought a Corsair Force 3 120GB, only to discover that many, many (as in most) customers are experiencing problems with system hangs and BSODs. I've been lucky enough to only have the drive lock up once a day after a full day of use, but plenty of folks only last about an hour. The old drives were recalled, but even the replacements are having the same problems. There's a big support thread at http://forum.corsair.com/forums/showthread.php?t=96333, and plenty more just like it in the same forum.

Corsair seems to think this is an issue with the SandForce controllers, which is reasonable considering other big names (OCZ, for instance) are having the same issues with some of their new drives. Just be sure to do your research first.

Re:SATAIII is great, but unstable (2)

theshowmecanuck (703852) | more than 2 years ago | (#36745232)

I'm not going to rush out and buy any SSD until the price point comes WAY down. Right now my tried and true SATA HDDs work just fine. And I like my wallet performance better when it has more dollars in it because I didn't go right out and buy the newest stuff when the old stuff still does the job. Maybe in a couple more years when the price is better, and when I actually need to replace a drive because it fails or I build a new workstation.

Re:SATAIII is great, but unstable (0)

Anonymous Coward | more than 2 years ago | (#36746802)

That sounds reasonable. However, if someone is building a new, fast computer for more than 1000 dollars, it is probably just foolish not to buy an SSD, because HDDs are absolutely the slowest component in a PC.

Re:SATAIII is great, but unstable (1)

after.fallout.34t98e (1908288) | more than 2 years ago | (#36747676)

You obviously haven't had the pleasure of seeing your compile times drop 10x (or you aren't doing work on something big enough to notice).

The projects I work on take about 12 minutes to compile on a 7200 rpm drive; 8 on a 10k and 1 on my intel SSD. Visual Studio itself shows noticeable performance gains as well. These improvements are enough to translate to extra features in a release cycle.

Re:SATAIII is great, but unstable (1)

MartinSchou (1360093) | more than 2 years ago | (#36745920)

Before getting excited and rushing off to buy a SATAIII SSD, bear in mind that there are currently some serious stability issues in some of the drives.

I see ... so a problematic product means that it's the interface that's problematic? By that standard you must really hate computers. There have been issues with chipsets, CPUs, RAM, graphics cards, motherboards, PSUs, optical drives, sound cards, USB controllers etc.

That doesn't mean you can't be using Slashdot of course. But there have also been problematic phones (both smart and dumb), tablets etc ...

So how are you even on Slashdot again?

Ah, right. Because you're an idiot who seems to think that a problem with a specific product is a problem with a general interface.

Re:SATAIII is great, but unstable (1)

Dracolytch (714699) | more than 2 years ago | (#36747668)

I've had issues with my OCZ Agility 3 drives. OCZ has released an updated firmware, which thankfully seems to resolve many of these issues. However, for some folks, putting in this firmware and fully eliminating the BSOD issue requires an involved fresh OS install and configuration process.

Re:SATAIII is great, but unstable (1)

Rockoon (1252108) | more than 2 years ago | (#36747774)

Before getting excited and rushing off to buy a SATAIII SSD, bear in mind that there are currently some serious stability issues in some of the drives.

Many of the reports for OCZ that I have seen end with "I put the machine to sleep/hibernate, and now it doesn't see the drive at all."

Since other manufacturers also have similar reports, the issue is almost certainly whatever is common to all of them: a straight component failure. And didn't Intel start using someone else's controller, and aren't they also experiencing issues now?

Compressible? (0)

Anonymous Coward | more than 2 years ago | (#36744628)

Er... So these drives try to compress files on-the-fly? If not, what does data compressibility have to do with their transfer speeds?

Re:Compressible? (1)

Cinder6 (894572) | more than 2 years ago | (#36744812)

Sandforce drives do this, yes. As do probably some others.

Re:Compressible? (0)

Anonymous Coward | more than 2 years ago | (#36744846)

So are they listing their real capacity (i.e., can a "128 GB" model actually store 128 GB of uncompressible data) or their "average" capacity (factoring in the compression)?

Re:Compressible? (1)

Skuto (171945) | more than 2 years ago | (#36745214)

The real capacity, or actually, even less than that.

SSDs need spare space to work. So if you buy a 128GB drive, it's actually got a full 128GB of flash, unlike say a hard drive, which would really have more like 119GB. But the SSD will advertise 119GB too. The spare space is used for garbage collection. Some vendors reserve even more, as this makes the work of the firmware easier. You can see it in the article.
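
Roughly where those numbers come from, assuming a drive with 128 GiB of raw flash sold as a decimal "128 GB" (the actual reserve varies by vendor and model):

    # Back-of-the-envelope spare-area math for a hypothetical 128GB drive.
    raw_flash  = 128 * 2**30   # 137,438,953,472 bytes of physical NAND
    advertised = 128 * 10**9   # 128,000,000,000 bytes exposed to the OS

    spare = raw_flash - advertised
    print(advertised / 2**30)        # ~119.2 binary "GB", as an OS reports it
    print(100 * spare / raw_flash)   # ~6.9% held back for garbage collection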

Now, if you can compress your data, that means only half the work has to be done writing and reading it. This makes the drive faster. Having to write less data also means there are fewer write cycles, and even more reserve room that can be used for garbage collection.

So instead of using the compression to advertise space that isn't there, it's used, where possible, to increase the lifetime of the drives.

Given the quality of current SSD drives, it may be a moot point; the firmware will eat your data with a driver bug faster than the flash will die.

Re:Compressible? (1)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#36744860)

To the best of my understanding, the answer is "yes and no".

"No" in the sense that they don't go down the old tape-vendor route of "Hey, let's just arbitrarily assume that all customer data can be compressed by half and put a capacity number twice as large on the box!" bullshit, nor do they tempt madness and horrible OS/FS confusion by having apparent disk capacity fluctuate wildly depending on how compressible the data being written are.

"Yes" in the sense that the big challenge facing Flash SSD vendors is the fact that flash can be read in a neat, granular, manner; but can only be erased for re-writing in comparatively large blocks. This means that the controller chips need a RAM cache and/or some reserve Flash not included in the stated capacity, and some clever algorithmic juggling to queue up reads, writes, and rewrites such that the drive doesn't prematurely run out of space because it has all its blocks partially full and nowhere to store data while it consolidates, and such that its speeds remain as high as possible even as it has to occasionally perform slow read/cache/erase/consolidate/write operations to free up blocks. Apparently, the Sandforce controllers do some compression in order to make better use of their limited scratch space, so highly compressible demands will take longer to hit the point where they start grinding on the block consolidation, while incompressible ones will hit that point more or less as fast as their raw size would indicate.

Re:Compressible? (1)

billcopc (196330) | more than 2 years ago | (#36745308)

I'm no SSD engineer, but since compressed data takes up less space and the compression is done in hardware, it results in less data to be written to the NAND flash. Less work to be done = faster I/O completion. Less wear = less work for the garbage collector.

The same is true of spinning platter hard drives. If your CPU can compress data faster than the disk can write it, you can read and write that file more quickly. This is one of the reasons why the Linux kernel is usually stored gzipped, and why things like SquashFS are used on slow media such as LiveCDs and USB flash drives.
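
The effect is easy to see with any general-purpose compressor standing in for the controller (SandForce's actual scheme is proprietary; zlib here is only an analogy):

    import os
    import zlib

    # Same logical size, very different number of bytes that actually need
    # to hit the flash, depending on how compressible the payload is.
    text_like   = b"the quick brown fox jumps over the lazy dog\n" * 25_000
    random_like = os.urandom(len(text_like))   # high-entropy, won't compress

    for name, payload in (("compressible", text_like), ("incompressible", random_like)):
        written = len(zlib.compress(payload, 1))
        print(f"{name}: {len(payload):,} logical bytes -> {written:,} bytes to store")
    # The repeated text shrinks to a few percent of its size; the random data
    # stays the same size (or grows slightly), so it gets no such head start.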

New way of thinking needed (2)

Archangel Michael (180766) | more than 2 years ago | (#36744716)

What is needed is a new way of thinking about memory/storage. More "unix"-like thinking, where the entire Processor/Cache/RAM/SSD/HDD/Cloud/Tape stack is a single flat memory space that is addressed as needed: processor cache for instantaneous use, RAM for immediate use, SSD for near-RAM-fast use, HDD for occasional use, and so on. Where the files (or bits of files) that are needed often are moved closer to the core processor as needed, automatically.

SSD should be built into motherboards. (1)

elucido (870205) | more than 2 years ago | (#36744936)

And every OS should be installable directly into the motherboard SSD chip. It should be as fast as the motherboard allows.

60GB of SSD cache ought to be enough to install any OS.

Re:SSD should be built into motherboards. (1)

thomasdn (800430) | more than 2 years ago | (#36746194)

And every OS should be installable directly into the motherboard SSD chip. It should be as fast as the motherboard allows. 60GB of SSD cache ought to be enough to install any OS.

The problem is: what do you do if the SSD breaks? You have to replace your motherboard as well. Also, if some component on your motherboard breaks, you risk losing the data on your SSD.

Re:New way of thinking needed (0)

Anonymous Coward | more than 2 years ago | (#36745212)

That sounds more like Multics, actually... I can't just dd from "memory portion of mapping space" to "disk portion of mapping space". Solaris/FreeBSD/ZFS does do some abstraction in regards to storage and memory, but this is largely by claiming most of both :).

simple memory hierarchy (0)

Anonymous Coward | more than 2 years ago | (#36745524)

What is needed is a new way of thinking about memory/storage. More "unix"-like thinking, where the entire Processor/Cache/RAM/SSD/HDD/Cloud/Tape stack is a single flat memory space that is addressed as needed: processor cache for instantaneous use, RAM for immediate use, SSD for near-RAM-fast use, HDD for occasional use, and so on. Where the files (or bits of files) that are needed often are moved closer to the core processor as needed, automatically.

You wouldn't need HDD. Expand the BIOS - NOR flash - to contain the barebones OS (not including utilities) - the minimal that Windows, Linux, BSD kernels need to run, and put those parts there. On a PCIe SSD, put all the programs, and other software. As for the data, have that on an external USB HDD or SSD.

Then you have the on-CPU level 1-3 cache, your RAM being your level 4 cache, BIOS/OS kernel as level 5, SSD PCIe as level 6 and the external drive as level 7.

I'll stick with Intel (2)

vsage3 (718267) | more than 2 years ago | (#36744942)

I bought one of Intel's 3rd generation 80GB SSDs back in January and have had zero problems with it. No, it's not as fast as OCZ's drives, but it's reliable. Intel's failure rate is 0.6% while OCZ's is 3% (not sure if that's a per-year figure or something). Why an average user would buy primary storage with a 3% failure rate is beyond me.

(failure rate figure comes from http://www.anandtech.com/show/4202/the-intel-ssd-510-review/3 [anandtech.com] )

Re:I'll stick with Intel (0)

Anonymous Coward | more than 2 years ago | (#36745096)

Because OCZ stands behind their products if there's a problem,

and Intel tells you to go fuck yourself and buy their new chip.

And someone STILL has to pay for Intel's giant ad campaign across the planet. Guess who!

No... no.... fuck Intel. They earned their spot at the bottom of the choices.

Re:I'll stick with Intel (1)

bemymonkey (1244086) | more than 2 years ago | (#36745142)

Re:I'll stick with Intel (1)

Skuto (171945) | more than 2 years ago | (#36745166)

Sure, but a 3% return rate is still FIVE times as much as 0.6%.

We're talking about storage here: data loss. Such numbers are NOT acceptable. How people still dare touch these drives is beyond me.

Re:I'll stick with Intel (0)

Anonymous Coward | more than 2 years ago | (#36745270)

These drives WILL fail, just like any other. That's why you have backups.

Re:I'll stick with Intel (1)

Skuto (171945) | more than 2 years ago | (#36745440)

Sure, in-between restoring the backups you might be able to get some work done :P If the backup disk hasn't failed, too.

Re:I'll stick with Intel (2)

devleopard (317515) | more than 2 years ago | (#36745318)

I burned through 2 OCZ Vertex 3's in like 2 weeks. One ran for a day, then was so screwed the system wouldn't power on... some sort of crazy short. So I took it back, and, lured by the speed, replaced it with the same drive. It was good, but I got random blue screens. Turns out their firmware is garbage for Win 7... supposedly you can update the firmware, but both the Windows approach (had to hook the drive up to a second machine, as it won't update if it's a boot drive... WTF) and their bootable Linux tool reported the firmware was up to date (it had 2.06 - the most recent is 2.09... again, WTF).

Took it back, replaced it with a "slower" SSD. (One random blue screen means I've lost all the milliseconds I'm gaining with a faster drive over the life of the drive.) At the end of the day, my computer has to work, or I won't be able to.

Re:I'll stick with Intel (2)

Jeremi (14640) | more than 2 years ago | (#36745536)

At the end of the day, my computer has to work, or I won't be able to.

At the end of the day you should go home and relax. Don't burn yourself out!

Re:I'll stick with Intel (1)

thegarbz (1787294) | more than 2 years ago | (#36745806)

Why an average user would buy primary storage with a 3% failure rate is beyond me.

It's beyond most people. The vast majority of SSDs I've seen are not used for primary storage. They are simply too expensive for that service. I see them in laptops, and in desktops with the Windows partition and apps installed on them. I've never seen a desktop with an SSD that didn't also have a standard HDD (though I'm sure someone somewhere thinks this is a good idea).

Re:I'll stick with Intel (1)

Rockoon (1252108) | more than 2 years ago | (#36747814)

a 3% per year failure rate would be an order-of-magnitude better than platter drives. Hell, even 3% in the first 3 months would be on par with platters.

I'm missing the Intel 510 here (2)

Skuto (171945) | more than 2 years ago | (#36745190)

It's got a SATA-600 interface and the same Marvell controller as the Crucial m4, but it's significantly faster overall. It should be more competitive with the OCZ offerings, and it's 5 times less likely to eat your data.

Re:I'm missing the Intel 510 here (0)

Anonymous Coward | more than 2 years ago | (#36745416)

By virtue of its controller, the Intel 510 is also unaffected by incompressible data.

ha I tried SSD this weekend == FAIL (0)

Anonymous Coward | more than 2 years ago | (#36745236)

So I'm minding my own business when BAM, a big sale ad, $70 (after rebate) for a Corsair F60, shows up in front of me (yes, I still read the newspaper).

So I go buy it (Fry's is pretty close to my house). Anyway, in my excitement I failed to read some feedback on these things..... HOLY SHIT WHY ARE THESE DRIVES ON THE MARKET???? Nearly ubiquitous resume-from-sleep issues. Given that the whole point was to be faster, having to shut down when done with my computer and then restart later, vs. using S3 (my computer would BSOD resuming with the F60), which works awesome with my Samsung F3 == FAIL.

I was sooo disgusted i returned it and have basically shelved the SSD idea for now. I'm sort of mad at myself for not doing more research but i let my previously always positive experience with Corsair make me lazy this time ... corsair makes some nice stuff but i'm going to be hella skeptical of any SSDs from them for a long time after this.

If I do venture back to SSD territory (which i'm sure i will later) i will be doing a whole lot of checking for reliability / issues before laying out any cash on a SSD in the Future.

Re:ha I tried SSD this weekend == FAIL (1)

Cimexus (1355033) | more than 2 years ago | (#36745896)

Yeah. I've been using a Corsair Force F115 here for the last 5 months or so. Resume from sleep was buggy for me too - a quick poke around the Corsair forums reveals it's a known issue, and unfortunately the last firmware that SandForce submitted to Corsair for testing, which was supposed to fix the issue, didn't pass their quality control. So they are waiting on SandForce to 'try again' with their next firmware.

However, I then realised that putting my computer to sleep wasn't really necessary anymore, given that Windows boots in 7 seconds now (end of BIOS to usable desktop). The time penalty for powering off compared to going to sleep is small now. So I just disabled sleep altogether several months ago and forgot about the whole thing. Until I read your post just now. :)

(I will say that other than having to disable sleep, I love this damn thing. So, so fast...)

Re:ha I tried SSD this weekend == FAIL (0)

Anonymous Coward | more than 2 years ago | (#36746142)

In fairness, when it wasn't BSODing on resume it seemed pretty quick.

But the quickness mainly showed in getting Windows started up. With the way I use my computer (it also serves as an HTPC, since it sits next to our 42" LCD), the lack of working sleep mode was a real showstopper for me. As things stand, the PC can resume, record something, and go back to sleep when it's done. If I had to power down completely, that just wouldn't work, sooo I had no choice but to return it.

Who knows, maybe if I see a good deal on an Intel SSD I'll try again soon?

How's the power consumption? (0)

Anonymous Coward | more than 2 years ago | (#36745262)

Serious question: do these SSDs consume more or less power than conventional HDDs?

For my purposes (remote solar-powered systems), I'm more interested in power savings than in anything else, such as capacity or speed.

Every Watt counts.

Re:How's the power consumption? (1)

hamburgler007 (1420537) | more than 2 years ago | (#36745312)

A whole lot less. When idling they typically run under 100 mW.

Re:How's the power consumption? (0)

Anonymous Coward | more than 2 years ago | (#36745610)

Yeah, there are no mechanical operations driving up power consumption. Reads and writes are plain data transfers, and most of the power drawn by the flash itself goes into writes, i.e. changing 1s to 0s, or, in the case of an erase, resetting all the 0s back to 1s. When the drive is just idling, consumption is lower still.

There is really only one argument against SSDs: their price per GB is far higher than that of HDDs. The reliability issues discussed above are more a function of badly designed controllers than of the NAND flash chips themselves.
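To put the difference into solar-budget terms, here is a minimal sketch using assumed, datasheet-style power figures rather than measurements of any particular drive:

<ecode>
# Rough daily energy budget for one drive in a solar-powered system.
# The idle/active power figures and duty cycle below are assumptions.
hours_idle, hours_active = 22.0, 2.0        # assumed duty cycle per day

ssd = {"idle_w": 0.1, "active_w": 2.5}      # assumed SATA SSD figures
hdd = {"idle_w": 4.0, "active_w": 6.0}      # assumed 3.5" desktop HDD figures

def daily_wh(drive: dict) -> float:
    return drive["idle_w"] * hours_idle + drive["active_w"] * hours_active

print(f"SSD: {daily_wh(ssd):6.1f} Wh/day")
print(f"HDD: {daily_wh(hdd):6.1f} Wh/day")
</ecode>

Under those assumptions the SSD needs well over an order of magnitude less energy per day, which is exactly the kind of margin that matters off-grid.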

6 disks, 6 awards (2)

myspys (204685) | more than 2 years ago | (#36745514)

How come all 6 disks got an award, either "Recommended" or "Editor's Choice"?

Is that the only way to keep everyone happy, and the freebies coming?

Re:6 disks, 6 awards (2)

after.fallout.34t98e (1908288) | more than 2 years ago | (#36747822)

Because the editors are still, in their heads, comparing them to HDDs. It will be a few more years before the wow factor of just how fast these things are wears off and reviewers start comparing them against each other rather than against spinning media.

A question born of ignorance ... (1)

MacTO (1161105) | more than 2 years ago | (#36745938)

... but why are we trying to graft SSDs onto a standard that was presumably developed for magnetic media?

My meager understanding of the situation is that standards like SATA were developed for media with symmetric block sizes (i.e. you read and write the same amount of data at a time), with no real consideration of physical limitations such as a maximum number of writes, and with the assumption that it was impractical to map a disk directly into physical memory (e.g. due to latency).

Ignoring the issue of compatibility with current OSes and BIOSes, wouldn't it be better to develop a bus that is specific to SSDs and presumably closer to being directly addressed by the CPU under contemporary OSes?

Re:A question born of ignorance ... (1)

jasomill (186436) | more than 2 years ago | (#36746414)

Ignoring the issue of compatibility with current OSes and BIOSes, wouldn't it be better to develop a bus that is specific to SSDs and presumably closer to being directly addressed by the CPU under contemporary OSes?

What CPU and OSes do you have in mind? SATA and SAS interfaces are used on "non-legacy" systems that range from embedded systems to mainframes, running dozens of different operating systems, not to mention external drives with USB/FireWire bridge chipsets. Even ignoring economies of scale, "loose coupling" tends to be a good engineering practice, especially given that components in a design typically don't evolve in lock-step.

Moreover, given that the need to support additional interfaces adds cost and complexity, I suspect it'd be more likely for things to move in the other direction — replacing the essentially "storage-only" SATA, not with SATA + SSD port, but with some more general I/O interface like Thunderbolt or USB.

Finally, "naïve block management" of SSDs isn't merely inefficient, it significantly reduces the useful life of the drive, thus it's even more important that some layer of abstraction exists between the physical media and the rest of the system — otherwise you'd wind up with newer drives that require newer drivers, and thus eventually a "cascade of upgrades" — new drivers are only available for a new OS version, new OS version requires newer hardware.

As much as anything, this separation seems like it'd be important for warranty purposes — manufacturers can only warrant a drive if they maintain enough control to have a reasonable grasp on its lifetime.
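To make the abstraction argument concrete, here is a toy sketch of the kind of logical-to-physical indirection a flash translation layer provides. It is purely illustrative: real FTLs also have to deal with page-versus-erase-block granularity, garbage collection, and bad-block management.

<ecode>
# Toy flash translation layer: the host addresses logical blocks, while the
# FTL steers each rewrite to the least-worn free physical block so that no
# single cell gets hammered. Illustrative only.
class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.erase_count = [0] * physical_blocks   # wear per physical block
        self.store = [b""] * physical_blocks       # contents of each physical block
        self.mapping = {}                          # logical block -> physical block
        self.free = set(range(physical_blocks))

    def write(self, logical: int, data: bytes) -> int:
        old = self.mapping.get(logical)
        if old is not None:
            self.erase_count[old] += 1             # stale copy must be erased before reuse
            self.free.add(old)
        target = min(self.free, key=self.erase_count.__getitem__)  # least-worn free block
        self.free.remove(target)
        self.store[target] = data
        self.mapping[logical] = target
        return target

    def read(self, logical: int) -> bytes:
        return self.store[self.mapping[logical]]

ftl = ToyFTL(physical_blocks=8)
for i in range(20):
    ftl.write(0, f"rev {i}".encode())              # hammer a single logical block
print("read back :", ftl.read(0))
print("erase wear:", ftl.erase_count)              # the wear is spread across blocks
</ecode>

The host-visible interface stays a dumb block device the whole time, which is exactly why SATA (or any other generic block interface) can keep working while the media underneath changes.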

Re:A question born of ignorance ... (0)

Anonymous Coward | more than 2 years ago | (#36747460)

Additionally, how come hard drives don't come with an expansion slot for memory sticks, so the user could upgrade the buffer to, say, 2 GB? It seems like a pretty common set of electronics.

My experience with SSDs until now (1)

TheDarkMaster (1292526) | more than 2 years ago | (#36747172)

Contributing to the discussion: I recently bought a 60 GB OCZ Agility 3. Aware that an SSD gives no guarantees about data integrity, I use it only as the operating system drive, and it fulfills that task well. If it fails, reinstalling the operating system is less painful than losing my data (my data lives on a conventional HD).

So far I've had no problems with it (installed on an Intel ICH10R controller in RAID mode, though the SSD itself is not part of a RAID volume, running at SATA2 speeds). I'm just a little disappointed that performance on incompressible data is much worse than the manufacturer claims.

They don't last very long in my experience (0)

Anonymous Coward | more than 2 years ago | (#36747828)

Caveat: I've only been using them for a couple of years in large-scale deployment.

That being said, SSDs have worn out faster than IDE and SATA HDDs did in the same 400-desktop Windows XP business environment.

I'd be interested in hearing if other people have had the same experience. We are using OCZ drives.
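One way to quantify that kind of wear across a fleet is to poll each drive's SMART wear attributes. The sketch below assumes smartmontools is installed and that the attribute names listed actually appear on your drives (they vary by vendor), so treat it as a starting point rather than a finished tool:

<ecode>
# Read wear-related SMART attributes from a drive via smartctl.
# The attribute names are vendor-specific assumptions; adjust per fleet.
import subprocess

WEAR_ATTRIBUTES = ("Media_Wearout_Indicator", "SSD_Life_Left", "Wear_Leveling_Count")

def wear_report(device: str) -> dict:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    report = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] in WEAR_ATTRIBUTES:
            # column 4 is the normalized value (100 = new, falls toward the threshold with wear)
            report[fields[1]] = int(fields[3])
    return report

print(wear_report("/dev/sda"))   # assumed device path
</ecode>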
