
Performance Showdown - SSDs vs. HDDs

timothy posted more than 6 years ago | from the all-those-Ds-at-once dept.

Data Storage 259

Lucas123 writes "Computerworld compared four disks, two popular solid state drives and two Seagate mechanical drives, for read/write performance, bootup speed, CPU utilization and other metrics. The question the reviewer asks is whether it's worth spending an additional $550 for an SSD in your PC or laptop, or plunking down an extra $1,300 for an SSD-equipped MacBook Air. The answer is a resounding no. From the story: 'Neither of the SSDs fared very well when having data copied to them. Crucial (SSD) needed 243 seconds and Ridata (SSD) took 264.5 seconds. The Momentus and Barracuda hard drives shaved nearly a full minute from those times at 185 seconds. In the other direction, copying the data from the drives, Crucial sprinted ahead at 130.7 seconds, but the mechanical Momentus drive wasn't far behind at 144.7 seconds.'"


w00t! (0, Funny)

Anonymous Coward | more than 6 years ago | (#23239336)

First post?

And this was on a standard 4400RPM drive in a ThinkPad.... not that that would affect posting to a web page...

Re:w00t! (-1, Offtopic)

EricR86 (1144023) | more than 6 years ago | (#23239476)

First post?

No sir/madam. If the mods are generous and mod this up, history will record you as a troll, you will be hidden from most filters, and I will be recorded as the first poster.

Re:w00t! (1)

mrsteveman1 (1010381) | more than 6 years ago | (#23240030)

Now all you have to do is cover up the fact that he was ever here at all

Post brought to you by Hans Reiser's torn anus (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23239626)

This post is brought to you by Hans Reiser's shredded anus, which is by now no doubt being passed around the jail house like a pack of smokes. His poor anus probably now resembles a pastrami sandwich that fell apart. I wonder if he'll describe that experience in the passive voice...

Don't forget ... (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23239360)

... to pay your $699 licensing fee you cock-smoking tea-baggers [] .

bad test (5, Insightful)

Werrismys (764601) | more than 6 years ago | (#23239362)

In typical use, most of the time is spent seeking, not reading or writing sequential blocks. Windows XP's disk I/O is especially brain-damaged in this regard (it doesn't even try to order or prioritize disk I/O). Copying DVD images from one drive to another is not a typical use case.

Re:bad test (1)

fricc (410400) | more than 6 years ago | (#23239506)

That was my reaction too, big, long donkey ears for the reviewer...

Re:bad test (4, Interesting)

SanityInAnarchy (655584) | more than 6 years ago | (#23239642)

Consider, also, that when you're doing anything other than the contrived "copy from one device to another"... HD DVD has a minimum guaranteed throughput of something like 30 Mbit/s; Blu-ray needs 50. It looks like the worst numbers on the solid-state devices were still at least some 30 megabytes per second, meaning you could play five Blu-ray movies at once.

Skimming the article, it seems very likely that the person responsible has read just enough to be dangerous (they know the physics of why seeking is slow), but not enough to have a clue what kind of behavior would trigger seeking. The one measure they did take was boot time, during which they acknowledge that Vista does a bunch of background stuff after boot, but they don't measure it.

He did get one thing right, though: they are not exactly living up to their potential. For one thing, there are filesystems explicitly designed for flash media, but you need to actually access the device as flash (and let the filesystem do its own wear leveling). These things pretend to be a hard disk and are running filesystems optimized for a hard disk, so the results are not going to be at all what they could be.
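Putting numbers on the Blu-ray claim above (assuming the article's 30 MB/s worst-case SSD figure and a rough 50 Mbit/s Blu-ray stream rate):

```python
# Rough check of the "five Blu-ray movies at once" claim.
ssd_worst_mb_per_s = 30                    # worst SSD number cited above, MB/s
ssd_mbit_per_s = ssd_worst_mb_per_s * 8    # 240 Mbit/s
bluray_mbit_per_s = 50                     # rough Blu-ray maximum stream rate
streams = ssd_mbit_per_s / bluray_mbit_per_s
print(streams)                             # 4.8 -- call it five simultaneous streams
```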

Re:bad test (1)

peipas (809350) | more than 6 years ago | (#23239938)

Anecdotally, I have a 32 GB SSD in my Dell M1330. I got stuck with Vista on this machine, but in its "User Experience" rating I get a 5.8 for the hard drive. The scale is based on 5.0 being the fastest available at Vista's release. I assume "fastest" refers to consumer machines, but have conventional hard drives somehow become so much more efficient all of a sudden that they meet or exceed this performance?

Re:bad test (4, Interesting)

ThePhilips (752041) | more than 6 years ago | (#23240162)

The XP I/O subsystem is pretty much OK.

The problem with SSDs is that flash-based storage has a much, much larger block size.

While conventional HDDs have a block size of 512 bytes, current SSDs have a block size of 64 kilobytes.

Not only do flash chips write relatively slowly, but if the file system has, e.g., a cluster size of 8K, every write in the worst case also redundantly (re)writes 64K-8K=56K.

The test is realistic, if you want to see how badly most applications can behave with SSDs. But that's going to change as SSDs become more and more commonplace.

If they really wanted to test SSD performance they would have used Linux with jffs2 or the newer logfs, though these two have their own problems.
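The worst case the parent describes, worked out with those figures (64 KB erase block, 8 KB file-system cluster):

```python
# Worst-case write amplification from the parent's figures:
# a 64 KB erase block rewritten for every 8 KB cluster write.
erase_block_kb = 64
cluster_kb = 8
redundant_kb = erase_block_kb - cluster_kb    # 56 KB rewritten needlessly
amplification = erase_block_kb / cluster_kb   # 8x more data written than requested
print(redundant_kb, amplification)            # 56 8.0
```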

Re:bad test (0)

Anonymous Coward | more than 6 years ago | (#23240294)

Here's my real world result:

after 1 month, I switched the SSD back to an HDD because the performance of cached mode Outlook absolutely stunk on the SSD. Dell Latitude D630.

Untested performance... (4, Interesting)

smitty97 (995791) | more than 6 years ago | (#23239400)

Unfortunately there are no comparisons of battery life or speed tests with fragmented files.

Re:Untested performance... (1)

aeskdar (1136689) | more than 6 years ago | (#23239424)

But does this justify the extra money... no.

Re:Untested performance... (1)

MightyYar (622222) | more than 6 years ago | (#23239792)

But does this justify the extra money... no.
That is one thing that cannot be tested - the value of better battery life and faster seek times is subjective. The market will make the best test of whether these things are worth the price.

Re:Untested performance... (1)

Smidge204 (605297) | more than 6 years ago | (#23240340)

Well, the whole point of the test would be to prove that there IS better battery life and faster seek times - objectively.

If objective tests show no real advantage, then any subjective improvement in value is basically self-delusion... probably in an attempt to justify the extra cost to yourself.

If the tests DO show an improvement, then I agree it is more up to the customer to determine whether the extra cost is worth the extra performance.

Re:Untested performance... (3, Interesting)

Ethanol-fueled (1125189) | more than 6 years ago | (#23239484)

...And the picture won't be complete until we have real-world failure data for the solid-state drives.

Definitely not enough data (1)

DrYak (748999) | more than 6 years ago | (#23239750)

...And the survivability of mechanical drives in the ultra-portable form factor (more likely to be dropped or tossed, more concentrated heat problems, etc.).

Although some data from the Palm LifeDrive (featuring a mechanical Microdrive CF module) could answer the drop-survivability question in a small form factor.

So, in short, they managed to produce only a single data point, i.e. bulk speed (well, not exactly: they also mentioned random access from a synthetic test, but no actual real-world application), when users would need about a dozen others, especially in the long-term range (wearout of SSDs vs. damage to HDDs).

Re:Untested performance... (1)

Jeff DeMaagd (2015) | more than 6 years ago | (#23240160)

Unfortunately there are no comparisons of battery life or speed tests with fragmented files.

Is file fragmentation really that big of a problem?

I know at one time I used to defragment a lot, but the difference has always been negligible for me. I only did it with the thought of keeping it "in tune", but even once a year doesn't make much apparent difference in computer performance.

Noise? Heat? (2, Interesting)

pipatron (966506) | more than 6 years ago | (#23239402)

Dunno about the author of this article, but I got an "SSD" (hello, buzzword) to get rid of the noise, the heat, and the annoying spin-up delay. A CompactFlash card doesn't cost eleventy billion dollars either.

Re:Noise? Heat? (1)

NeutronCowboy (896098) | more than 6 years ago | (#23239752)

Not to mention shock-insensitivity and power consumption. Write speed to me is fairly irrelevant by now.

Those Who Write (1)

fm6 (162816) | more than 6 years ago | (#23240344)

Write speed may be irrelevant to the applications you happen to run. But it's pretty relevant to your OS [] .

Combination? (1)

Lord Pillage (815466) | more than 6 years ago | (#23239416)

Could a combination of these two technologies produce a more efficient HD? Let's say you used the HDD for writing data, and then during idle time had the newly written data copied over to the SSD; one could then read from the faster SSD. That way you would get the faster writing and reading speeds of each technology, respectively. I suppose this would be more expensive, but if the SSD were optimized for retrieval of data instead of storing it, maybe this could be optimized further.

Re:Combination? (1)

tab_b (1279858) | more than 6 years ago | (#23239808)

It seems a little bass-ackwards to have the mechanical device buffering for the solid-state one :) Maybe what you need is simply two separate devices in your laptop: a small SSD to hold your OS and apps, say a 32 GB Transcend [] for only $175 or so, and then a low-power traditional drive, set to sleep ASAP, for your data, movies, or whatever else.

Re:Combination? (1)

alan_dershowitz (586542) | more than 6 years ago | (#23239868)

For files that aren't huge, the operating system write caching should already speed up this operation. Regarding what you asked about, that almost sounds like it could be a specialized modification of RAID-1, which would be cool.

Re:Combination? (2, Informative)

T-Bone-T (1048702) | more than 6 years ago | (#23240324)

There are already hybrid drives that have both platters and flash. They cache frequently used files in flash, along with boot files when you shut down.

Buzz (1)

esocid (946821) | more than 6 years ago | (#23239436)

It's nice to know all that buzz is worth ignoring, since I just bought a fancy new 750GB SATA HDD. Even the 16MB-cache drives beat them solidly; I wonder how 8MB and 32MB would compare. It's worth noting they didn't mention seek times, although I'm not sure how those would translate into SSD terms.

What about reliability? (1)

maynard (3337) | more than 6 years ago | (#23239438)

Would an SSD take the hit of a drop better than spinning media? I betcha it would. Also, these are apples-to-oranges comparisons - when was the last time you saw a MacBook Air equipped with a 3.5" 7200rpm Barracuda drive?

IMO: I'm already recommending the purchase of SSDs in laptops for all of our top professionals where I work. And the reason is not performance; it's reliability.

Re:What about reliability? (1)

ColdWetDog (752185) | more than 6 years ago | (#23240094)

Would an SSD take the hit of a drop better than spinning media? I betcha it would ... I'm already recommending the purchase of SSDs in laptops for all of our top professionals where I work. And the reason is not performance, it is for reliability.

Maybe if your top professionals got some counseling, they wouldn't be tossing their laptops all over the place. Just who do you work for anyway? Microsoft?

Re:What about reliability? (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23240104)

Well I hope your resume is up to date...

Not very good reasons... (4, Insightful)

MrKevvy (85565) | more than 6 years ago | (#23239446)

Computerworld compared four disks, two popular solid state drives and two Seagate mechanical drives, for read/write performance, bootup speed, CPU utilization and other metrics.

But of course not the metrics that really matter, the ones SSDs vastly excel at and that make them worth the price for many people: MTBF, power consumption, ruggedness and noise level.

Re:Not very good reasons... (5, Insightful)

Mordok-DestroyerOfWo (1000167) | more than 6 years ago | (#23239682)

If I remember correctly the first LCD monitors were exorbitantly expensive and couldn't hold a candle to their CRT brothers. But since they saved so much space and energy, within a few years those problems vanished. I'd say it's still too early to close the books on SSDs.

I know it's not a car analogy, I humbly beg the forgiveness of the /. community.

Re:Not very good reasons... (1)

MightyYar (622222) | more than 6 years ago | (#23239856)

I know it's not a car analogy, I humbly beg the forgiveness of the /. community.
SSDs are just heated mirrors in a fancy 2.5" form factor.

Re:Not very good reasons... (1)

Digi-John (692918) | more than 6 years ago | (#23240092)

LCD monitors still don't match up to a decent Trinitron. The only thing that comes close, in my opinion, is the massive old Samsung SyncMaster 240T that I've been using at work. It's 24" widescreen, does 1920x1200, and has a power brick that is actually pretty close to brick size. It's a tank. It would have been something like $5,000 back when it was new... and I salvaged two of them from the re-app pile.

Re:Not very good reasons... (1)

PIBM (588930) | more than 6 years ago | (#23240244)

I can't find a single CRT that beats my 3007WFP at 2560x1600, and I have much more space available on my desk than with the old 19" Trinitron at 1940x1600!

Re:Not very good reasons... (1)

Isao (153092) | more than 6 years ago | (#23240230)

...MTBF, power consumption, ruggedness and noise level.

Similar story over at StorageMojo [], and Robin draws a similar conclusion.

MTBF - Infant failures about the same as discs, return rates higher
Power - Flash already near the bottom of the power curve, drives appear to have room to drop
Ruggedness - No moving parts a plus, perhaps countered by whole-block rewrites on write. Not enough data here
Noise - Flash wins, no contest

Bottom line? Not enough improvement to justify the cost, except in certain edge conditions (like the Eee PC).

Noise level on new 2.5" drives ... (1) (1108067) | more than 6 years ago | (#23240336)

I can't hear the hd on my laptop, and I rarely hear the fan. The newest 2.5" drives are super-quiet.

Power Consumption (3, Insightful)

Ironsides (739422) | more than 6 years ago | (#23239448)

Too bad he didn't include power consumption. If I'm going to use an SSD anytime soon, it will be in a laptop, where power is my key concern. Performance is more of a desktop/high-end issue right now.

Re:Power Consumption (2, Interesting)

NormalVisual (565491) | more than 6 years ago | (#23240176)

In addition to power usage, some measure of how warm they get under load would be useful too.

Shocking Revelations! (1)

capt.Hij (318203) | more than 6 years ago | (#23239468)

So if I read this right, if you purchase brand new, high priced technology it may not give you the same kind of bang for the buck as tried and true established technology. I... am... shocked....

Performance is not the key to SSD (2, Insightful)

avdp (22065) | more than 6 years ago | (#23239480)

IMHO, performance is not the critical factor regarding SSDs. Power usage and, mostly, no moving parts (quiet and rugged) are why you want an SSD in your laptop.

But on the performance front, they compared with 7200RPM hard drives; last time I checked (admittedly a while ago), most laptops were outfitted with 5400RPM drives.

Re:Performance is not the key to SSD (3, Insightful)

Sancho (17056) | more than 6 years ago | (#23239716)

[] indicates that the battery usage (at least compared to the HDD shipped with the Macbook Air) is negligible. No moving parts is nice, though manufacturers have addressed some of the ruggedness issues by including drop sensors. Actual, real-world wear hasn't had a chance to surface yet--I'll definitely be curious to find out if SSDs live up to the speculation.

Re:Performance is not the key to SSD (1)

esocid (946821) | more than 6 years ago | (#23239802)

Most probably were, but the two they compared to are laptop HDDs. Since the comparison talks about the MacBook Air, I looked at the specs:

Apple MacBook Air - 1.6GHz OS X 10.5.1 Leopard; Intel Core 2 Duo 1.6GHz; 2,048MB DDR2 SDRAM 667MHz; 144MB Intel GMA X3100; 80GB Samsung 4,200rpm

The stock HDD is 4,200rpm, so even the 5,400rpm figure you cited is above the stock drive speed. They should have compared those two options in addition to what they did, as well as including drives with 8MB and 32MB caches.

Why a "drive"? (3, Interesting)

Ossifer (703813) | more than 6 years ago | (#23239490)

Am I the only one questioning why these devices are implemented using a mechanical drive interface? Maybe it's a negligible cost, but to me it would seem that a memory bus optimized for flash memory would be a better way to go than trying to piggy-back on a mechanical drive's bus. How much faster could these be if their existence were planned into, say, Intel's chipsets?

Re:Why a "drive"? (1)

Wesley Felter (138342) | more than 6 years ago | (#23239526)

We'll find out soon, since Intel is adding a flash controller to its chipsets.

Re:Why a "drive"? (3, Informative)

Alioth (221270) | more than 6 years ago | (#23239768)

Well, the IDE bus isn't mechanically oriented anyway - we don't actually use cylinders, heads and sectors (and haven't for years); we use block addressing, and the drive electronics figure out how to move the mechanics. Block addressing isn't all that far off from addressing an individual byte in memory anyway - except you're addressing a whole block rather than a single byte (and for mass storage, whether it's mechanical or flash, you're going to want to do it that way so you don't have an absurdly wide address bus). Parallel ATA uses a 16-bit-wide data bus.

Re:Why a "drive"? (1)

rubeng (1263328) | more than 6 years ago | (#23239952)

Yeah, but think how much simpler it would be if the flash memory was directly mapped into the processor's address space, no IDE/SATA/SAS drivers, no logical blocks, just a memory range 0xwhatever to 0xsomething else that was non-volatile. You'd still want a filesystem to manage that memory though.
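The closest present-day analogue to that directly-mapped range is memory-mapping a file: you get a byte-addressable window with no block-device driver in the way. A minimal sketch (Python for brevity; the temp file here is just a stand-in for the hypothetical flash region):

```python
import mmap
import os
import tempfile

# Create a small backing file standing in for a 4 KB flash region.
path = os.path.join(tempfile.mkdtemp(), "flashsim")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it and write "directly" into the address range.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"   # byte-addressable write, no logical blocks
    mm.flush()           # persist, loosely analogous to a flash commit
    mm.close()

# The data survives the mapping, as non-volatile memory would.
with open(path, "rb") as f:
    data = f.read(5)
print(data)              # b'hello'
```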

Re:Why a "drive"? (2, Interesting)

Ossifer (703813) | more than 6 years ago | (#23240008)

Thanks for the information and insight, but I wonder, why wouldn't we want a (maybe not "absurdly") wide address bus? A 16-bit wide bus seems a bit underscaled, considering core memory buses are 128 bit, and with block addressing we're obviously reading/writing much more than that. The core memory bus is already 16 times bigger than the smallest addressable unit. Granted, with say a 512-byte block, I'm not suggesting a 64k bit wide bus (16 * 512 * 8), but it would seem that 16 bit is simply not a good choice...

Re:Why a "drive"? (1)

Jeff DeMaagd (2015) | more than 6 years ago | (#23240202)

SSDs are not even close to maxing out the drive interface anyway, so is it really even a relevant consideration?

Stupid Test (4, Informative)

phantomcircuit (938963) | more than 6 years ago | (#23239496)

They only tested burst speeds; there was no random access testing.

SSD works best when accessing files randomly.

Re:Stupid Test (1)

somersault (912633) | more than 6 years ago | (#23240152)

Rex: Go back, go back, you missed it.
Hamm: Too late, I'm already on the 40's, gotta go around the horn, it's faster.

I've seen this before. (2, Insightful)

jskline (301574) | more than 6 years ago | (#23239536)

You really have to look deep into the advertising sometimes. Only a trained person willing to do the math on these would be able to see the differences. Clearly, these devices have a legitimate purpose and place, but at this point in time, it's not in the client computer. The speeds need to come up for them to be really practical.

Now a good purpose for these might be in desktop-bound short-stack storage arrays instead of that large terabyte drive array. They're just quick enough for data-retention backups off of the mechanical drives in the client PC.

Another use is small-scale server apps that are usually bound into hardware in some form of internet-controllable appliance. Speed isn't really a major factor there, and these would potentially work well.

Just my opinion. Subject to change.

SSDs are ideal for servers (4, Insightful)

ncw (59013) | more than 6 years ago | (#23239540)

As any sysadmin knows, on a busy server what creams the disk isn't Megabytes per second, it is IO transactions per second.

According to the article the Crucial SSD has an access time of 0.4 ms which equates to 2500 IOs/s as compared to the Barracuda HDD with 13.4 ms access time which equates to a mere 75 IOs/s.

So for servers SSDs are 33 times better!

Bring them on ;-)
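The conversion from access time to IOs/s, using the figures quoted above:

```python
# IOPS implied by the access times quoted in the article.
ssd_access_ms = 0.4
hdd_access_ms = 13.4
ssd_iops = 1000 / ssd_access_ms   # 2500 IO/s
hdd_iops = 1000 / hdd_access_ms   # ~75 IO/s
ratio = ssd_iops / hdd_iops       # ~33.5x, matching the "33 times" claim
print(round(ssd_iops), round(hdd_iops), round(ratio, 1))
```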

Re:SSDs are ideal for servers (1)

Bill, Shooter of Bul (629286) | more than 6 years ago | (#23239674)

Exactly. I guess the point of this article was to examine whether or not it made sense in the case of a laptop, as many are now starting to offer one as an option. But it would have been nice to point out the real awesome potential they have for servers.

Re:SSDs are ideal for servers (1)

Uncle Focker (1277658) | more than 6 years ago | (#23239830)

Have fun changing out the drives every year once you've surpassed the maximum number of writes.

Re:SSDs are ideal for servers (2, Interesting)

hardburn (141468) | more than 6 years ago | (#23240136)

If your filesystem is designed to distribute the writes properly, the failure time is comparable to the MTBF of hard drives.

Though personally, I think the way to go on servers is to use 64GB of RAM and set most of it up as a RAM disk. Depending on your application, you can either have a shell script copy the data back to a hard drive for persistence, or use that kernel driver to mirror the data to a hard drive. Software RAID 1 would work, too.
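A toy sketch of the copy-back idea (the paths are temp-dir stand-ins for a real tmpfs mount and a real hard drive; an actual setup would run the sync periodically from cron or similar):

```python
import os
import shutil
import tempfile

# Stand-ins: "ramdisk" would really be a tmpfs mount (e.g. /dev/shm),
# "persistent" would really be a directory on the hard drive.
ramdisk = tempfile.mkdtemp(prefix="ramdisk_")
persistent = tempfile.mkdtemp(prefix="disk_")

# Hot data is served from RAM...
with open(os.path.join(ramdisk, "hot.db"), "w") as f:
    f.write("working set lives in RAM")

# ...and the periodic "shell script" step mirrors it to disk.
shutil.copytree(ramdisk, os.path.join(persistent, "backup"))

with open(os.path.join(persistent, "backup", "hot.db")) as f:
    restored = f.read()
print(restored)
```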

Re:SSDs are ideal for servers (0)

Anonymous Coward | more than 6 years ago | (#23239840)

I wonder what the performance in a RAID setup would be like? It should be awesome.

Re:SSDs are ideal for servers (1)

mapsjanhere (1130359) | more than 6 years ago | (#23239884)

Well, they would be if they had unlimited read-write cycles. But flash is rather more limited in that regard; some estimates are as low as 100,000 cycles.
If your 2500 IOs/s hit the same sector, your server SSD is fried in 7 min. SSDs are distinctly NOT server-suitable if you have a lot of write cycles (probably less of an issue if it's just answering read requests).

Re:SSDs are ideal for servers (1)

vux984 (928602) | more than 6 years ago | (#23240042)

If your 2500 IOs/s hit the same sector, your server SSD is fried in 7 min.

One would think that would actually be an ideal scenario. Your cache hits would be through the roof. Even if it wrote the sector back to the flash drive once every 2 seconds, that would be 5000 IO's worth of updates in one write op.

Factor in drive wear leveling (so that it moves the data sector around on the empty space on the physical disk rather than in the same physical place each time), and the disk would probably last nearly forever.

Re:SSDs are ideal for servers (2, Informative)

AlexCV (261412) | more than 6 years ago | (#23240352)

Not only does wear leveling on very large (> 100 GB) drives completely moot the point *even* with a 100,000-cycle life, but modern high-capacity flash has cycle lifetimes in the millions. The drives have extra capacity set aside to deal with cell failures, they have error correction/detection, and they have wear leveling. We're a long way from using FAT16 on straight 128-megabit flash chips...

I don't know why everyone keeps repeating flash "problems" from the mid/late 90s. It's 2008; flash has been widely used in huge quantities for well over a decade, with thousands of engineers applying well-understood solutions to the problems.
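Some rough numbers behind the wear-leveling point (the capacity, cycle count and write rate here are illustrative assumptions, not figures from the thread):

```python
# Back-of-the-envelope drive lifetime under perfect wear leveling.
# Assumed: 100 GB capacity, a pessimistic 100,000 erase cycles per cell,
# and a punishing sustained 50 MB/s of writes, around the clock.
capacity_bytes = 100e9
cycles = 100_000
write_rate = 50e6                               # bytes per second, sustained
total_writable = capacity_bytes * cycles        # 1e16 bytes before wearout
lifetime_years = total_writable / write_rate / (3600 * 24 * 365)
print(round(lifetime_years, 1))                 # ~6.3 years, even pessimistically
```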

Re:SSDs are ideal for servers (1)

mapsjanhere (1130359) | more than 6 years ago | (#23240362)

I'm aware that you can get around this issue; I was just trying to point out that the raw "possible IO" number is not all it's cracked up to be.
The same goes for using it as a specialty device for read-heavy applications; it was the general "ideal server device" claim that I had a problem with.
When I first read about SSDs they sounded like the second coming of sliced bread; it was the devil in the details that soured me, especially the write limitations, which seem to be a physical limit, not something you can engineer away.

Re:SSDs are ideal for servers (1)

LWATCDR (28044) | more than 6 years ago | (#23240156)

That would really depend on the server.
A good example where an SSD might be a good solution is one of the database servers at my office.
A record gets created and then updated maybe twice. It then may get read a few hundred thousand times.
So yes for some servers it might be a really good thing. Lots of databases are very very read heavy and write light.

Re:SSDs are ideal for servers (1)

somersault (912633) | more than 6 years ago | (#23240214)

But in the case of read requests it is better. Obviously HDs are currently better for some applications, but likewise SSDs are also better for some applications. Yet again, people seem to be herding each other into camps and throwing rocks at each other rather than just learning the merits of each other's viewpoints and using the best tool for the job.

Re:SSDs are ideal for servers (1)

pancrace (243587) | more than 6 years ago | (#23240222)

Real-world results are even more dramatic; we run about 50x better than a U320 15k drive with a read-only database.

But the use of flash drives for servers is still quite limited. I wouldn't use it for anything with large amounts of sequential IO, only for lots of random IO in very small chunks (4k or less). Even a slow HD beats the flash drive for reading many files over 512K, and an HD clobbers flash on any kind of writing.

IME, large databases with very few UPDATEs/INSERTs and very sparse file systems are pretty much all that run well.

- p

Well (0)

Anonymous Coward | more than 6 years ago | (#23239558)

Reliability - [] Nope
Speed - Nope

Tell me again what the whole point was

Re:Well (1)

tab_b (1279858) | more than 6 years ago | (#23239736)

An update on that Gizmodo page on reliability says Dell refutes those numbers, pointing here [].

What about the power usage? (1)

wiredog (43288) | more than 6 years ago | (#23239574)

That's the (potentially) biggest benefit of using SSDs over HDDs. No moving parts == less power used == longer battery life.

Re:What about the power usage? (2, Informative)

Overzeetop (214511) | more than 6 years ago | (#23240060)

That's true, but it's almost a technicality with today's processors and video cards. With anything but the slowest ultra-portables, having an HD running just doesn't suck up much juice. A Seagate Momentus (5400rpm) takes between 1.9 and 2.3W when reading/writing/seeking, and only 0.8W when idle (not standby - that's 0.2W). Given a typical laptop with a 50 to 80Wh battery and a 2 to 3 hour charge life, your HD comprises about 3% of the average draw at idle, and about 7-8% at full tilt - for those of you running active SQL servers on your lappies. If I give you 30% at non-idle, it's about 4%. That's more power than an SSD at 0.2W, but it's really only about 4 to 6 minutes of extra time on a charge.
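The parent's arithmetic, reproduced with mid-range figures (the battery size, runtime and 30% duty cycle are assumed, not measured):

```python
# 65 Wh battery, 2.5 h runtime, Momentus at 0.8 W idle / 2.1 W active.
battery_wh = 65
runtime_h = 2.5
avg_draw_w = battery_wh / runtime_h        # 26 W average system draw
hdd_idle_w, hdd_active_w, ssd_w = 0.8, 2.1, 0.2

idle_share = hdd_idle_w / avg_draw_w       # ~3% of the battery at idle
active_share = hdd_active_w / avg_draw_w   # ~8% at full tilt

# Swap the HDD (assumed 30% active duty cycle) for an SSD:
hdd_avg_w = 0.7 * hdd_idle_w + 0.3 * hdd_active_w   # ~1.2 W average
saved_w = hdd_avg_w - ssd_w
extra_minutes = battery_wh / (avg_draw_w - saved_w) * 60 - runtime_h * 60
print(round(idle_share * 100, 1), round(active_share * 100, 1),
      round(extra_minutes, 1))             # roughly 3%, 8%, ~6 extra minutes
```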

Re:What about the power usage? (1)

Nexus7 (2919) | more than 6 years ago | (#23240236)

While true, consider also the case where I'm browsing the web... lots of small files keep getting written to the disk as soon as I finish reading one web page and go to the next, so the HDD keeps spinning up. So I increase the idle timeout; now the disk just keeps spinning, and the palm rest above the disk gets hot, decreasing the disk's lifetime. For some reason, even putting Firefox's cache into /dev/shm doesn't seem to help; the disk spins up frequently. Things like ndiswrapper like to write to disk. This is why I'm considering an SSD. I don't think I'm going to take much of a performance hit compared to a 4200rpm HDD.

I live on a boat (1)

olele (785597) | more than 6 years ago | (#23239584)

that bounces a lot, and includes things like salt and generally high moisture in the air. I think anyone who uses their PC - whatever the form factor - anywhere that's not sitting on a desk in a climate-controlled environment might do well to take these results with a grain of salt *sorry*

But what about the Noise level (1)

dawnsnow (8077) | more than 6 years ago | (#23239586)

I've been suffering for too long from hard drive noise. If the price is reasonable, I want an SSD so I can have a quiet environment.

it's all about the battery (1)

Khopesh (112447) | more than 6 years ago | (#23239594)

SSD's performance boost is in battery life due to its lower power consumption from zero moving parts. Flash-based storage has always had a problem with writing; don't forget about the fact that it can only be written to ~1000 times.

Furthermore, SSD is just temporary relief for batteries; I envision a laptop with both SSD and HDD that almost never writes to the SSD; on Windows, C:\WINDOWS and C:\Program Files would live in SSD while C:\Documents & Settings would live on HDD and C:\WINDOWS\Temp (or wherever that part lives nowadays) would be on ramdisk. On FOSS systems, /usr would be SSD while /home and /var would be HDD and /tmp and /var/tmp would be in shared memory (like /dev/shm).

The real future is in racetrack memory [] and the like, which drastically improves speed in both directions, removes the 1000-writes issue, AND further boosts SSD's impressive battery life. However, we've got ten years before it hits the market, so we have SSD until then.
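The split described above would look something like this in /etc/fstab terms on a FOSS system (device names, filesystems and sizes are purely illustrative assumptions, not a recommendation from the thread):

```
# SSD: read-mostly system partitions
/dev/sda1  /usr   ext2   ro,noatime          0  2
# HDD: frequently written user and variable data
/dev/sdb1  /home  ext3   defaults            0  2
/dev/sdb2  /var   ext3   defaults            0  2
# RAM: scratch space that never touches flash
tmpfs      /tmp   tmpfs  defaults,size=512m  0  0
```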

Re:it's all about the battery (1)

Alioth (221270) | more than 6 years ago | (#23239878)

That's not correct: even NOR flash (what you use for ROM, rather than mass storage) has been rated at 10,000 erase/write cycles for years - per sector, rather than for the whole device. Typical flash mass storage is rated up to 100K erase/writes.

Swap is the main concern here - the solution is to give the machine enough RAM that you can turn swap off.

Re:it's all about the battery (1)

Uncle Focker (1277658) | more than 6 years ago | (#23239886)

Flash-based storage has always had a problem with writing; don't forget about the fact that it can only be written to ~1000 times.
You're a few orders of magnitude off. It's around 400-500 thousand writes for average flash drives. The more expensive, high-performance stuff can max out at a few million writes.

Not surprising or bad to me. (2, Informative)

alan_dershowitz (586542) | more than 6 years ago | (#23239608)

Two things: first, booting is ideally going to be largely sequential reads, because OS X caches files used in the boot process in order to speed up the boot by removing random access. SSDs have an advantage over hard drives in random reads because there's comparatively no seek time, so I wouldn't expect to see a huge advantage there. Secondly, I'm not going to be using my MacBook Air's tiny SSD for analog video capture or anything like that, so high write speed is really not that relevant to me. On the other hand, the thing is supposed to be light and use little battery, so the SSD seems to win for the reasons it was used. Also, the tests bear out a higher average read speed, which is what I would have expected. I don't see anything surprising here.

Anyone? (1)

ShiNoKaze (1097629) | more than 6 years ago | (#23239622)

Anyone ever hear what happened to the second-generation i-RAM from Gigabyte? There was mad intel on the first, and then it all just went away... I want the 8-gig, SATA 2 version. It just never came out.

Re:Anyone? (1)

danbert8 (1024253) | more than 6 years ago | (#23240250)

Seriously, at current DDR2 prices, I could easily have a ludicrously fast hard drive big enough to store my OS for under 150 bucks.

What about battery life? (1)

dpbsmith (263124) | more than 6 years ago | (#23239624)

I would have thought that in a laptop, solid state drives would have a noticeable advantage in terms of power consumption leading to increased battery life.

Admittedly the article described itself as a performance showdown, but I'm disappointed that the reviewer made no attempt to compare power consumption and battery life.

If nothing else, I would have thought a solid state drive would eliminate that annoying pause when a hard drive awakes from sleep and spins up, and that this would feel like a worthwhile "performance" improvement--though whether it's worth the cost is another question.

Re:What about battery life? (1)

CastrTroy (595695) | more than 6 years ago | (#23240124)

However, the flash drives are only 32 GB. How small and how low-power could you make a mechanical drive that only needed to be 32 GB? You could probably go for a much smaller form factor, which would mean smaller platters, which would take less energy to spin. It would also mean the read/write heads wouldn't have to move as far to reach the data. Comparing a 32 GB SSD to a 3.5-inch, 250 GB HDD is not a very good comparison.

Re:What about battery life? (1)

Champ (91601) | more than 6 years ago | (#23240204)

I see your point, which is probably theoretically true to some extent, but have you actually TRIED a microdrive? They're terrible! Slow to spin up, slow to seek, slow to transfer. Less energy than regular drives, sure, but quite a bit more than flash.

I have an 8 GB MicroSDHC card that's the size of my fingernail and about as thick. Four of those would be less than 0.5 cm^3. When you have a fast mechanical drive in the same form factor, let's talk.

Seek Times Make the Difference (5, Interesting)

pancrace (243587) | more than 6 years ago | (#23239644)

We installed one of these for processing millions of small, read-only database transactions. The database only gets written once a day, but is too big for efficient caching. Even with a U320 15K drive we were still suffering, only able to run about 700/min. With a flash drive, we're running over 25,000/min, peaking at 50,000/min. But the weekly copy of the database takes about 20 minutes, vs. the 3 or 4 minutes it used to take.
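Numbers like these are roughly what a simple access-time model predicts. A back-of-envelope sketch (the access times and accesses-per-transaction figure are illustrative assumptions, not the poster's measurements):

```python
# Rough model: for small random reads, transaction rate is dominated by
# access time (seek + rotational latency), not transfer rate.
def tx_per_min(access_time_ms, accesses_per_tx):
    accesses_per_sec = 1000.0 / access_time_ms
    return accesses_per_sec * 60 / accesses_per_tx

# Illustrative: ~8 ms per random access on a 15K RPM drive, ~0.2 ms on
# flash, and ~10 random accesses per database transaction.
hdd_rate = tx_per_min(access_time_ms=8.0, accesses_per_tx=10)
ssd_rate = tx_per_min(access_time_ms=0.2, accesses_per_tx=10)
```

With those assumptions the model lands near the reported figures: about 750 transactions/min for the 15K drive and about 30,000/min for flash.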

- p

He forgot the two crucial tests.... (0)

Anonymous Coward | more than 6 years ago | (#23239660)

A not-very-interesting comparison of read/write performance only. He should have run the two crucial tests that caused me to move to a solid state disk (made it myself out of an adapter and a high speed/large capacity CF card) for my traveling laptop. That would be the laptop that I throw in my carry-on and occasionally drop.

First, compare and contrast laptop battery life while continuously reading and writing disk files; second, compare and contrast hardware reliability while holding the laptop in both hands and shaking it as vigorously as possible while reading and writing disk files.

But Why? (0)

Anonymous Coward | more than 6 years ago | (#23239774)

A lot of people have been posting what the "correct" metrics would have been to test the SSD drives vs traditional drives, which is fair.

My question is: why did the traditional drives outperform the SSDs in large data transfers? SSDs have faster seek times, as has been mentioned, but I see no reason why a large file transfer would take longer on an SSD. Anyone have insight?

Weird test (1)

The Clockwork Troll (655321) | more than 6 years ago | (#23239794)

I can't find an apples-to-apples comparison in this test.

If they wanted to compare the best laptop mechanical drive to the best laptop drive (price no object), why didn't they use an MTron or Memoright drive (> 100MB/sec sustained read AND write)?

If they wanted to compare the best laptop mechanical drive to the cheapest SSD drive, why didn't they use a Transcend drive (< $200)?

Vista for performance testing? (1)

Matthew Weigel (888) | more than 6 years ago | (#23239812)

That just seems silly. I'd like to see performance tests on a system where the disk's performance affects the end result, rather than all of the results being homogenized by the operating system's poor I/O capability. Given Vista's adoption, it's not even a test of what disk performance will be like "in the real world."

SSD Variation (0)

ProfessionalCookie (673314) | more than 6 years ago | (#23239820)

I dunno if anyone's paying attention, but SSDs vary hugely in speed, both random access and sustained transfers. It's pretty easy to find a slow SSD; they've been around forever. The MacBook currently uses a Samsung part.

Sponsored by Seagate? (1)

FlutterBoundary (1277000) | more than 6 years ago | (#23239950)

I tried to read one of the other articles on the Computerworld website and was served a compulsory registration page - with a Seagate ad banner on it! One can't help but wonder if the reviewer's position might have been skewed a bit...

What the reviewer did wrong. (1)

Neoncow (802085) | more than 6 years ago | (#23239960)

So to sum up what the reviewer did wrong:
  • Only tested sustained write speeds.
  • Has the impression that performance is copying multigigabyte files around all day.
  • Ignored the silence advantage.
  • Didn't consider power savings.
  • Didn't test seek speeds.
    When asked about ignoring the 20:1 advantage SSDs have in seek speed, responded:

    But keep in mind that it's only one component of the overall operation. These were all freshly formatted drives so fragmentation shouldn't be an issue and the longer the operation under that condition, the less it tends to matter.

    SSDs might even slow down slightly because some are built intelligently enough to not write to the same location each time (and thus prematurely "wear out" segments of memory which are, after all, limited use within context).
  • Comparing 3.5 inch 7200 rpm drives to 2.5 inch SSDs

Anything else to add?

Why are the results so bad? (1)

Peter H.S. (38077) | more than 6 years ago | (#23240066)

HD Tach test:

                Burst Speed    Average Read   Random Access   CPU Utilization
Crucial SSD     137.3MB/sec    120.7MB/sec    0.4ms           4%
Barracuda HDD   135.0MB/sec    55.0MB/sec     13.4ms          4%

                Cold Boot      Restart
Crucial SSD     39.9           78.4
Barracuda HDD   39.9           59.9
Yeah, I know synthetic tests are problematic, but the two tests give contradictory results.
Is it because an MS Vista boot and reboot doesn't involve much random R/W and therefore doesn't show the apparent strength of SSDs? Or is it because an extremely low random access value on SSDs is offset by the fact that sequential R/Ws are rare on SSDs because of wear-leveling algorithms? Or is it something else? Twice the read speed and a thirtieth of the random access time ought to give some advantage.
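One way to reconcile the two tests is to plug the HD Tach figures above into a toy workload model. The file counts and sizes below are arbitrary illustrations, not anything Computerworld measured:

```python
# Toy model: time to fetch files = per-file access penalty + transfer
# time, using the HD Tach average read and random access figures.
def fetch_time_s(n_files, file_mb, read_mb_s, access_ms):
    return n_files * (access_ms / 1000.0 + file_mb / read_mb_s)

# 1,000 scattered 100 KB files (random) vs one 100 MB file (sequential)
ssd_random = fetch_time_s(1000, 0.1, 120.7, 0.4)    # Crucial SSD
hdd_random = fetch_time_s(1000, 0.1, 55.0, 13.4)    # Barracuda HDD
ssd_seq    = fetch_time_s(1, 100, 120.7, 0.4)
hdd_seq    = fetch_time_s(1, 100, 55.0, 13.4)
```

Under those assumptions the SSD comes out more than 10x ahead on the random small-file workload but only about 2x ahead on the single sequential read, which is consistent with a mostly sequential, cache-optimized boot showing little difference.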


Is Limited Number of Writes to SSDs a Problem? (1)

Ron Bennett (14590) | more than 6 years ago | (#23240086)

SSDs have greatly improved, and typically utilize wear leveling methods to more evenly distribute writes across memory cells.

However, in real-world situations, do SSD write limitations ever pose a problem or is it a total non-issue these days?


ya, so what? (1)

zogger (617870) | more than 6 years ago | (#23240100)

The SSDs are getting better/cheaper/faster/larger all the time, and part of the interest is that they are much more robust/less fragile and use a lot less electricity. [bad car analogy] A Ferrari is faster, but it can't do the same work, nor is it as tough as the F450 we got, and I bet the F450 with the diesel engine gets better mileage while doing that work [/bad analogy]

Reliability and shock resistance (1)

Aram Fingal (576822) | more than 6 years ago | (#23240112)

Flash media, like compact flash cards, are supposed to be very shock resistant compared to hard drives. That would give these SSD drives a big advantage in machines designed to be very rugged.

The real improvement - resource contention (1)

diamondsw (685967) | more than 6 years ago | (#23240158)

Completely missed the point. SSDs are not about extremely fast sequential access; they're designed for near-instantaneous random access. No seeking means faster random access, which also means MUCH improved performance when multiple processes are hitting the disk at once.

Just think back to how much more responsive your machine felt when you moved to a dual-core CPU. Now take that same jump to I/O, which is always the performance bottleneck. We're leaving the age of simple increases in horsepower - MHz, RPM, and throughput; now we're attacking the problems of resource contention. Multicore CPUs, solid state disks, more memory, better CPU-memory interconnects - all of this is making resource contention and "churn" a thing of the past.

Flash memory not true SSD tech (1)

0111 1110 (518466) | more than 6 years ago | (#23240184)

It is amazing to me that even other geeks have fallen for the corporate hype machine. This current gen of "SSD" has little to do with the actual promise of a solid state drive. Have you all forgotten the original point? We were getting tired of the slow incremental increase in speed that magnetic platter hard drive technology was giving us. Hard drives were and still are typically the bottleneck in many applications. They are what is holding us back from instant response times.

These flash-based drives are little more than a straw man or distraction from the true goal and promise of solid state drives. Gigabyte had the right idea with their i-RAM device, but apparently they found making their own memory controller to be too difficult. That is something more appropriate for someone like Intel, they claimed. And I think this is actually a good point. So why aren't Intel and AMD pursuing such a device? I don't know the answer, but I don't think it has anything to do with ability. Clearly Intel or AMD could make their own version of an i-RAM device that could have the potential to finally realize the dream and promise of the original solid state drive idea.

The appeal of using a flash-based drive for storing my data eludes me. Limited number of writes? Check. Unproven and highly suspect reliability (I have had several flash cards for my camera fail on me at unpredictable times)? Check. No great speed advantage over the horribly slow, archaic magnetic-platter technology? Check. Expensive? Check. Sounds great. Really.

It is the 21st century already and not only do we not have HAL 9000 computers or replicants or flying cars or lunar vacation spots or fusion, but we don't yet even have drives that are significantly faster than they were 20 years ago. I am sitting here tapping my fingers on my desk waiting for the revolution in data storage that surely must be just around the corner. I am sorry but these flash based SSDs are not it.

Performance RAID (1)

SpryGuy (206254) | more than 6 years ago | (#23240210)

In other tests I've seen, the only time the SSD drives come out on top is when configured in performance RAID style, so that writes are parallelized across two or more SSD units.

If someone could put together a convenient RAID-type package, the extra cost might actually result in extra, noticeable speed improvements, even for writes. And two 64GB SSD units arranged in a performance RAID package would give a more usable 128GB "hard disk" to store things on anyway.

Speed is a subset of performance... (1)

RJFerret (1279530) | more than 6 years ago | (#23240240)

They didn't test performance, but speed. One of the biggest aspects of "performance" is efficiency. Despite talking about some aspects of performance in the article, "Having no moving parts is, naturally, important. ... in theory -- should use less power than equivalent mechanical hard drives."

Testing speed alone ignores the different applications for the different products!

MTBF (1)

weave (48069) | more than 6 years ago | (#23240256)

All I care about is MTBF. I am so sick and tired of trying to get data off of crashed drives and restoring computers for family members (and myself). Even with current backups, it's a hassle, and disks fail at the most inconvenient times.

My wife wanted a laptop recently and I made her spend the extra money for an SSD.

Native command queuing with newer SATA drives (1)

rlk (1089) | more than 6 years ago | (#23240264)

As others have said, using these things with streaming I/O doesn't make much sense.

I recently built myself a new system. The new processor (Xeon E3110, aka Core 2 Duo E8400) certainly did make boot time somewhat faster, but not dramatically so. Likewise for initial login -- the KDE desktop came up somewhat faster, but it wasn't overwhelming.

Then it occurred to me to move my root and home directory partitions from an older 250 GB 1.5 Gb/sec SATA drive to my newer 500 GB 3.0 Gb/sec compatible drive. There are more differences than just the interface speed; the most notable one is probably Native Command Queuing (NCQ). This is similar to tagged command queuing on SCSI: it allows the host to queue multiple commands to the disk, which then services them in whatever order is most efficient.

*That* made a difference.

fsck insists on checking the partitions one at a time (it sequences the partitions on a particular disk even if you tell it otherwise in /etc/fstab), and it's single threaded and does no async I/O. Reiserfs check does a quick scan of the filesystem tree, which seems to take time proportional to the size of the partition, so that didn't really change very much. After that, it's a completely different story. The rest of the boot sequence now completes in maybe 10 seconds, tops (it previously took something like a minute), and login to my KDE desktop (even including Firefox) really is fast -- only a bit slower than a warm login, when everything's in memory.
The issue's not lack of memory -- I had 2 GB on my old system and 4 GB now, and anyway, the big change was on my new system only when I switched drives around.

Moral of the story: if you're suffering from slow boot, make sure your motherboard supports SATA 3.0 Gb/sec, use AHCI (not legacy IDE emulation), and make sure your drive supports SATA 3.0 Gb/sec and isn't jumpered down to 1.5 Gb/sec.
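The win from letting the drive reorder a queue of outstanding commands can be illustrated with a toy elevator model (the track numbers below are made up):

```python
# Toy model of command queuing: total head travel to service queued
# requests in arrival (FIFO) order vs sorted (elevator) order.
def head_travel(start, positions):
    travel, here = 0, start
    for p in positions:
        travel += abs(p - here)
        here = p
    return travel

queued = [880, 12, 640, 75, 300]           # hypothetical track numbers
fifo     = head_travel(0, queued)          # serve in arrival order
elevator = head_travel(0, sorted(queued))  # serve in one sweep
```

Here the single sweep cuts total head movement by well over half, which is the kind of reordering NCQ lets the drive do on its own.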

Test protocol (1)

Yvanhoe (564877) | more than 6 years ago | (#23240298)

1. Drop both drives from a height of 3 meters.
2. Do the test again.
3. Repeat until one disk has performance problems.

Better disable atime if using SSD (0)

Anonymous Coward | more than 6 years ago | (#23240360)

You'd better disable atime if you're using SSD drives, or you'll find they don't last as long as you'd expect.
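On Linux that's the noatime mount option, which stops every file read from triggering an access-time write; an illustrative fstab entry (device name is made up):

```
# suppress access-time writes on the SSD-backed filesystem
/dev/ssd1   /   ext3   noatime   0 1
```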