
Costly SSDs Worth It, Users Say

Soulskill posted more than 2 years ago | from the dollars-per-patience dept.

Data Storage

Lucas123 writes "When you're paying $30,000 for a PCIe flash card, it had better demonstrate an ROI. While users are still struggling with why solid-state storage costs so much, when they target the technology at the right applications, the results can be staggering. For example, when Dan Marbes, a systems engineer at Associated Bank, deployed just three SSDs for his B.I. applications, the flash storage outperformed 60 15,000rpm Fibre Channel disk drives in small-block reads. But when Marbes used the SSDs for large-block random reads and any writes, 'the 60 15K spindles crushed the SSDs,' he said."


My approach (5, Informative)

Anrego (830717) | more than 2 years ago | (#37321794)

Small (and cheap) 32GB SSD for my desktop...

Big powerful 12TB file server using traditional disks for the bulk of my data.

Performance for the stuff where the SSD makes a difference (program files), cheap storage for the stuff where it doesn't (just about everything else).

And if that 32GB drive dies (unproven technology.. MTBF is still a guess) .. I'll buy another cheap (probably cheaper at that point) one and restore from my daily backup.

Re:My approach (0)

Anonymous Coward | more than 2 years ago | (#37321968)

Performance for the stuff where the SSD makes a difference (program files), cheap storage for the stuff where it doesn't (just about everything else).

That's basically the whole point of Z68's SRT. Although I hear it's better if you have a fast backup configuration (e.g. 2xRE4 RAID0) for when the SSD cache is exhausted.

Re:My approach (-1)

Anonymous Coward | more than 2 years ago | (#37322166)

Small (and cheap) 32GB SSD for my desktop...

Big powerful 12TB file server using traditional disks for the bulk of my data.

Performance for the stuff where the SSD makes a difference (program files), cheap storage for the stuff where it doesn't (just about everything else).

And if that 32GB drive dies (unproven technology.. MTBF is still a guess) .. I'll buy another cheap (probably cheaper at that point) one and restore from my daily backup.

OMFG. You got modded "informative" for saying SSDs help binary files perform faster.

Files that get read from disk - ONCE.

After that, they're in RAM.

Whoever modded you "Informative" is, umm, ignorant.

Re:My approach (0)

Anonymous Coward | more than 2 years ago | (#37322376)

Do you have an SSD? The performance boost is significant. Boot time is significantly shorter, programs start instantly, and if you run Windows there is no need to defrag.

Re:My approach (1)

Sancho (17056) | more than 2 years ago | (#37322546)

I have one, and I have not noticed significant increases in speed. I probably shaved 5 seconds off of a 20 second boot time, but I rarely shut the computer down, so that isn't such a big deal.

Re:My approach (1)

Anrego (830717) | more than 2 years ago | (#37322398)

Oh grow up...

The first time those programs are loaded is blazing fast. Moving to an SSD dramatically cut my boot time. Yes, subsequent loads are from cache.. but having stuff load damn near instantly the first time is significant.

In addition to that, I'm a Gentoo user, and that SSD makes building those program files a hell of a lot faster.

Re:My approach (4, Informative)

petteyg359 (1847514) | more than 2 years ago | (#37322580)

You're doing it wrong. Get some RAM and mount a tmpfs, and it'll be a hell of a lot faster than your SSD. It'll be at least 60% cheaper, too.
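
If you want to see the gap for yourself, here is a minimal Python sketch that times bulk writes to a tmpfs path versus a disk-backed path. It assumes Linux with /dev/shm mounted as tmpfs and /var/tmp on disk; adjust the paths for your system.

<ecode>
# Time bulk writes to tmpfs (RAM-backed) vs. a disk-backed path.
# Assumes Linux: /dev/shm is tmpfs on most distros, /var/tmp is on disk.
import os
import time

def write_speed_mb_s(path, mb=256):
    data = os.urandom(1 << 20)          # 1 MiB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())            # push it past the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return mb / elapsed

print("tmpfs:", round(write_speed_mb_s("/dev/shm/bench.tmp")), "MB/s")
print("disk: ", round(write_speed_mb_s("/var/tmp/bench.tmp")), "MB/s")
</ecode>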

Re:My approach (2)

darkmeridian (119044) | more than 2 years ago | (#37322170)

I do the same thing as you do except I keep a hot spare in my computer, a regular hard drive that automatically mirrors the SSD using Norton Ghost. If the SSD dies, swap the SATA cables and reboot. I've done this a few times just to test it out, and it works.

Re:My approach (1)

Sabriel (134364) | more than 2 years ago | (#37322380)

Is that "except" or "except also"? If the primary dies whilst being mirrored... may I suggest two spares with an alternating schedule? :)

Re:My approach (1)

Anthony Mouse (1927662) | more than 2 years ago | (#37322462)

Just use rsync with "--link-dest"; I've seen multi-TB backups take less than a minute because most of the files are the same.
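
For anyone who hasn't seen the trick: --link-dest makes each snapshot a full directory tree, but unchanged files are hard links into the previous snapshot, so only changed data is copied. A minimal Python driver is sketched below; the paths and the "latest" symlink convention are assumptions, and rsync must be installed.

<ecode>
# Sketch of snapshot-style backups with rsync --link-dest.
import datetime
import os
import subprocess

SRC = "/data/"                    # trailing slash: copy contents, not the dir
DEST_ROOT = "/backup/snapshots"   # hypothetical destination

dest = os.path.join(DEST_ROOT, datetime.date.today().isoformat())
latest = os.path.join(DEST_ROOT, "latest")   # symlink to previous snapshot

cmd = ["rsync", "-a", "--delete"]
if os.path.exists(latest):
    cmd.append("--link-dest=" + latest)      # hard-link unchanged files
cmd += [SRC, dest]
subprocess.run(cmd, check=True)

# Atomically repoint "latest" at the snapshot we just made.
tmp = latest + ".tmp"
if os.path.lexists(tmp):
    os.remove(tmp)
os.symlink(dest, tmp)
os.replace(tmp, latest)
</ecode>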

Re:My approach (1)

interval1066 (668936) | more than 2 years ago | (#37322248)

I did the same thing for a small media server I set up; bought a VIA pico-ITX system and on a whim chose a 60GB SSD to install the OS on, and use a 2TB USB drive to serve up the media. Works really well. The SSD is good on heat, too.


Re:My approach (1)

atomicbutterfly (1979388) | more than 2 years ago | (#37322502)

You have a 12TB file server? WTF are you putting on that thing, pirated content you'll never have enough time in your life to watch?

Re:My approach (1)

Anrego (830717) | more than 2 years ago | (#37322540)

Rips of my fairly massive DVD collection actually!

Ok, I won't lie, I do have some pirated content.. but I do pay for most of my media these days.

It adds up fairly quickly... most are in H.264 with fairly high settings.

And yes, I have seen everything in my collection. I almost always have something playing in the background while I work. I probably make it through my collection every few years ..

Re:My approach (1)

hedwards (940851) | more than 2 years ago | (#37322552)

12TB is a lot, unless you start backing up your entire Blu-ray collection to disk, in which case it might not be that much. Also if you're into shooting HD video.

Re:My approach (0)

Anonymous Coward | more than 2 years ago | (#37322592)

12TB is nothing... Of course, I work for a television station, and my daily data turnover rate is higher than that. The stuff I have to archive is a whole different story, especially since the switch to HD.

Re:My approach (1)

drsmithy (35869) | more than 2 years ago | (#37322660)

You have a 12TB file server? WTF are you putting on that thing, pirated content you'll never have enough time in your life to watch?

12TB isn't a lot once you start storing 8-12GB per movie and ~25GB per season for 1080p rips. Especially if you keep them for rewatching.

Re:My approach (1)

demonlapin (527802) | more than 2 years ago | (#37322732)

25GB per season is 480 seasons of TV per 12 TB. Rewatching? You'd be lucky to watch all of those. Assuming you watch 3 hours of TV a day, and (this is very conservative) there are 12 episodes per season, that's 5 years of watching TV before you repeat a single episode.
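
Spelling out that arithmetic in Python (the ~1-hour episode length is an assumption; it is what makes the five-year figure come out):

<ecode>
# Back-of-the-envelope check of the parent's numbers.
storage_gb = 12 * 1000          # 12 TB
seasons = storage_gb / 25       # 25 GB per season -> 480 seasons
episodes = seasons * 12         # 12 episodes per season (conservative)
hours = episodes * 1.0          # assume roughly hour-long episodes
days = hours / 3                # 3 hours of TV per day
print(seasons, episodes, round(days / 365, 1))   # 480.0 5760.0 5.3
</ecode>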

Re:My approach (1)

cgenman (325138) | more than 2 years ago | (#37322568)

I have a 128GB SSD for the OS and scratch partition, with data and programs on a larger traditional HDD. Going to SSD, the boot speed more than doubled. Applications that rely on scratch disks, like Photoshop... Well, Photoshop is a bit confused by the whole setup. But smarter applications that rely upon scratch disks are lightning fast. It's not just that the SSD functions as a faster scratch disk; offloading that I/O means the normal disk is entirely free for more traditional linear disk work.

It was an extra $200 for the drive. But getting that kind of performance boost out of faster processors or more cutting edge FSB's, etc, would probably cost in the realm of a grand or more.

Re:My approach (0)

Anonymous Coward | more than 2 years ago | (#37322650)

"the boot speed more than doubled."

Measured including or excluding POST?

For me, well over half the time is spent in POST. The time from entering GRUB to the desktop is on the order of 5 seconds, so doubling that part isn't going to do a whole lot.

ureadahead does wonders.

Really? (1)

Troke (1612099) | more than 2 years ago | (#37321806)

SSDs outperform disks in applications where they perform better. Story at 11.

Re:Really? (2)

ArsonSmith (13997) | more than 2 years ago | (#37322422)

A tiny number of SSDs outperform a huge number of spinning disks, except in certain situations. Story right now.


? crushed the SSDs? WHAT COMMIE LINGUA IS THIS ?? (-1)

Anonymous Coward | more than 2 years ago | (#37321824)

Crushed ?? In Soviet russia grapes CRUSH YOU !!

And still no SSD caching for Linux file systems (0)

Anonymous Coward | more than 2 years ago | (#37321832)

I know that bcache [evilpiepirate.org] exists but why isn't there a mainline kernel SSD cache available yet? I could use it for my whitebox SANs where there are dozens of terabytes but only the need for a few gigs to be written on a given day.

Am I supposed to just run out and buy SSDs for the whole load?

Re:And still no SSD caching for Linux file systems (2)

amicusNYCL (1538833) | more than 2 years ago | (#37321882)

Am I supposed to just run out and buy SSDs for the whole load?

No. I think that in the open source world you're expected to write the update yourself.

Re:And still no SSD caching for Linux file systems (5, Funny)

TheInternetGuy (2006682) | more than 2 years ago | (#37322114)

No, you are supposed to start a flame war on lkml about how SSD cache is a stupid idea that will never amount to anything. Then hundreds of kernel developers will start developing the code to prove you wrong.

Re:And still no SSD caching for Linux file systems (1)

Pikoro (844299) | more than 2 years ago | (#37322312)

Damn, I wish I had mod points :)

Re:And still no SSD caching for Linux file systems (1)

MobileTatsu-NJG (946591) | more than 2 years ago | (#37322596)

No, you are supposed to start a flame war on lkml about how SSD cache is a stupid idea that will never amount to anything. Then hundreds of kernel developers will start developing the code to prove you wrong.

I can't believe it's impossible to view Goatse in Linux!

Re:And still no SSD caching for Linux file systems (1)

amiga3D (567632) | more than 2 years ago | (#37322480)

Either write it yourself, pay someone to do it, beg nicely or shut up and just wait.

Meh (1, Interesting)

Nom du Keyboard (633989) | more than 2 years ago | (#37321840)

So sometimes it's faster, and sometimes it's slower, and always it's more expensive per GB. That makes this a pretty useless article.

Re:Meh (0)

Anonymous Coward | more than 2 years ago | (#37321896)

Not at all. It means: know what you are using it for.
It happens that for the required application here, SSDs could be perfect, depending on whether small reads are dominant.

Re:Meh (0)

Anonymous Coward | more than 2 years ago | (#37321958)

So you didn't read the bit where it went into some detail about why / why not, then gave examples of where SSDs can be highly useful?

Speak for your fucking self, I thought that was a great article to learn from.

Re:Meh (1)

Blaskowicz (634489) | more than 2 years ago | (#37321962)

Although not as explicitly stated as in the story about eBay, you get into situations where the SSDs save you money, either because they replace a rack of high-speed HDDs, or because they may save you buying a server with 1TB of memory (not cheap).

That article gives more numbers and testimonies. If you don't care, fine, get over it.

Re:Meh (4, Informative)

Rockoon (1252108) | more than 2 years ago | (#37322188)

15K enterprise drives cost around ~$1/gigabyte ... not all that much cheaper than SSDs, which cost around ~$2/gigabyte (MLC) or ~$5/gigabyte (SLC).

Now, the comparison in the summary is between 3 SSDs and 60 15K HDDs.. in other words, the HDD solution was enormously more expensive (and that's NOT counting the cost of the stack of Fibre Channel RAID enclosures, let alone the power that a 60-drive stack draws).

You don't seem to know what you are talking about. SSDs aren't much more expensive per gigabyte than HDDs in performance enterprise environments, and they always significantly outperform them for equal investment, with lower power costs. The only place the "cheaper per gigabyte" argument is true is when you can get away with inexpensive HDDs.. in other words, you heard people talk about one thing but didn't know that it didn't apply to another.

When you don't know what you are talking about, act like it.
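
To put rough numbers on that, here is the drive-only arithmetic using the parent's $/GB figures. The capacities (600GB 15K drives, 960GB MLC SSDs) are assumed for illustration; the article doesn't give them.

<ecode>
# Drive-only cost comparison; $/GB figures from the parent post,
# capacities assumed.
hdd_count, hdd_gb, hdd_dollars_per_gb = 60, 600, 1.0
ssd_count, ssd_gb, ssd_dollars_per_gb = 3, 960, 2.0

hdd_total = hdd_count * hdd_gb * hdd_dollars_per_gb
ssd_total = ssd_count * ssd_gb * ssd_dollars_per_gb

print(f"60x 15K HDD: ${hdd_total:,.0f}")   # $36,000 before enclosures/power
print(f"3x MLC SSD:  ${ssd_total:,.0f}")   # $5,760
</ecode>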

Re:Meh (3, Informative)

antifoidulus (807088) | more than 2 years ago | (#37322302)

The difference is even starker when you take rack space into account. The largest 15K drive I could find was 600GB. Enterprise SSDs, on the other hand (if you don't want to go the PCIe route), are right now approaching 1TB for a 3.5" drive, and the difference in density between the two is only going to grow. The reduced amount of rack space SSDs take up is going to further decrease operating costs.

Re:Meh (0)

Anonymous Coward | more than 2 years ago | (#37322634)

The difference is even starker when you take rack space into account. The largest 15K drive I could find was 600GB. Enterprise SSDs, on the other hand (if you don't want to go the PCIe route), are right now approaching 1TB for a 3.5" drive, and the difference in density between the two is only going to grow. The reduced amount of rack space SSDs take up is going to further decrease operating costs.

How do you compare? 3 SSD = ??? GB. 60 15K = ??,??? GB?

Some people read more into the articles than what the articles tell us. I have yet to find an economical solution where SSDs can replace an HD. While I can find solutions where I would prefer SSDs, I still need HDs for storage......

Re:Meh (0)

Anonymous Coward | more than 2 years ago | (#37322386)

Wear isn't being taken into consideration.

Databases, particularly transactional ones (MySQL/InnoDB, Oracle, etc.), have high read contention but practically zero writes after the first write. So in the case of the 60-drive array costing probably $60,000 vs. the PCIe card, which cost probably $7,000, in this specific use, yes, it was enormously cost effective. In the enterprise world, this is the ideal condition for SSD.

VM images (e.g. the OS image, not /tmp, /var/log, or /home) are the next logical use. Followed by video capture and processing: 4K 3D... oh, let me do the math... needs 4,954,521,600 bytes per second for uncompressed video (4480 (h) x 2304 (v) x 60 (fps) x 8 bytes per pixel (16 bits per channel)), yes, 5GB/sec. Through clever tricks ( http://www.red.com/products/red-rocket ) dedicated compression hardware brings it down ( see http://www.red.com/products/epic ).

But also note that the cameras can do 150fps. So 13GB/sec... yeah. When you get into the nitpickery of it, they're still using YUV colorspace and lossy compression up to 18:1. The cameras currently do 42MB/sec. Pretty nice, huh?

To give you an idea... RED makes cameras capable of 28K (500MB/sec)... not just 4K.
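
Checking that bandwidth arithmetic (the 8 bytes per pixel corresponds to four 16-bit channels, per the figures above):

<ecode>
# Uncompressed video bandwidth at the stated resolution.
width, height, bytes_per_pixel = 4480, 2304, 8
frame_bytes = width * height * bytes_per_pixel   # 82,575,360 bytes/frame
print(frame_bytes * 60)    # 4,954,521,600 B/s at 60 fps  (~5 GB/s)
print(frame_bytes * 150)   # 12,386,304,000 B/s at 150 fps (~12.4 GB/s)
</ecode>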

Re:Meh (2)

antifoidulus (807088) | more than 2 years ago | (#37322590)

Real-time video capture (or any sort of real-time capture of streaming data, really) is about the WORST application for SSDs. SSD writes can potentially take a REALLY long time if the drive needs to clear the sector before writing. While TRIM lessens the probability of this happening, it by no means removes it. There is still the possibility of writing to a TRIMed sector again between when the TRIM command was issued and when the SSD does garbage collection, which can be delayed considerably if a lot of data is being written to the SSD. In a real-time system, you just do not want to have to deal with this unpredictability, esp. when long sustained writes to SSDs aren't that much better even when the sectors are clean.

Now compare this with spinning disks. Provided there is no additional disk contention (and if you are capturing HD video, there really shouldn't be), the write times for long sequential writes to a disk are VERY predictable, as the disk doesn't care what may or may not have been there before; it just over-writes it.

Re:Meh (0)

Anonymous Coward | more than 2 years ago | (#37322612)

15K enterprise drives cost around ~$1/gigabyte ... not all that much cheaper than SSD's which cost around ~$2/gigabyte (MLC) or ~$5/gigabyte (SLC)

Wait...what? By your own figures the 15K enterprise drives cost at most half as much as the SSDs. That doesn't say "not all that much cheaper" to me.

The cheaper solution in this case may be the SSDs, but, according to your own quoted prices, the HDDs are cheaper per gigabyte.

Re:Meh (1)

drsmithy (35869) | more than 2 years ago | (#37322684)

Wait...what? By your own figures the 15k enterprise drives cost at least half as much as the SSDs. That doesn't say "not all that much cheaper" to me.

There's a lot more to enterprise storage than the drives. Taking into account those additional costs, twice as much per drive doesn't add up to a lot more expensive in the big picture.

So a good idea would be... (0)

joocemann (1273720) | more than 2 years ago | (#37321850)

... to produce hybrid technology that would make use of both. I'm not an engineer, but I'm sure some level of hybridization could be produced that would permit a system of prioritization/fragmentation making the most efficient use of the hybrid drive. I can even imagine someone developing software that would analyze system performance and make recommendations as to which part of the drive (solid/disk) an application ought to be run from.

No doubt, a naysayer lacking creativity will tell me it's not possible... Please refrain, because I will only tell you to stfu and try inventing instead of reproducing.

Re:So a good idea would be... (2)

SpiralSpirit (874918) | more than 2 years ago | (#37321866)

except it's already on the market. http://www.seagate.com/www/en-us/products/internal-storage/momentus-xt-kit/ [seagate.com]

Re:So a good idea would be... (1)

amicusNYCL (1538833) | more than 2 years ago | (#37321924)

Are you sure that's not just a 500GB disk with a 4GB cache?

Re:So a good idea would be... (1)

SpiralSpirit (874918) | more than 2 years ago | (#37321942)

A 500GB HDD with intelligent caching using solid-state memory... i.e. a hybrid of an SSD and an HDD.

Re:So a good idea would be... (2)

spire3661 (1038968) | more than 2 years ago | (#37322062)

I'm running this drive on my gaming comp. It has a 4GB flash 'mirror' of all the files you use most frequently. All the logic and mirroring is handled by the drive controller, so it's OS-agnostic. It runs great, especially for the price, and considering I built my gaming rig before Z68 was available. The HDD portion benchmarks at 90MB/sec, besting my old 74GB 10K Raptor.

Re:So a good idea would be... (1)

NuShrike (561140) | more than 2 years ago | (#37322558)

Sounds kinda slow. My triple-RAID 0 SSD setup benchmarks at 400MB/s (Sony Z11). For your price, I guess it's good enough for general usage. After experiencing this, it's hard to go back to HDD speeds.

Re:So a good idea would be... (1)

seawall (549985) | more than 2 years ago | (#37321926)

...and a beautiful thing it is for fast booting (after the first time) and if you mostly run the same few programs. Not so pleasant where files are rapidly changing but just the ticket for a laptop.

Re:So a good idea would be... (2)

PhrostyMcByte (589271) | more than 2 years ago | (#37321948)

Not quite. Seagate's tech is a simple block cache, where the most frequently accessed blocks get thrown on the SSD portion. It is no doubt quite effective, but we should be able to do better if we move the logic into the OS.

The OS has intimate knowledge of the filesystem, and can easily profile its use. Files that are small or randomly accessed should get put on the SSD, while large sequentially accessed files should get put on the HDD.
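
As a toy illustration of that policy, a sketch in Python: small or frequently accessed files go to the SSD, big sequential files to the HDD. The thresholds are invented for the example.

<ecode>
# Toy file-placement policy: seek-bound files -> SSD, streaming -> HDD.
import os

SMALL_FILE_BYTES = 1 << 20        # under 1 MiB: seek time dominates
HOT_ACCESSES_PER_DAY = 50         # arbitrary "frequently accessed" cutoff

def choose_tier(path, accesses_per_day):
    """Return 'ssd' or 'hdd' for a file, by size and access frequency."""
    size = os.path.getsize(path)
    if size < SMALL_FILE_BYTES or accesses_per_day > HOT_ACCESSES_PER_DAY:
        return "ssd"              # small/random I/O: seeks dominate
    return "hdd"                  # large sequential I/O: throughput dominates

# e.g. choose_tier("/var/lib/db/index", 400.0) -> 'ssd'
</ecode>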

Re:So a good idea would be... (3, Insightful)

SpiralSpirit (874918) | more than 2 years ago | (#37322010)

...and would require integration with multiple OS's, require drivers (the quality of which we don't know), etc. Making it transparent to the OS means you don't have any of those problems. It's a trade off, and for a first gen drive, probably the better way to go.

Re:So a good idea would be... (2)

Microlith (54737) | more than 2 years ago | (#37322094)

and would require integration with multiple OS's

Of course. OS awareness of hardware capabilities lets you avoid workarounds like TRIM. Instead we have TRIM because the nature of the hardware (NAND flash) is hidden behind an interface designed for rotating media that behaves differently.

require drivers

Well, if you integrate it into the block layer, then it can be utilized by the filesystem with one layer of abstraction. Abstract it completely and you have to hope that the drive logic is capable of making good decisions, otherwise you'll end up with your swap occupying the SSD and not get any sort of speed increase.

Re:So a good idea would be... (1)

SpiralSpirit (874918) | more than 2 years ago | (#37322112)

From all reports, the Seagate drives do a fair job of selecting what to cache. For these drives I think your concerns are mostly moot. Perhaps there are better methods of doing what they do, but the drives perform admirably in terms of caching, and I'm sure a better generation of drives is less than a year away.

Re:So a good idea would be... (0)

Anonymous Coward | more than 2 years ago | (#37322408)

...and would require integration with multiple OS's, require drivers (the quality of which we don't know), etc.

Making it transparent to the OS means you don't have any of those problems. It's a trade off, and for a first gen drive, probably the better way to go.

Why? Simply have the SSD component show up as another device.

ZFS has been using SSDs for transparent caching for years, as has ReadyBoost under Windows 7.

Re:So a good idea would be... (1)

Lennie (16154) | more than 2 years ago | (#37322218)

ZFS with L2ARC seems to do fine with that; I haven't looked closely at how it does it, though. But I hear it does have some optimizations.

Re:So a good idea would be... (1)

cupcakewalk (1163109) | more than 2 years ago | (#37322032)

Momentus XT is not an enterprise drive. I have a 500GB Momentus XT in my MBPro. It gives me a (relatively) inexpensive speed bump with the best of both worlds. I have the capacity I need with the platters and the speed from the solid state. The built-in logic seems to know what I use the most and keeps things fast. All for a fair price.

Re:So a good idea would be... (1)

SpiralSpirit (874918) | more than 2 years ago | (#37322086)

...no one said it was. I was just replying to the parent who suggested the idea of a hybrid drive, without knowing that they do in fact exist to some degree. I assume they will become even more popular as other brands release them, perhaps in a 3.5" performance version.

Re:So a good idea would be... (0)

Anonymous Coward | more than 2 years ago | (#37321876)

actually they already have these lol

Re:So a good idea would be... (0)

Anonymous Coward | more than 2 years ago | (#37321878)

Perhaps you mean "http://en.wikipedia.org/wiki/Hybrid_drive"

They were produced; they did not provide the expected benefit versus the complexity, but you can still buy one today.

Buy one here: http://www.amazon.com/Seagate-Momentus-7200RPM-Hybrid-ST95005620AS-Bare/dp/B003NSBF32/ref=sr_1_2?ie=UTF8&qid=1315352905&sr=8-2 :-)

Re:So a good idea would be... (0)

Anonymous Coward | more than 2 years ago | (#37321888)

They already exist. Check out the Seagate Momentus® XT Solid State Hybrid Drives.

Re:So a good idea would be... (1)

Microlith (54737) | more than 2 years ago | (#37321900)

Hybridization generally shows up in servers, used in a tiered form. Actual hybrid devices (like the OCZ or Seagate ones) are of limited value in the enterprise.

I can even imagine someone developing software that would analyze system performance and make recommendations as to which fragment of the drive (solid/disk) an application ought be run from.

This is currently a fairly hot area of research, though most of it is occurring behind closed doors at the moment.

Re:So a good idea would be... (1)

amicusNYCL (1538833) | more than 2 years ago | (#37321916)

I can't see a whole lot of development going into hybrid drives when it's entirely possible that the price point of SSDs will drop enough in a few years to justify mainstream use.

Re:So a good idea would be... (1)

Rockoon (1252108) | more than 2 years ago | (#37322254)

They have already dropped enough to justify enterprise use. Enterprise-grade 15K drives are expensive ($1/GB) compared to the consumer-grade stuff people are normally talking about when comparing the cost of a gigabyte of storage.

It's almost as if nearly all of Slashdot has no idea that 15K drives cost so much... hybrid drives make no sense unless you give up on the RPMs.

The high end RAID cards do this (1)

Sycraft-fu (314770) | more than 2 years ago | (#37321964)

You can cache your data using SSDs but still have a RAID. Adaptec, LSI, Intel, they all have cards that do it.

Re:So a good idea would be... (2)

JorDan Clock (664877) | more than 2 years ago | (#37321978)

In addition to the Momentus XT that several others mentioned, there is also SSD caching on Intel's Z68 motherboards allowing you to designate a 20GB+ drive as an automated cache of whatever data is traveling to and from your hard drives. The effects are quite noticeable and it seems like the system is pretty smart. Plus, it is done in software that is easily updated should a more efficient algorithm be found.
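
Intel's actual caching logic isn't public, but an LRU block cache captures the general idea: recently touched blocks are served from the SSD, everything else falls through to the HDD. A minimal stand-in:

<ecode>
# Toy model of an SRT-style hot-block cache (LRU is a stand-in for
# whatever Intel actually does).
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.resident = OrderedDict()        # block number -> True

    def access(self, block):
        """Record an access; return where the read was served from."""
        if block in self.resident:
            self.resident.move_to_end(block)   # refresh recency
            return "ssd"
        self.resident[block] = True            # promote into the cache
        if len(self.resident) > self.capacity:
            self.resident.popitem(last=False)  # evict the coldest block
        return "hdd"

cache = BlockCache(capacity_blocks=4)
print([cache.access(b) for b in (1, 2, 1, 3, 1, 4, 5, 1)])
# ['hdd', 'hdd', 'ssd', 'hdd', 'ssd', 'hdd', 'hdd', 'ssd']
</ecode>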

Re:So a good idea would be... (1)

antifoidulus (807088) | more than 2 years ago | (#37322012)

TFA mentions this; in fact, the solution the company in the article uses is really a three-level hybridization: a RAM cache (which blows even SSD out of the water, but costs at least 5x as much per byte as SSD and of course is volatile), the SSD, and the hard disk array for seldom-used files.

Re:So a good idea would be... (3, Interesting)

wagnerrp (1305589) | more than 2 years ago | (#37322034)

Commercial SANs have had such tiered capability for years. Multiple levels of performance from bulk, long term storage on spinning disks to short term storage on SLC flash and finally a big memory cache. ZFS for Solaris and FreeBSD offers something similar with the L2ARC, allowing a cheaper but slower way to provide a large, high speed memory cache.

Re:So a good idea would be... (1)

Cylix (55374) | more than 2 years ago | (#37322586)

Actually, most newer SANs support an even further tiered setup. Some will manage this automatically and others require a scan and move. Various technologies do this with varying levels of intricacy and transparency.

A few solutions looked at the heat map of access and would proactively move data to SSD storage, traditional SAS storage, or high-density SATA storage. If you wanted to spring for the frontend cache systems, they were sporting volatile caches based on a memory backend. Though our workloads typically would just fly things right through the cache, so it wouldn't provide much usefulness for the cost.

Everyone did it slightly differently, and there were some really new implementations of the typical SAN environment. Our group, being very geeky, really liked one of the newer technologies, but everyone agreed it was a bit too new to gamble on. Sometimes vendors don't win out on just the geek points alone.

These guys are also happy to come out and talk about their technologies. I highly recommend picking at least four major vendors and talking with them. It's just not going to be on the cheap side, and chances are if you need that level of speed/durability, it's because you can't lose data/system uptime.

Re:So a good idea would be... (1)

Lennie (16154) | more than 2 years ago | (#37322208)

Yes, it has been done. Even in software, one of the best known is probably ZFS with L2ARC on Solaris and other systems, look it up.

Have a nice day.

Re:So a good idea would be... (1)

icebraining (1313345) | more than 2 years ago | (#37322550)

I remember reading about MS doing research in that area years before SSDs were all the craze. In fact, Vista already supported them (in 2007).

Dan "Obvious" Marbes (5, Interesting)

demonbug (309515) | more than 2 years ago | (#37321870)

For example, when Dan Marbes, a systems engineer at Associated Bank, deployed just three SSDs for his B.I. applications, the flash storage outperformed 60 15,000rpm Fibre Channel disk drives in small-block reads. But when Marbes used the SSDs for large-block random reads and any writes, 'the 60 15K spindles crushed the SSDs,' he said."

So when you need lots of small, random reads, 3x SSDs beat 60x HDDs. Most of the time is spent seeking to the file on the HDDs; a ~4.6 ms random seek time is an order of magnitude or more slower than the flash-based drives. No surprise here.

When you are just transferring large files, most of the time is spent actually transferring data. A modern SSD might manage 300-400 MB/s read, but 20x as many HDDs are still going to beat the crap out of them.

The only mildly surprising part is the HDDs winning for all writes, but I guess that really depends on how the test is set up. Unless you are actually writing to random parts of the HDD, it is basically a straight-up write operation, so only throughput matters - and again, 60x HDDs are going to beat 3x SSDs (though it is important to note that SSDs are significantly slower at writing than reading in general, although still much faster than an HDD on an individual basis).
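
The order-of-magnitude argument in numbers (the per-SSD IOPS figure is an assumption; the article doesn't give one):

<ecode>
# Rough small-random-read IOPS arithmetic for the setups in the summary.
hdd_seek_ms = 4.6                        # ~15K drive average random seek
hdd_iops = 1000 / hdd_seek_ms            # ~217 reads/sec per spindle
hdd_array_iops = 60 * hdd_iops           # ~13,000 for 60 spindles

ssd_iops = 20_000                        # assumed per-SSD small-read IOPS
ssd_array_iops = 3 * ssd_iops            # 60,000 for just three SSDs

print(round(hdd_array_iops), ssd_array_iops)   # 13043 60000
</ecode>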

Re:Dan "Obvious" Marbes (1)

jittles (1613415) | more than 2 years ago | (#37322716)

The writes probably win because the RAID controller stripes the writes across all 60 drives instead of just the 3. If you had 60 SSDs versus 60 platter drives, I would be willing to bet that the SSDs win every time.

Re:Dan "Obvious" Marbes (0)

Anonymous Coward | more than 2 years ago | (#37322740)

You're forgetting about the hard drives' cache memory. At 16MB per drive, that's almost a gigabyte of RAM across the array. The seek latency won't show up in write tests until you start to write a significant fraction of a gigabyte to the array, and by then, the SSDs' lack of bandwidth will be a problem.

TFA is a bit more interesting (1)

santax (1541065) | more than 2 years ago | (#37321902)

While this summary should never have been posted here (please taco, give us good stuff, not 'I know how to move the mouse pointer so I am a l33t hax0r' stuff), the full article is interesting. It gives a couple of hands-on experiences from users that are worth reading. It seems the SSD can gain you speed in more situations than previously thought/marketed. Although for my uses, I'm not going to spend more than 2 dollars/euros per GB.

Re:TFA is a bit more interesting (-1)

Anonymous Coward | more than 2 years ago | (#37321944)

Taco's gone, dude...

Re:TFA is a bit more interesting (0, Offtopic)

Anonymous Coward | more than 2 years ago | (#37321954)

OP is just in denial

Re:TFA is a bit more interesting (0)

Anonymous Coward | more than 2 years ago | (#37322446)

And the mods too.

Re:TFA is a bit more interesting (0)

Anonymous Coward | more than 2 years ago | (#37322420)

Ain't no taco 'round these parts no more.

Re:TFA is a bit more interesting (0)

Anonymous Coward | more than 2 years ago | (#37322602)

Although for my uses I'm not going to spend more than 2 dollar/euro per GB.

I would love to be your broker. Arbitrage like that is hard to find.

Re:TFA is a bit more interesting (0)

Anonymous Coward | more than 2 years ago | (#37322636)

You have a high user ID, so you probably are not used to actually reading the content of the articles here.

Taco checked out a couple weeks back.

Re:TFA is a bit more interesting (0)

Anonymous Coward | more than 2 years ago | (#37322706)

Dave...err...Taco's not here, man.

hopefully SRT can be more fully advanced (1)

atarione (601740) | more than 2 years ago | (#37321952)

I've had a bad experience with SSDs, having returned 2 due to deal-breaking problems... one caused BSODs, the other couldn't handle suspend mode without locking up.

I now have 2x Samsung F3s in RAID 0 (plus backup on a NAS).

Personally, I have no desire to do the sort of file management small SSDs currently demand, and from reading the reviews they all pretty much suck... however, SRT is a compelling option... all the convenience of big mechanical drives plus the speed boost of an SSD... unless the price per GB of SSDs can be brought down a lot, I think SRT may well be the way to go.

Re:hopefully SRT can be more fully advanced (1)

tayhimself (791184) | more than 2 years ago | (#37322028)

This is partly your fault for trusting a company called OCZ with a brand-new controller (SandForce) that was apparently optimized only for speed. Be a little conservative and it'll pay off. I own a WD SiliconEdge Blue drive in a gaming box and I've been very happy with it. It's no speed demon, but it works well. Not sure what SSD came with my iMac, but needless to say that works fine as well.

SSDs for lower power, low noise environments (2)

oracleguy01 (1381327) | more than 2 years ago | (#37321980)

To me, the cost per GB of an SSD really makes it tough to justify in a lot of cases, especially if you want to store large programs like games, where you'll need at least a 100GB SSD. However, one place I have started to use low-capacity (8 or 16GB) SSDs is in low-noise and/or low-power environments. If you team them with an ITX Atom board and the right power supply, you can build a small computer with no moving parts whatsoever. And the computer will have very low power usage for applications like HTPCs or network appliances (like firewalls) where the machine might always be powered on.

Re:SSDs for lower power, low noise environments (0)

Anonymous Coward | more than 2 years ago | (#37322338)

It's also quite nice for relatively low-load/low-total-storage web servers. I keep mine in a cardboard box with a 2GB USB flash drive and no hard drive. It has been running a MySQL database and an Apache web application on Debian for the last 3 years. I keep expecting the flash drive to die, but no sign yet. I have another backup one in case something does go wrong. If the current one gets fried, I just shut down, take the old one out and boot with the new one.

And, no, I'm not telling you the URL. Cardboard is too flammable :-)

Re:SSDs for lower power, low noise environments (1)

icebraining (1313345) | more than 2 years ago | (#37322572)

Same here, although I don't run Apache but Lighttpd. My workload is basically read-only, though, so it's very cache friendly. The flash drive only lights up during boot.

Confidential Data not safe on SSD. (0)

Anonymous Coward | more than 2 years ago | (#37322014)

Confidential Data not safe on SSD.

http://hardware.slashdot.org/story/11/02/17/1911217/Confidential-Data-Not-Safe-On-Solid-State-Disks [slashdot.org]

So the solution is to totally encrypt your SSD, which slows down its performance, as the CPU has to decrypt/encrypt everything on the fly. Good luck recovering data off the drive in case it won't boot.

If you don't encrypt, then a device like this or software can be used to probe your SSD

http://www.thenewspaper.com/news/34/3458.asp [thenewspaper.com]

I've had a boot RAID 0 of two 10,000 RPM drives for years, since before SSDs ever came out, and mostly never used its extreme speed, except when cloning or copying huge folders GBs in size. Which was rare. And then the other drive also had to be just as fast and expensive.

So for most people SSDs are a waste; the larger, less expensive and more private HDDs are still the best value.

SSDs also have limited writes, so they don't lend themselves to a lot of data transfer and changes.

Get a 64-bit machine and load up on RAM, perhaps with a 7,200 RPM or faster drive; that's the best solution for most users. Storage speed alone isn't a cure-all for a fast computer, as reads and writes don't occur all the time.

In other news (0)

gmhowell (26755) | more than 2 years ago | (#37322036)

The Emperor insists his new clothes look AWESOME!

SSD and Expense (0)

Anonymous Coward | more than 2 years ago | (#37322048)

Why are they still so expensive?

The outcome is not exactly what they said (2)

Courageous (228506) | more than 2 years ago | (#37322116)

If you think about it, the outcome of this test is 100% in favor of the SSD.

Think about it:

The tester was willing to test only 3 SSDs versus *60* 15K drives. So the tester thought that 20 times fewer drives was a fair comparison. What is the tester actually saying here? I have a feeling I know. :-)

Anyway, 15K drives are not long for the market. Soon, all that will be left are economy class, 10K, and SSD's.

Only a little more maturity, and the enterprise flood gates will open. When that happens, the hapless victim will be the short-stroked IOPS environment, where total IOPS was always the requirement, and that requirement was for more IOPS than capacity. I.e., if a 15K drive offers 400 IOPS, and you need 400,000 sustained, but don't have to store very much at all, your only current choice is buying a lot of 15K drives. Or far fewer SSDs.

The switchover point is only a heartbeat away.

Bye 15K drive. I'll miss you.

C//

Re:The outcome is not exactly what they said (0)

Anonymous Coward | more than 2 years ago | (#37322292)

Enter the 30K and 60K RPM drives, which are coming soon. With rarefied sealed cases, they could be a real powerhouse. Magnetic media is not dead yet and has a long way to go. The reason is that it's so cheap and easy to make. Yes, moving parts are bad, but if you could get 60K RPM 2+TB drives in a 2.5" form factor, there's a large existing market for that type of speed and size. I have no doubt that SSDs will be able to get that much or even more in a similar form factor, but the manufacturing is expensive, and they have to throw out 10 for every one they make. I'm pretty sure they use a litho technique, which is a long and error-prone process. Whereas with magnetic media, the "storage" is "manufactured" by the drive itself (laying down tracks with the record head), and the media is just metal particles on a glass platter, which is basically sprayed on in bulk.

Don't be poor... (1)

sdguero (1112795) | more than 2 years ago | (#37322124)

Get one. It's worth it.

Re:Don't be poor... (0)

Anonymous Coward | more than 2 years ago | (#37322624)

Oh Snap!

How come nobody in the ghettos thought of that one! Golly, and the solution was so simple!!!

Damn.. you know how to cure cancer? Get better!

Man... you have some crazy insights... you should write a book!

Seeks are an issue (3, Interesting)

Anonymous Coward | more than 2 years ago | (#37322128)

Just as an info dump for anyone who's not familiar with why SSDs perform so much better: SSDs have far better seek performance.

A normal HDD takes about 10ms to seek (3ms at the very high end, 15ms at the low end; 10ms is a good rule of thumb), which means you've got a princely 100 seeks per second per spindle (i.e. per HDD). SSDs don't have seek limitations. Looking up a contiguous block of data vs. a non-contiguous one makes no difference to an SSD.

It turns out that 100 seeks isn't a lot in serving infrastructure or, in some cases, on a desktop. When you go to read a file off disk, multiple seeks are involved: you need to look up the inode (or equivalent) and find the file, and a large file will probably be in many different chunks requiring separate seeks to access them.

Even on a desktop you'll frequently be seek-bound, not throughput-limited. Let's say you are starting up a largish Java application (Eclipse might be a good example). It references a huge number of library (.jar) files, which are certainly large enough to require many seeks to access. And those libraries are often linked to system libraries which have their own dependencies and may have additional dependencies, all of which require further seeks. Plus, Eclipse will look up the time stamps on files in the project... and so on.

Boot is another time when HDDs are usually seek-bound: lots of different applications/services/daemons are starting at the same time, loading lots of libraries and causing lots and lots of seeks.

On server infrastructure, a highly utilized database will probably be seek-bound, not throughput-limited.

The article is kind of stating the blindingly obvious: if you are seek-bound, SSDs are better. And 60 drives gives ~6000 seeks per second. A typical modern-ish desktop HDD can get on the order of 100MB/s data transfer (average sustained); more expensive HDDs can get quite a lot more. If we take ~3GB/s as a ceiling on the array's aggregate throughput, then at 6000 seeks/second you are getting 3000MB/6000 seeks = 0.5MB per seek. So the result makes perfect sense if you are looking up data that is either entirely non-contiguous or smaller than 500kB: an SSD will beat you every time on seeks (since it has no seek time).

The limitations on SSDs are: they have throughput limits, just like HDDs, and, more importantly, their write performance is usually significantly worse than an HDD's (writing on an SSD often involves reading and re-writing large chunks of data, even for very small writes). You can easily construct tests where HDDs perform better than SSDs (particularly something like a 60-spindle array of HDDs, where an awful lot of writes can be cached in the on-disk RAM buffer, which is common on higher-performance drives - often battery-backed so they can "guarantee" the write has been committed without having to wait for a write to the magnetic media).

Of course, the other obvious application for SSDs is where you want robustness and silence, i.e. laptops. Oddly enough, their power performance isn't that much better than a normal HDD's (although that might have changed since I last read about it).
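
The crossover arithmetic from a few paragraphs up, made explicit (all figures are the same rules of thumb used above, not measurements):

<ecode>
# Break-even transfer size per seek for the 60-spindle array.
seeks_per_sec_per_drive = 100       # ~10 ms average seek
drives = 60
array_seeks = drives * seeks_per_sec_per_drive   # 6,000 seeks/sec

array_mb_per_sec = 3000             # the ~3 GB/s aggregate ceiling above
mb_per_seek = array_mb_per_sec / array_seeks
print(mb_per_seek)   # 0.5 -> reads under ~500 kB are seek-bound,
                     # and the seekless SSDs win
</ecode>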

 

Most users with a mechanical drive... (1)

Osgeld (1900440) | more than 2 years ago | (#37322192)

Say the cost is worth the wait. What's your point, Mr. Non-Article?

Perhaps more relevant to home/SOHO users is . . . (2)

Kunedog (1033226) | more than 2 years ago | (#37322230)

. . . a Storage Review experiment from over a year ago:

http://www.storagereview.com/western_digital_velociraptors_raid_ssd_alternative [storagereview.com]

They put WD Raptors in RAID 0 to form a high-performance (yet still affordable) platter-drive setup, and then faced them off against Western Digital's new (at the time, first) SSD. Makes sense, right? Except that WD's first SSD was a complete joke: an underperforming [anandtech.com], laughably expensive POS that I forgot about a couple of days after Anand's review. When I first read about it, I couldn't help but think that WD was deliberately setting it up to fail. It was at the bottom of every benchmark yet priced higher than any other (MLC) SSD. They even put a JMicron controller in it, for fuck's sake (not the infamous original one, but still...)! Storage Review's calling it a "mid-range" SSD is very generous at best.

Even so, this supposedly screaming platter-drive setup could only occasionally hang with the bottom of the barrel of SSDs, and mostly lagged behind. And as I said, this was over a year ago. It goes without saying that they didn't worry much about heat, noise, reliability (of RAID 0), or power consumption.

Anand doesn't even list platter drives in his benchmark results anymore because they'd skew the charts so badly.

As a previous poster said, a winning strategy is to get an SSD boot drive just big enough for your OS and programs, and use platter drives for everything else. And since the SSD takes care of your performance needs, you can get the cheapest, slowest, coolest, quietest platter drives. There are some cases where both high performance and high capacity are needed at once (like video editing), but they're not the norm.

Mix this: (1)

alphatel (1450715) | more than 2 years ago | (#37322276)

There is no such thing as poor performance on an SSD, unless you allocate it poorly. The fact that most companies aren't paying attention to fantastic ways to get reasonably priced SSDs into their equipment just proves that hype is awesome and smart still sucks. Luckily for me, smart is still making money (though not nearly as much as hype).

My own very recent experience (2 weeks ago) (4, Informative)

Raleel (30913) | more than 2 years ago | (#37322690)

I moved a small 4TB database from 24x 256GB 15K SAS drives to 24x 240GB OCZ Vertex 3 SATA3 drives. I ran a few queries on the old and the new: same data, same parameters, same amount of data pulled. Both were hooked up via PCIe 8x slots.

The SSDs crushed the SAS. Not just a mere 2x or 3x crushing, a _FIFTEEN TIMES FASTER_ crushing. This was pulling about a million rows out: 12 seconds (SSD) vs. 189 seconds (spindles).

Cost difference? Under $50 per drive more expensive for SSD. I think our actual rate was around $10 per drive more. However, the system as a whole (array + drives + computer) was $12k less. No contest... for our particular application, SSD hands down makes it actually work.

We'll be moving the larger database (same data, same function) to SSD as soon as we can.

RAID SSD for teh win (0)

Anonymous Coward | more than 2 years ago | (#37322718)

I run 3 Vertex 2 120GB SSDs in RAID 0. I set the array to only use 230GB so each drive has extra over-provisioning area. I have a dedicated 500GB HDD that I do weekly backups to, and two 2TB HDDs for storage.

Been running this way for months and I will NEVER go back to spinners for a boot drive.
