
"Limited Edition" SSD Has Fastest Storage Speed

timothy posted more than 4 years ago | from the genuine-leather-bits dept.

Data Storage

Vigile writes "The idea of having a 'Limited Edition' solid state drive might seem counter-intuitive, but regardless of the naming, the new OCZ Vertex LE is based on the new Sandforce SSD controller that promises significant increases in performance, along with improved ability to detect and correct errors in the data stored in flash. While the initial Sandforce drive was called the 'Vertex 2 Pro' and included a super-capacitor for data integrity, the Vertex LE drops that feature to improve cost efficiency. In PC Perspective's performance tests, the drive was able to best the Intel X25-M line in file creation and copying duties, had minimal fragmentation or slow-down effects, and was very competitive in IOs per second as well. It seems that current SSD manufacturers are all targeting Intel, and the new Sandforce controller is likely the first to be up to the challenge."


122 comments


Can you hack one in? (1)

leighklotz (192300) | more than 4 years ago | (#31206772)

Is the cap left off the board so you can just put one in yourself or is it size-reduced as well?

Re:Can you hack one in? (1)

mrsteveman1 (1010381) | more than 4 years ago | (#31206868)

There's a blank space with pads, yes, but I don't think it would be a good idea to just solder one in there. Going out on a limb here, but the supercap may require firmware support to actually complete the writes when the main power is yanked. Then again, maybe it's just wired in parallel with the power from the SATA connector, dunno. I'm not much help :)

Re:Can you hack one in? (1)

Ethanol-fueled (1125189) | more than 4 years ago | (#31207660)

Or the supercap could depend on other components that were omitted from the board(s) or shorted out (zero-ohm resistors).

Re:Can you hack one in? (1)

leighklotz (192300) | more than 4 years ago | (#31207782)

If it doesn't require firmware someone will figure it out. It's unlikely it requires firmware though, unless they specifically decided it had to somehow. I can believe some other parts might be omitted though (maybe a diode or jumper as mentioned) but I bet it's no biggie. Now, what's the SMT supercap part they use?

Re:Can you hack one in? (0)

Anonymous Coward | more than 4 years ago | (#31209542)

So, you are working on your PC and the power goes; the drive powers off. Without supercaps to provide the extra few seconds of power to the drive, the mapping tables are not moved from RAM to flash and are lost. Or does this drive not store the tables in RAM, but in flash instead?
 
Excuse my ignorance but I work with enterprise SSDs and not consumer SSDs.
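
For what it's worth, a rough Python sketch of why those tables matter (illustrative only, not any vendor's actual FTL design):

    # Toy flash translation layer (FTL). The logical-to-physical map
    # is kept in RAM for speed and checkpointed to flash; a supercap
    # buys enough time to flush the in-RAM state when power is cut.
    class ToyFTL:
        def __init__(self):
            self.map_ram = {}    # logical block -> physical page (volatile)
            self.map_flash = {}  # last checkpoint (persistent)

        def write(self, lba, phys_page):
            self.map_ram[lba] = phys_page  # updated in RAM on every write

        def checkpoint(self):
            self.map_flash = dict(self.map_ram)  # flush the map to flash

        def power_loss(self, supercap):
            if supercap:
                self.checkpoint()  # the caps power this final flush
            self.map_ram = {}      # RAM contents are gone either way
            return self.map_flash  # only the checkpoint survives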

Marketspeak, or as normal people call it: lies. (0)

Anonymous Coward | more than 4 years ago | (#31206782)

No fragmentation? What about the fact that SSDs get their blazing fast speed from fragmenting everything they write across the nodes?

Another /.vertisement of little value.

Re:Marketspeak, or as normal people call it: lies. (2, Informative)

beelsebob (529313) | more than 4 years ago | (#31206902)

Why does it matter if they get their blazing fast speed by fragmenting all the data all over the place? On hard disks fragmentation is a bad thing; on SSDs it's a good thing. What's your point?

Re:Marketspeak, or as normal people call it: lies. (0)

Anonymous Coward | more than 4 years ago | (#31206956)

The whole issue with SSDs is that the blazing speed gained in this fashion eventually slows to almost a halt once the nodes near being full. And as the icing on the cake, that "full" space can be actual information or deleted stuff that cannot be overwritten separately on a node, since the whole block must be erased all at once to make it writeable again.

I've heard they're working on a TRIM function thingy to remedy this, but I haven't really paid attention since.

Love, OP.
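
A back-of-envelope Python model of the effect being described above (block and page sizes are illustrative, not any particular drive's):

    # NAND programs in small pages but erases in large blocks, so
    # updating one page in a full block forces all the still-live
    # pages to be copied out first.
    BLOCK_KB, PAGE_KB = 512, 4
    pages_per_block = BLOCK_KB // PAGE_KB  # 128

    def pages_written(live_pages):
        """Physical page writes needed to update one logical page."""
        return live_pages + 1  # recopy the live data, plus the new page

    print(pages_written(0))                    # fresh block: 1
    print(pages_written(pages_per_block - 1))  # nearly full block: 128
    # TRIM helps by telling the drive which pages hold deleted data,
    # so they no longer count as "live" and don't get recopied.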

Re:Marketspeak, or as normal people call it: lies. (4, Insightful)

MrNemesis (587188) | more than 4 years ago | (#31207012)

If "almost a halt" is 200MB/s read speeds as opposed to 260, I think I can live with it before I upgrade to my TRIM firmware, which negates the whole issue... whoops, I started using TRIM on my home drives months ago.

Seriously, the SSD market has exploded in the last 12 months. It's gone from being an expensive tool useful to enthusiasts to a not-quite-as-expensive-but-faster-than-any-number-of-hard-drives-can-provide utility that's worth five times its price, especially for enterprise users.

* Proud owner of 1 intel SSD, 3 OCZ SSD's and administrator of about 3TB of SSD SAN and >8GB FusionIO cache with a bunch of spinning magnetic domains in the background that we can't get rid of fast enough

Re:Marketspeak, or as normal people call it: lies. (1)

PopeRatzo (965947) | more than 4 years ago | (#31207046)

especially for enterprise users.

Why just for "enterprise" users, and what does Star Trek have to do with it anyway?

Would SSDs make a big difference for people who create and edit sound or video? If you tell me it'll improve the performance of my digital audio workstation or video editing software, I'll blow my fat tax refund check on some SSDs right now. Can I just hook up SSDs to the SATA controller on my machine?

Hell, I'm just full of questions. Irish coffees tend to do that to me.

Re:Marketspeak, or as normal people call it: lies. (1, Informative)

Anonymous Coward | more than 4 years ago | (#31207172)

If you're doing non-linear sound and video editing with multiple simultaneous streams coming out of many files, and those files are larger than your available RAM (common), then yes, an SSD would make a big difference. You'd also experience comparatively blazing boots on your workstation. And yes, SSDs will connect to your SATA controller. As far as the system is concerned, they are hard drives. Very, very fast hard drives.

Re:Marketspeak, or as normal people call it: lies. (1)

nitehorse (58425) | more than 4 years ago | (#31207372)

Make sure you get either an Intel X25-M (though the biggest one they offer is 160GB) or something with an Indilinx controller (OCZ Vertex, for example, up to 256GB). Stay away from anything with a JMicron controller - those drives might be cheaper for bigger sizes, but the performance is crap.

Re:Marketspeak, or as normal people call it: lies. (4, Insightful)

MrNemesis (587188) | more than 4 years ago | (#31207390)

Irish coffees bring out the best in everyone ;)

Reason I started using them at home was video editing - not very useful for encoding, where you can rarely outpace your CPU's ability to encode stuff, but for random seeking/non-linear stuff/extracting streams/muxing, SSDs are a boon. Depending on your workload you can even get away with using crappy SSDs that are shit at random workloads but awesome at sequential.

TBH though, you'll get the most noticeable improvement from using it as your system drive; apps start almost instantly and there's never any thrashing as $bloaty_app loads. Heck, my Linux machines boot in 5s with the comparatively cheap OCZ Agility drives; the difference is less noticeable in Windows, however. Try running a laptop off an SSD for a month and then go back to a mechanical drive - the apparent slowness will drive you crazy :)

The benefits for enterprise users are especially large because 20k of SSD can replace 100k of fibre channel whilst getting 10x the performance and greater reliability. Plus Picard totally loves SSDs, as he can rest his tea, earl grey, hot, on them without risking Data loss.

Re:Marketspeak, or as normal people call it: lies. (2, Informative)

seifried (12921) | more than 4 years ago | (#31208894)

Try running a laptop off an SSD for a month and then go back to a mechanical drive - the apparent slowness will drive you crazy :)

Not to mention the battery life decrease: HD -> SSD got me 40% longer battery life on my netbooks. About 11 hours in total now, which is the way it should be. Plus no more worries about vibrations, less heat, and it's quieter.

Re:Marketspeak, or as normal people call it: lies. (1)

PopeRatzo (965947) | more than 4 years ago | (#31209652)

TBH though you'll get the most noticeable improvement with using it as your system drive;

Thanks, MrNemesis. That's exactly what I'm going to do. I'm happy with the data throughput that SATA drives with big caches give me for streaming samples or video, but it would be great to have my system a little peppier.

I'm going to haunt the online stores later today, as soon as I get some breakfast.

Re:Marketspeak, or as normal people call it: lies. (1)

FlyingBishop (1293238) | more than 4 years ago | (#31207580)

You can hook it up to the SATA.

Though if you have a spare PCIe slot, that would probably give you more throughput. Of course, the model that plugs into PCIe is different from the SATA model.

It's likely you would notice the benefit regardless.

ZFS (0)

Anonymous Coward | more than 4 years ago | (#31207260)

* Proud owner of 1 intel SSD, 3 OCZ SSD's and administrator of about 3TB of SSD SAN and >8GB FusionIO cache with a bunch of spinning magnetic domains in the background that we can't get rid of fast enough

ZFS, best of both worlds:

http://blogs.sun.com/brendan/entry/hybrid_storage_pool_top_speeds

:)

Re:Marketspeak, or as normal people call it: lies. (1)

BikeHelmet (1437881) | more than 4 years ago | (#31208720)

>8GB FusionIO cache with a bunch of spinning magnetic domains in the background that we can't get rid of fast enough

Is that supposed to be TB? Don't ioDrives come in 160GB multiples?

Mind you, if I had 8TB of ioDrives, there'd be no need for anything else. Each one of those has read speeds of close to 1GB/sec, and enough IOPS to beat a dozen of the next best competitor. Now if only they cost 15x less per GB.

Re:Marketspeak, or as normal people call it: lies. (1)

PopeRatzo (965947) | more than 4 years ago | (#31207032)

I've got a question here, if you don't mind me asking:

Are SSDs more prone to errors than disk drives? If so, why?

There seems to be some strangeness about SSDs and if I try to go read some technical papers on them on a Friday night when I've half a snoot full, it's going to make me all headache-y.

And regarding this "trim" function, can't they just make the nodes smaller?

And is this problem just going to go away once they get the manufacturing capacity to make gigantic SSDs the way they make gigantic hard drives? I remember when there was a lot of effort to be really efficient in the use of hard disks and now it doesn't seem as important when you've got 2TB drives on sale.

Re:Marketspeak, or as normal people call it: lies. (1)

Idiomatick (976696) | more than 4 years ago | (#31207376)

SSDs are improving very quickly. Yes, they are more prone, but that isn't the main story. We know much more about regular HDDs and their failure modes than we do about SSDs. Because of this, SSDs seem to fail without warning, or don't degrade as gracefully - which is more of the problem, tbh.

Re:Marketspeak, or as normal people call it: lies. (2, Informative)

Jeffrey Baker (6191) | more than 4 years ago | (#31207990)

This is completely backwards. It is hard drives which fail without warning. See Google's recent paper on the futility of S.M.A.R.T. And when an HDD fails, your data is _gone_. The best you can hope for is spending huge amounts of money to put the platters into another drive and read the data back. The predominant failure mode for flash is erase-cycle endurance exhaustion, at which point the flash reverts to being read-only. Compared to an HDD, the flash failure mode is hugely desirable. You can also monitor an SSD and replace it when it reaches the 100,000 erase cycle limit (or 10,000 for MLC). HDDs have no such mechanism.

Re:Marketspeak, or as normal people call it: lies. (1)

beelsebob (529313) | more than 4 years ago | (#31209084)

Except that most modern SSDs actually have a 50,000,000 erase cycle limit, not 100,000. For reference, an X25-M writing continuously, as fast as it could, wouldn't hit this for 140 years.

The other nice thing here is that with hard disks, the risk of disk failure is constant relative to the capacity of the device; with SSDs, the risk halves as the capacity doubles, because the controller can spread its writes out more.

SSDs really are *massively* more reliable than HDDs. They last longer (assuming our estimates of their tolerances are right), their failure modes are good, and their chance of losing lots of data goes down the more data you squeeze in there. The only thing HDDs have in their favour is that we know roughly that they tend to fail in the 4-8 year window, so we should probably replace them at 3 years or so.
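
A quick endurance calculator for the figures being traded in this thread (the cycle counts are the commenters' claims; all inputs are illustrative):

    def endurance_years(capacity_gb, erase_cycles, gb_per_day,
                        write_amplification=1.0):
        """Years until the rated erase cycles are used up,
        assuming perfect wear leveling."""
        total_gb = capacity_gb * erase_cycles / write_amplification
        return total_gb / gb_per_day / 365

    # 160 GB drive, 10,000-cycle MLC, heavy 20 GB/day desktop use:
    print(endurance_years(160, 10_000, 20))           # ~219 years
    # The same drive written flat-out at 80 MB/s around the clock:
    print(endurance_years(160, 10_000, 80 * 86400 / 1024))  # ~0.65 years

Note that doubling the capacity doubles the result, which is the "risk halves as capacity doubles" point above.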

Re:Marketspeak, or as normal people call it: lies. (1)

TheTyrannyOfForcedRe (1186313) | more than 4 years ago | (#31210686)

This is completely backwards. It is hard drives which fail without warning.

I hate to break it to you but SSDs also fail without warning and go completely dead. Visit the PC Enthusiast forums and do some searches. 100% dead SSDs are very common despite their short time in the market. It's scary how many of these things are going belly up after only months of use.

Platter-based disks and SSDs both have big, complex controller boards, and those controllers are subject to failure. The only thing you're missing with SSDs is the risk of mechanical failure. If a platter-based disk is treated properly through its lifetime, there is little risk of mechanical failure until it is quite old.

I was going to grab a few 30GB SSDs when they dropped south of $100, but the failure rate has caused me to put my plans on hold. I'm going with 500GB-per-platter mechanicals for the time being. RAID a couple and you have sequential rates in the ballpark of the affordable SSDs.

Re:Marketspeak, or as normal people call it: lies. (2, Informative)

timeOday (582209) | more than 4 years ago | (#31207080)

The whole issue with SSDs is that their blazing speed gained in this fashion eventually slows down to almost a halt, once the nodes near being full.

I've had one in my laptop for about 8 months and write gigabytes to it every day, particularly suspending VMware images to disk. It still writes at 140 MB/s sustained (to an ext3 filesystem, not just raw write speed). That might be slower than when it was new, I don't remember, but it destroys any laptop harddrive. This drive was expensive though, like $800 IIRC, but it also supports full-disk hardware encryption, which was mandated at my workplace.

Before that I had a first-gen X25-M. It did slow down more, but still completely blew away hard drives. "Slowing down to almost a halt"? No, not even close. Especially for multitasking, which brings HDDs almost to a halt.

As you can see with this newer drive, there is practically no slowdown [pcper.com], and in any case even its slowest results are many times faster than any laptop HDD.

Re:Marketspeak, or as normal people call it: lies. (1)

Weaselmancer (533834) | more than 4 years ago | (#31207120)

I've heard they're working with a Trim function thingy to remedy this, but I haven't really paid attention since.

If you're going to take the time out to post and bitch, at least read up and know what you're bitching about. They've had TRIM for a while now, and Indilinx firmware can collect fragmented nodes during idle time. [engadget.com]

These are solved problems you're bitching about.

What do you do for an encore? Complain about how annoying it is to get across town in a horse and buggy?

Re:Marketspeak, or as normal people call it: lies. (1)

feepness (543479) | more than 4 years ago | (#31206946)

It's just like LCDs getting their amazing thinness by having individual pixels instead of scanning an electron beam across a vacuum tube.

Disgusting!

oh god... (1)

the brown guy (1235418) | more than 4 years ago | (#31206798)

"I was so eager to test it that I pounded on this drive all night "

Possible poor choice of words?

Re:oh god... (4, Funny)

goldaryn (834427) | more than 4 years ago | (#31206824)

"I was so eager to test it that I pounded on this drive all night "

Possible poor choice of words?

"Er, I was testing IOs per second."

Re:oh god... (0)

Anonymous Coward | more than 4 years ago | (#31207958)

"I was so eager to test it that I pounded on this drive all night " Possible poor choice of words?

"Er, I was testing IOs per second."

However you call it, man.

Re:oh god... (1)

Courageous (228506) | more than 4 years ago | (#31209996)

Maybe you were testing table insertions in SQLer plus?

Back int' day (0)

Anonymous Coward | more than 4 years ago | (#31206830)

I remember t' days when you could create a ramdrive ont amiga that'd survive warm resets, that was a persistent as yer needed, by 'eck

Re:Back int' day (1)

noidentity (188756) | more than 4 years ago | (#31207122)

I remember t' days when you could create a ramdrive ont amiga that'd survive warm resets, that was a persistent as yer needed, by 'eck

Up until 2003, I used an old PowerMac as my main machine, running Mac OS 7.6.1. I kept the system folder on a RAM disk, and booted off that. Blazing fast, but had the bad habit of losing all data whenever there was a power failure (nothing a periodic mirror-to-disk couldn't remedy, though). I still use that old PowerMac all the time (running Mac OS 9.2.2 now) and keep a persistent 130 MB RAM disk for storing temporary files. It persists across reboots, though I don't keep the system on it anymore.

Dial M (0)

Anonymous Coward | more than 4 years ago | (#31206852)

Why is besting the Intel X25-M "news"? The M stands for Mainstream. It's not their fastest drive.

Re:Dial M (2, Insightful)

XanC (644172) | more than 4 years ago | (#31206870)

Because we're talking about the home/enthusiast market, which is completely different (including and especially in price point) from the enterprise storage market.

"improve cost efficiency" - press releases on /. (1)

seifried (12921) | more than 4 years ago | (#31206864)

Is that new-speak for "cheaper"? I also love "the drive was able to best the Intel X25-M" - this is one of the worst-written pieces of commercial press release I have ever seen on Slashdot.

Re:"improve cost efficiency" - press releases on / (1)

maxume (22995) | more than 4 years ago | (#31207034)

Calling the article a 'press release' unfairly tarnishes OCZ. Their press release is still full of press release though:

http://www.ocztechnology.com/aboutocz/press/2010/362 [ocztechnology.com]

Re:"improve cost efficiency" - press releases on / (1)

seifried (12921) | more than 4 years ago | (#31207228)

I have no problem with OCZ releasing press releases, they're a company that sells stuff so that's what they do. Slashdot OTOH is supposed to be some sort of quasi-news site (or at least it used to be) with discussion, not a PR mouthpiece.

Re:"improve cost efficiency" - press releases on / (1)

maxume (22995) | more than 4 years ago | (#31207374)

Right, but this isn't PR, PC Perspective thinks they are a news site (and they didn't simply parrot the OCZ press release).

Your CO-mmand of the Englash (0)

Anonymous Coward | more than 4 years ago | (#31208442)

Is pitiful if you don't realise that "besting" someone or something is a correct use of the word. Probably from the chivalric era.

Plus, I just Farted ... tweet tweet

"to lower the cost" (1)

Futurepower(R) (558542) | more than 4 years ago | (#31206866)

"to improve cost efficiency"

should be

"to lower the cost"

Has Fastest Storage Speed = Is Fastest? (0)

Anonymous Coward | more than 4 years ago | (#31206984)

Is "fastest storage speed" another way of saying that "fastest"? I ask because I've got a a fast drive, but I'm not sure whether its speed itself is fast.

All things are a "Limited Edition" (0)

Anonymous Coward | more than 4 years ago | (#31207144)

I always crack up when I see this in advertising. They will make them until they run out of materials to make them (hence they are limited to the silicon that exists on earth).

Re:All things are a "Limited Edition" (1)

peipas (809350) | more than 4 years ago | (#31207350)

I think Seinfeld, rather, had it right. "Limited to what, how many you can sell?"

Re:All things are a "Limited Edition" (2, Insightful)

TheLink (130905) | more than 4 years ago | (#31207690)

This is computer stuff, so "Limited Edition" is more likely to mean: "After a few months when we need something 'new' for marketing reasons, we'll just add the super capacitor, call it the 'Pro' edition, and phase out the 'Limited Edition'".

Re:All things are a "Limited Edition" (1)

peipas (809350) | more than 4 years ago | (#31208438)

Maybe in 1995 [wikipedia.org] .

Re:All things are a "Limited Edition" (1)

TheRaven64 (641858) | more than 4 years ago | (#31210282)

Not exactly. In this kind of market it usually means 'we've got a better technology that we were hoping to get to market by now, but we had some delays. In the meantime, we worked out how to make the older tech a bit faster, so you can buy that while we fix the bugs keeping the new tech from the market. When we've finally got the new tech working, this line will look really overpriced because it's much more expensive than the newer design to produce and doesn't provide any significant benefits, so we'll discontinue it'.

Misleading title (4, Informative)

dnaumov (453672) | more than 4 years ago | (#31207150)

The new OCZ SSDs, while a welcome addition to the market, aren't anywhere near the "fastest storage".
Crucial RealSSD C300: http://www.tweaktown.com/reviews/3118/crucial_realssd_c300_256gb_sata_6gbps_solid_state_disk/index5.html [tweaktown.com]
Fusion-IO: http://storage-news.com/2009/10/29/hothardware-shows-first-benchmarks-for-fusion-io-drive/ [storage-news.com]

Re:Misleading title (5, Informative)

AllynM (600515) | more than 4 years ago | (#31207696)

- We included some early C300 results with the benches. The C300 will read faster (sequentially) under SATA 6Gb/sec, but it is simply not as fast in most other usage.
- Fusion-IO - good luck using that for your OS (not bootable). Fast storage is, for many, useless unless you can boot from it.

Allyn Malventano
Storage Editor, PC Perspective

Re:Misleading title (2, Informative)

Khyber (864651) | more than 4 years ago | (#31208194)

"Fusion-IO - good luck using that for your OS (not bootable)."

Not until Q4, when we release the firmware upgrade to get it working.

Then, your point will be moot.

Re:Misleading title (1)

MarkoNo5 (139955) | more than 4 years ago | (#31209134)

His point is that you _currently_ cannot boot from it, so it is useless for many people _today_. That point cannot become moot unless you find a way to time-travel back to today with your Q4 firmware. It's not like we need to wait just a few more days until you release it.

Re:Misleading title (3, Informative)

AllynM (600515) | more than 4 years ago | (#31209180)

I've got a copy of the FusionIO FAQ from early 2008 that reads as follows:

> Will the ioDrive be a bootable device?
> This feature will not be included until Q3 2008

...Then it was promised for the Duo (and never happened).
...Then it was promised for the ioXtreme, and even it was released without the ability.

Don't get me wrong, I'm a huge fan of FusionIO, but you can only fool a guy so many times before he gives up hope on a repeatedly promised feature.

Allyn Malventano
Storage Editor, PC Perspective

Re:Misleading title (1)

TheRaven64 (641858) | more than 4 years ago | (#31210310)

Seriously? It's going to take you over three years to write the two hundred or so lines of x86 assembly required to let the BIOS see your product as a disk? Why does this not fill me with confidence in your company's technical ability? Possibly the same reason that you are selling a storage product using solid state storage with marketing material telling everyone that it's not an SSD...

Re:Misleading title (1)

BikeHelmet (1437881) | more than 4 years ago | (#31208726)

So use a regular SSD for the OS, and multiple ioDrives for heavy DB work, and whatever else you can throw at it?

Re:Misleading title (1)

raynet (51803) | more than 4 years ago | (#31209984)

I don't think the OS needs any kind of fast media to boot from. Just boot from a USB stick or similar and set the Fusion-IO as the root device. The USB stick will be fast enough to transfer the 20-40MB required to load the kernel.

Re:Misleading title (1)

rolfwind (528248) | more than 4 years ago | (#31207998)

I, for one, will never buy another OCZ product again. I bought a "Solid Series" a little over a year ago when newegg reviews (about a dozen at the time) only had good things to say about them. They were pretty fast in the beginning.

About half a year later, the thing started stuttering for seconds on end, much worse than any non-broken spinning disk I've encountered. It was a little over half full, that's it. Turns out they put in crappy controllers, I guess; not fully sure. Now the company says they're not good at stand-alone performance, suddenly calls it a "value series", and says you should "upgrade" to a premium series for that - but they're still good for arrays and the like.

They certainly didn't assert or say that anywhere at the time of sale; it's just a belated excuse for shipping a crap product to people who paid good money.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820227373 [newegg.com]

They even respond in the reviews to make excuses for this. But of course they don't lift a finger to fix the problem. Stick with Intel or some reputable company.

Re:Misleading title (0)

Anonymous Coward | more than 4 years ago | (#31208162)

I got my Intel X25-M G2 and can't be happier... until I get SATA 6Gbit ports and some drives to max out that bus as well :)

Re:Misleading title (1)

pantherace (165052) | more than 4 years ago | (#31208580)

Welcome to jumping on a new technology. You got burned, as everyone - with the exception of Samsung and Intel drives of the time - used the same JMicron controller. OCZ actually went and designed some cache and paired controllers into their middle offering (I forget the name), and I believe switched to Samsung controllers and single-level cell flash for a time on the high end. (I don't know what their current offerings are.)

Everyone else for the most part kept selling parts that used the same chip as the OCZ Value Series. Most makers, including OCZ, released firmware updates. You likely got pretty much what you paid for at that time in the SSD race.

Hardly any product from that time, even from Intel and Samsung, dealt well with what you've got, which is random writes to fragmented memory. The problem comes from the drive having to erase a block, then rewrite it - likely several times your data's size. It would cause Windows machines especially to freeze for a few seconds. Most Linux testers I saw didn't have quite the same problem of complete lockup, due to differences in how Linux caches to memory and when it writes to disk, but they still had issues with very limited I/O on fragmented devices.

In fact: http://www.tomshardware.com/forum/247280-32-slow-freeze-stuttering-vista-outlook-solved [tomshardware.com]
So while it may suck for you, take a look at where the links point to.

Because they were the only ones that seemed to be seriously trying to solve or work around the problems, if I were to get a new SSD, I'd probably get an OCZ. Of course, jumping on new or old technologies tends to cause problems.

Re:Misleading title (1)

BikeHelmet (1437881) | more than 4 years ago | (#31208736)

Their Solid Series 2 is pretty good. Ridiculously cheap. It's reliably fast for read speeds, at least. But stay away from any of the older SSDs that had those horrible JMicron controllers.

How hard can it be? (4, Interesting)

bertok (226922) | more than 4 years ago | (#31207220)

I'm kinda fed up waiting for the SSD manufacturers to get their act together. There's just no reason for these drives to be only 10-50x faster than mechanical drives. It should be trivial to make them many thousands of times faster.

I suspect that most drives we're seeing are too full of compromises to unlock the real potential of flash storage. Manufacturers are sticking to 'safe' markets and form factors. For example, they all seem to target the 2.5" laptop drive market, so all the SSD controllers I've seen so far are very low power (~1W), which seriously limits their performance. Also, very few drives use PCI-e natively as a bus; most consumer PCI-e SSDs are actually four SATA SSDs attached to a generic SATA RAID card, which is just... sad. It's also telling that it's a factor of two cheaper to just go and buy four SSDs and RAID them using an off-the-shelf RAID controller! (*)

Meanwhile, FusionIO [fusionio.com] makes PCI-e cards that can do 100-200K IOPS at speeds of about 1GB/sec! Sure, they're expensive, but 90% of that is because they're a very small volume product targeted at the 'enterprise' market, which automatically inflates the price by a '0' or two. Take a look at a photo [fusionio.com] of one of their cards. The controller chip has a heat sink, because it's designed for performance, not power efficiency!

This is reminiscent of the early days of the 3D accelerator market. On one side, there was the high-performing 'enterprise' series of products from Silicon Graphics, at an insane price, and at the low end of the market there were companies making half-assed cards that actually decelerated graphics performance [wikipedia.org] . Then NVIDIA happened, and now Silicon Graphics is a has-been because they didn't understand that consumers want performance at a sane price point. Today, we still have SSDs that are slower than mechanical drives at some tasks, which just boggles the mind, and on the other hand we have FusionIO, a company with technically great products that decided to try to target the consumer market by releasing a tiny 80GB drive for a jaw-dropping $1500 [amazon.com] . I mean.. seriously... what?

Back when I was a young kid first entering university, SGI came to do a sales pitch, targeted at people doing engineering or whatever. They were trying to market their "low-end" workstations with special discount "educational" pricing. At the time, I had a first-generation 3Dfx accelerator in one of the first Athlons, which cost me about $1500 total and could run circles around the SGI machine. Nonetheless, I was curious about the old-school SGI machine, so I asked for a price quote. The sales guy mumbled a lot about how it's "totally worth it", and "actually very cost effective". It took me about five minutes to extract a number. The base model, empty, with no RAM, drive, or 3D accelerator was $40K. The SSD market is exactly at the same point. I'm just waiting for a new "NVIDIA" or "ATI" to come along, crush the competition with vastly superior products with no stupid compromises, and steal all the engineers from FusionIO and then buy the company for their IP for a bag of beans a couple of years later.

*) This really is stupid: the 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] drives at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!
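
For the record, the footnote's arithmetic in Python (prices as quoted above; the capacities differ slightly, so per-GB is shown too):

    # Prices as quoted in the comment; 256 GB PCI-e card vs 4 x 60 GB
    # SATA drives in RAID.
    zdrive, zdrive_gb = 2420, 256   # OCZ Z-Drive p84
    vertex, vertex_gb = 308, 60     # OCZ Vertex
    raid4 = 4 * vertex
    print(raid4, raid4 / zdrive)    # 1232, ~0.51 -- "about half"
    print(zdrive / zdrive_gb)       # ~9.45 per GB for the PCI-e card
    print(raid4 / (4 * vertex_gb))  # ~5.13 per GB for the RAID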

Re:How hard can it be? (5, Informative)

Microlith (54737) | more than 4 years ago | (#31207298)

It should be trivial to make them many thousands of times faster.

Not really. You're limited by the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in: directly into new desktops and systems, without consuming a slot that the high-performance people who would buy these are likely shoving an excess of games into. The high end is already using those slots for storage.

Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.

The controller chip has a heat sink, because it's designed for performance, not power efficiency!

No, it's because the thing's running a Xilinx Virtex-5 FPGA. It also costs a ton as it's using 96GB of SLC NAND, and is part of a fairly modular design that is reused in the ioDrive Duo and ioDrive Quad.

Today, we still have SSDs that are slower that mechanical drives at some tasks

If you're referring to the older JMicron drives that failed utterly at 4K random reads/writes, then you're mistaken. That was the case of a shit controller being exposed. Even the Indilinx controllers, which paled next to the Intel chip, outclassed mechanical drives at the same task.

on the other hand we have FusionIO, a company with technically great products that decided to try to target the consumer market by releasing a tiny 80GB drive for a jaw-dropping $1500. I mean.. seriously... what?

If you think that's bad, consider that the Virtex5 they're using on it costs on the order of $500 for the chip itself. You linked the "pro" model, which supports multiple devices in the same system in some fashion. You want this one [amazon.com] , which is only $900. Both models use MLC NAND, and neither are really intended for mass-market buyers (you can't boot from them, after all.)

Re:How hard can it be? (2, Interesting)

bertok (226922) | more than 4 years ago | (#31207578)

Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in. Directly into new desktops and systems without consuming a slot the high performance people who would buy these are likely shoving an excess of games into. The high end is already using those slots for storage.

Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.

From what I gather, the performance limit is actually largely in the controllers; otherwise FusionIO's workstation-class cards wouldn't perform as well as they do, despite using a relatively small number of MLC chips. Similarly, if the limit were caused by the flash, then why is it that Intel's controllers shit all over the competition? The Indilinx controllers got significant speed boosts from a mere firmware upgrade! There's a huge amount of headroom for performance, especially for small random IOs, where the controller makes all the difference (storage layout, algorithms, performance, caching, support for TRIM, etc...).

And there's no need to "rearchitect" at all! PCI/PCI-e is old, storage controllers of all sorts have been made for it for decades. There are RAID or FC controllers out on the market right now that can do almost 1GB/sec with huge IOPS. It's not rocket science, storage controllers are far simpler internally than, say, a 3D accelerator.

I also disagree that people are running out of expansion slots. On the contrary, other than a video card, I haven't had to use an add-in card for anything in the last three machines I've purchased. Motherboards have everything built in now. Server and workstation boards have so many expansion sockets, it's just crazy.

If you think that's bad, consider that the Virtex5 they're using on it costs on the order of $500 for the chip itself. You linked the "pro" model, which supports multiple devices in the same system in some fashion. You want this one [amazon.com] , which is only $900. Both models use MLC NAND, and neither are really intended for mass-market buyers (you can't boot from them, after all.)

Precisely my point! Every vendor is making some stupid compromises somewhere. Using an FPGA is really inefficient, but still better in some ways than what everyone else is doing, which ought to really make you wonder just how immature the market is.

Similarly, look at the price difference between the two FusionIO drives, the "Pro" and the "Non-Pro" model. I bet there's no physical difference, because all of the specs are identical, but there's a 2x price difference! It's probably just a slightly different firmware that allows RAID. This is artificial segmentation. If they had decent competition, the drive would cost 1/4 as much per GB, and all models would allow RAID.

Re:How hard can it be? (1)

petermgreen (876956) | more than 4 years ago | (#31210210)

I also disagree that people are running out of expansion slots. On the contrary, other than a video card, I haven't had to use an add-in card for anything for the last three machines I've purchased.
It used to be that you had a dedicated slot for your graphics card (AGP or PCIe), maybe an AMR or CNR slot that no one actually used, and all the other slots were PCI. High-end server/workstation boards had PCI-X, but even there, in general, you could still put most cards in most slots (unless the card manufacturer was an idiot and made it 5V only, or the motherboard manufacturer was an idiot and put a component in a place that blocks the overhanging connector of a 64-bit card in a 32-bit slot).

Nowadays you have a mixture of PCI and PCIe slots with various different subtypes of PCIe. Add to this the fact that you can't usually overhang PCIe cards like you could with PCI (I've seen the odd board with a non-standard open-back slot), and that most motherboard manufacturers are cheap and fit slots exactly matched to the lane count provided (rather than fitting larger slots to allow a wider range of cards to be used, albeit not at max performance), and your chances of having a free slot that is suitable for a given card are much lower than they used to be.

The fact that many video cards are double width and hence block the slot next to them doesn't exactly help either (though most manufacturers are at least sensible enough to put the narrowest slot in the position that will be blocked).

Re:How hard can it be? (3, Interesting)

AllynM (600515) | more than 4 years ago | (#31208030)

> Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes.

Here's the thing: most SSDs are only using the legacy transfer mode of the flash. The newer versions of ONFi support upwards of 200MB/sec transfer rates *per chip*, and modern controllers are using 4, 8, or even 10 (Intel) channels. Once these controllers start actually kicking the flash interface into high gear, there will be no problem pegging SATA or even PCI-e interfaces.

Allyn Malventano
Storage Editor, PC Perspective
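
Rough aggregate math behind that point, using the per-chip rate and channel counts stated above:

    # 200 MB/s per chip interface x N independent channels, against
    # the ~600 MB/s usable on SATA 6 Gb/s. Figures from the comment.
    onfi_mb_s = 200
    for channels in (4, 8, 10):
        print(channels, channels * onfi_mb_s)  # 800, 1600, 2000 MB/s
    # Even four channels at full ONFi speed outruns SATA 6 Gb/s, so
    # the bus, not the flash, becomes the ceiling.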

Re:How hard can it be? (1)

atamido (1020905) | more than 4 years ago | (#31210912)

When do you see the introduction of bootable PCIe FusionIO-type cards for the consumer?

Re:How hard can it be? (1)

Skapare (16644) | more than 4 years ago | (#31209304)

Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.

How about the PCIe bus? It's already reasonably mature technology and there's a huge installed base. They can build small cards and huge cards.

I'm looking for an SSD for the OS and programs to reside on, mounted read-only almost all the time (only writing when I need to upgrade it). This does not need sheer density, as 16GB will be sufficient (that's GB, not TB). What I want is sheer SPEED. Speed of access and speed of transfer. Single-level cells, not multi-level cells, are all that would be needed. And if the current fab tech still gives more space than I need, then mirror the data across the excess space to cover for block failures. Just make sure the access to the emulated drive works in an open way.

The thing NOT to do is build some drive device designed to talk SATA or SAS or some other I/O interface, slap it onto a PCIe card, and try to squeeze PCIe bus speeds through that interface. This needs to be as native-PCIe as possible. One controller between the PCIe bus and the flash chips is all that is needed. Build it on an x16 PCIe v3.0 and deliver 16 GB/s. I'd be very happy with an x8 at 8GB/s.
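
Those figures check out against the spec line rates (per direction, before protocol overhead); a quick Python sanity check:

    # Per-lane line rates: PCIe 1.0 = 2.5 GT/s and 2.0 = 5 GT/s with
    # 8b/10b encoding; PCIe 3.0 = 8 GT/s with 128b/130b encoding.
    def pcie_gb_s(lanes, gt_s, encoding):
        return lanes * gt_s * encoding / 8  # GB/s, one direction

    print(pcie_gb_s(16, 8.0, 128 / 130))  # ~15.8 -- the "16 GB/s" x16 v3.0
    print(pcie_gb_s(8, 8.0, 128 / 130))   # ~7.9  -- the x8 case
    print(pcie_gb_s(1, 2.5, 8 / 10))      # 0.25  -- a v1.0 x1 slot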

Re:How hard can it be? (1)

adisakp (705706) | more than 4 years ago | (#31207438)

FWIW, the FusionIO product is not a simple drive replacement the way an SSD is. It doesn't boot and requires drivers to operate; plus, the "control logic" is not self-contained but rather part of the driver. It uses your system CPU and system RAM to help handle bookkeeping, rather than just the controller and cache on the drive itself.

Re:How hard can it be? (1)

Khyber (864651) | more than 4 years ago | (#31208196)

"FWIW, the FusionIO product is not a simple drive replacement the way an SSD is. It doesn't boot and requires drivers to operate; plus, the "control logic" is not self-contained but rather part of the driver."

Everything you address is fixed at the end of this year with a firmware upgrade.

Re:How hard can it be? (1)

Courageous (228506) | more than 4 years ago | (#31210020)

Promises, promises. I like FusionIO - I have 8 of the cards - but they have been promising this fix "in a few quarters" since they released the cards, man.

C//

Re:How hard can it be? (1)

atamido (1020905) | more than 4 years ago | (#31210920)

Everything you address is fixed at the end of this year with a firmware upgrade.

Funny, they've been saying that exact thing for the past two years. Fortunately this time we can trust them. You know, because the year ends in a zero.

Re:How hard can it be? (2, Interesting)

hlge (680785) | more than 4 years ago | (#31207694)

If you want to go real fast: http://www.sun.com/storage/disk_systems/sss/f5100/ [sun.com] OK, not something that you would use in a home setting, but it shows that there is still a lot of room for innovation in the SSD space. But to your point, rather than using traditional SSDs, Sun created a "SO-DIMM" with flash that allows for higher packing density as well as better performance. Info on the flash modules: http://www.sun.com/storage/disk_systems/sss/flash_modules/index.xml [sun.com]

Re:How hard can it be? (2, Interesting)

m.dillon (147925) | more than 4 years ago | (#31207754)

Yah. And that's the one overriding advantage of SSDs in the SATA form factor: they have lots and lots of competition. The custom solutions - the PCI-e cards and the flash-on-board or daughter-board systems - wind up being relegated to the extreme application space, which means they are sold for tons of money, because they can't do any volume production and have to compete against the cheaper SATA-based SSDs on the low end. These bits of hardware are thus solely targeted at the high-end solution space, where a few microseconds actually matters.

Now with 6Gbit (600 MByte/sec) SATA coming out, I fully expect SATA-based SSDs to start pushing 400MB+/sec per drive within the next 12 months. If Intel can push 200MB/sec+ (reading) in their low-end 40G MLC SSD, then we clearly already have the technological capability to push more with 6Gbit SATA, without having to resort to expensive, custom PCI-e jobs.

-Matt

Re:How hard can it be? (2, Interesting)

bertok (226922) | more than 4 years ago | (#31207988)

You are basically saying contradictory things:

"lots and lots of competition" is the opposite of an "overriding advantage". It's a huge disadvantage. No company wants to enter a market with massive competition.

The PCI-e cards aren't any more "custom" than the SATA drives. Is a 3D accelerator a "custom" PCI-e card? What about a PCI-e network card? Right now, a SATA SSD and a PCI-e SSD are actually more or less the same electronics, except that the PCI-e card also has a SATA controller built in.

There's zero need to squeeze a solid-state storage device into the form factor that was designed for mechanical drives with moving parts. Hard drives are the shape and size they are because it's a good size for a casing containing a couple of spinning platters. They are connected with long, flexible, but relatively low-bandwidth cables because mechanical drives are so glacially slow that the cabling was never the performance limit, and having flexible cabling is an advantage for case design, so in that case, it was worth it.

Meanwhile, SSDs have hit the SATA 3 Gbps bus speed limit in about two generations, and will probably saturate SATA 6 Gbps in just one more generation. There are drives already available that can exceed 2x the speed of SATA 6, which means that we'll have to wait years for some SATA 12 Gbps standard or something to get any further speed improvement.

Meanwhile, there's already several 20-80 Gbps PCI-e ports on every motherboard, which is cheap and easy for manufacturers to interface with. If flexible cabling is an absolute requirement, then there is PCI-e cabling [wikipedia.org] .

Re:How hard can it be? (2, Insightful)

m.dillon (147925) | more than 4 years ago | (#31208294)

I think you're missing the point. The SATA form factor is going to have much higher demand than any PCI-e card, period, for the simple fact that PCI-e is not really expandable while SATA is. SATA has a massive amount of infrastructure and momentum behind it, for deployments running the gamut from small to large. That means SATA-based SSD drives are going to be in very high volume production relative to PCI-e cards. It DOES NOT MATTER if the PCI-e card is actually cheaper to produce; it will still be priced at a premium versus the SATA form factor due to the lack of volume, and PCI-e will never achieve the same volume due to its lack of flexibility.

The fact that the form factor has volume demand means that many manufacturers can get a piece of a large pie by selling devices in that form factor. A larger piece than they could get selling PCI-e cards.

In addition, the competition in the space creates innovation. This is why we are seeing such a fast ramp-up in SSD performance and features. The SATA form is driving the ramp-up.

Yes, SSDs are hitting the 3Gbit SATA II phy limit. And your point is what? 99.9% of the installations out there don't actually need more bandwidth, so hitting the limit is not going to magically create more demand for PCI-e and other non-SATA solutions. The SATA phy standards will progress along with everything else. We'll have 6Gbit/s soon enough, and the delay is not going to have any real effect on SATA being the dominant form factor standard for the technology. The single port limit isn't even that big of a hurdle today since most motherboards have several SATA/E-SATA ports.

PCI-e based solutions will track the same lines as all other bus-card solutions have tracked: Low volume, premium pricing, highly-specialized, and non-standard drivers. If you are hoping to see SATA based SSDs disappear in favor of a PCI-e card you are in for one hell of a disappointment.

-Matt

Re:How hard can it be? (2, Insightful)

bertok (226922) | more than 4 years ago | (#31208632)

I think you're missing the point. The SATA form factor is going to have much higher demand than any PCI-e card, period, for the simple fact that PCI-e is not really expandable while SATA is.

I think you're missing *my* point. The PCI-e standard is for expansion slots. You know, for... expansion. There already are 1TB SSD PCI-E cards, and you can plug at least 4 into most motherboards, and 6-8 into most dual-socket server or workstation boards. Just how much expandability do you *need*?

Keep in mind that 99% of the point of SSD is the speed. It finally removes that hideous mechanical component that's been holding back computing performance for over a decade now. Nothing stops you from having a couple of 2TB spinning disk drives in there for holding your movies and photos and all that junk that doesn't need 100K IOPS.

The jump from the 100 IOPS of mechanical drives to the 5K IOPS of a typical SSD is huge. The improvement from 5K to 100K is just as noticeable, especially for people doing real work on their machines. I've heard from owners of both the Intel and Indilinx controller-based drives that the Intel is noticeably "snappier", even though the performance difference there is at most 2x.
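
Where the round "100 IOPS" figure comes from, roughly - a mechanical drive is bounded by seek plus rotational latency per random access (figures below are typical, not measured):

    # 7200 rpm disk: average seek plus half a revolution per access.
    seek_ms = 8.5                      # typical desktop drive
    half_rev_ms = 60_000 / 7200 / 2    # ~4.2 ms
    print(round(1000 / (seek_ms + half_rev_ms)))  # ~79 -- call it 100
    # SSDs have no mechanical latency; the 5K-100K IOPS spread comes
    # down to controller design, channel count, and queue depth.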

The fact that the form factor has volume demand means that many manufacturers can get a piece of a large pie by selling devices in that form factor. A larger piece than they could get selling PCI-e cards.

How do you know? The form factor is not the only consideration; performance counts also. If a PCI-e SSD at the same price as an equal-capacity SATA drive provided literally 100 times the performance, would people ignore it because.. wait... it's a funny shape for a drive? Seriously? Do you expect to buy your 3D accelerators in little brick-shaped metal boxes? No? Why not? Maybe it's because the performance is more important!

Speaking of 'expandability', people often buy multiple PCI-e 3D accelerators. Two is common [wikipedia.org] , and some people go as high as 3 or 4 in a single system. Nobody talks about the "limited" market of PCI-e 3D cards because they are "insufficiently expandable".

PCI-e based solutions will track the same lines as all other bus-card solutions have tracked: Low volume, premium pricing, highly-specialized, and non-standard drivers. If you are hoping to see SATA based SSDs disappear in favor of a PCI-e card you are in for one hell of a disappointment.

Err.. what? Most PCI-e SSDs look like a generic SATA host bus adapter to the OS, or use some generic SCSI HBA interface. The SATA speed limit is in the cable, not the drivers or the protocol.

I'm saying that the reason the volumes are low is because the pricing is insane. There's no need to price PCI-e devices higher than the SATA form factor. It's the same electronics. There is basically no difference, except that the PCI-e devices can be much, much faster.

I'm betting that you'll be the one shocked to find that in 5-10 years, most entry-level motherboards, especially those designed for corporate desktops, will have something like a 64GB flash drive built right into them. Heck, we're half-way there already [wikipedia.org] , just give it time.

Re:How hard can it be? (1)

Rockoon (1252108) | more than 4 years ago | (#31208846)

I have to concur. Back when I got my first hard drive, it was a whopping 40 megabytes and came as an ISA expansion card. It was cheaper than buying both a HD and a controller separately. They were called "Hard Cards" at the time, and they weren't just some novelty high-end equipment. They were priced for consumers.

I believe that there will be a true Hard Card revival because of the facts of this current market.

SATA 3.0 adoption will be slow (motherboards with 6Gb SATA are noticeably more expensive), and even if it were adopted overnight, it just doesn't carry enough bandwidth. SATA itself is holding back this market. There is a reason that SSD speeds leveled off between 200MB and 300MB per second, and it's the fact that SATA 2.0 was cock-blocking them at 3Gb. Now they are cock-blocked at 6Gb, so expect them to level off between 400MB and 600MB per second.
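
Those plateaus line up with the link math - SATA uses 8b/10b encoding, so usable bandwidth is 80% of the raw line rate:

    # Raw SATA line rate -> usable payload bandwidth (8b/10b coding).
    for raw_gbit in (1.5, 3.0, 6.0):
        mb_s = raw_gbit * 1000 * 8 / 10 / 8   # Gbit/s -> MB/s
        print(raw_gbit, mb_s)                 # 150, 300, 600 MB/s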

We need to ship new controllers, just like we needed to ship controllers in the late '80s and early '90s.

This isn't to say that the current hard-card SSD offerings aren't made of expensive parts. They are taking regular SATA drives and RAIDing them, and good high-bandwidth RAID controllers are (currently) expensive. A single Intel or OCZ Vertex SSD can max out the throughput on many of the low-end RAID controllers (check out the difficulty these guys had [nextlevelhardware.com] finding a RAID controller that can do 1GB/sec).

Re:How hard can it be? (1)

TheRaven64 (641858) | more than 4 years ago | (#31210394)

I believe that there will be a true Hard Card revival because of the facts of this current market.

This current market? Laptops are now, what, 60% of total PC sales? They passed the 50% mark a year or so back, but I haven't been paying attention much since then. Laptops don't have multiple internal PCIe slots. There is some advantage in custom form factors for fitting inside a laptop, but the 1.8" and 1" hard disk form factors are a logical place to go. Maybe a PCIe bus rather than SATA sounds sensible, but it needs more wires (and more motherboard traces), which makes things much more expensive, and the extra speed is only relevant once people start seeing 6Gb/s as a bottleneck (which, since you're moving them from mechanical disks that are under 300Mb/s, is not going to be for a while).

If you're looking at the handheld market, which is a rapidly growing segment, then things are even simpler. They just use the flash chips directly, with a controller on the SoC, often in a PoP configuration, so they just clip the flash on top of the SoC and don't need any motherboard traces.

Re:How hard can it be? (1)

Courageous (228506) | more than 4 years ago | (#31210050)

If a PCI-e SSD at the same price as an equal capacity SATA drive provided literally 100 times the performance, would people ignore it because.. wait... it's a funny shape for a drive?

No, of course not. But it cannot happen, because you have to recoup your driver creation and maintenance costs for multiple operating systems.

C//

Re:How hard can it be? (1)

fast turtle (1118037) | more than 4 years ago | (#31210608)

I think you're missing *my* point. The PCI-e standard is for expansion slots. You know, for... expansion. There already are 1TB SSD PCI-E cards, and you can plug at least 4 into most motherboards, and 6-8 into most dual-socket server or workstation boards. Just how much expandability do you *need*?

I think you've missed the boat entirely. I'm a small business owner, and as a business owner, I buy the cheapest computers that allow my employees to get their work done. This means they're MATX form factor and, as you stated earlier, everything is on the board (video, sound, networking); they're lucky to have even a PCIe x16 slot for a video card upgrade. So where are all the business desktops with four or more PCIe slots? I've never seen one on an MATX business-class board, but I have seen plenty of boards with 5 SATA ports.

Even if the PCIe-based card offered me 100x the throughput of a SATA drive, if it cost 2x and was harder to get, I still wouldn't buy it, simply because of support issues. Sure, standard drives can and do fail w/o warning, but you know what? I can walk into almost any store and buy a replacement right now for that paperweight, unlike having to go to the single store that even offers the PCIe-based drives and that probably doesn't have them in stock when I need the damn thing. Sorry, but from the business standpoint, that's an EPIC FAIL.

Why is it that every damn gearhead/gamer insists that performance is the one and only metric to judge a car/computer by? I prefer handling and economy myself, along with some luxury like A/C and a stereo. My business, in turn, judges any computer purchase by how much money it saves in the long run. That means my upgrade cycles for soft/hard-ware are extended, simply due to the "if it ain't broke, don't fix it" methodology that many companies subscribe to.

Re:How hard can it be? (1)

amorsen (7485) | more than 4 years ago | (#31210756)

Keep in mind that 99% of the point of SSD is the speed. It finally removes that hideous mechanical component that's been holding back computing performance for over a decade now. Nothing stops you from having a couple of 2TB spinning disk drives in there for holding your movies and photos and all that junk that doesn't need 100K IOPS.

You may have missed it, but the desktop is dead. The major markets for SSDs are notebooks and servers. Modern servers are 1U or blade and have ~0 available PCI-e slots. Notebooks don't have any PCI-e slots either, and manufacturers can't yet make models without support for regular hard drives.

Re:How hard can it be? (1)

Courageous (228506) | more than 4 years ago | (#31210034)

The PCI-e cards aren't any more "custom" than the SATA drives.

You don't have to write driver software for every platform you want to support if you pick SATA. So yes, in that sense SATA is less "custom" than the PCIe interface, because the PCIe approach requires far more customization work.

C//

Re:How hard can it be? (1)

petermgreen (876956) | more than 4 years ago | (#31210622)

Meanwhile, there's already several 20-80 Gbps PCI-e ports on every motherboard
ROFLMAO

Pretty much every board has one x16 slot (though in some cases it may be x8 or even x4 electrical). But given that most desktop users buying SSDs will probably be using that slot for graphics, and that some boards don't like anything but a graphics card in it, it can't really be considered a general-purpose slot.

The remaining PCIe slots on most boards (if there are any; there are still machines being made where the only PCIe slot is the graphics one) will most likely be 1.0, since afaict southbridges don't generally support 2.0 yet, and will be either all x1, or one x4 (which may be x4 or x16 mechanical) plus a few x1 (which will almost always be x1 mechanical). That limits you to 2.5Gbps for x1 and 10Gbps for x4. Worse, since most x1 electrical slots are also x1 mechanical, you'd need to either limit yourself to x1 or produce two different versions of your card. And if you also want to take advantage of the extra x16 slots on higher-end boards, you'll need even more variants.
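For a rough sanity check on those link rates, here's a quick sketch (not from the thread itself; the signaling rates are the published PCIe 1.0/2.0 figures, the 80% factor is 8b/10b encoding overhead, and protocol overhead on top of that is ignored):

```python
# Usable PCIe bandwidth for the slot configurations discussed above.
# PCIe 1.0 signals at 2.5 GT/s per lane, PCIe 2.0 at 5 GT/s per lane;
# both lose 20% to 8b/10b encoding.

def usable_gbps(gt_per_s: float, lanes: int) -> float:
    """Approximate usable data rate in Gbps for a PCIe link."""
    return gt_per_s * lanes * 0.8

for gen, rate in (("1.0", 2.5), ("2.0", 5.0)):
    for lanes in (1, 4, 16):
        print(f"PCIe {gen} x{lanes}: {usable_gbps(rate, lanes):4.1f} Gbps")

# PCIe 1.0 x1 comes out to ~2 Gbps usable, i.e. below even SATA 3Gb/s,
# which is why an x1-only SSD card would be a hard sell.
```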

Re:How hard can it be? (1)

Kjella (173770) | more than 4 years ago | (#31207772)

"We Lose Money On Each Unit, But Make It Up Through Volume"

Take a look at memory sticks and memory cards: they're just about the dumbest chips possible wrapped in a few cents of plastic. Multiply that up to a desired SSD size and the parts alone already come out to quite a bit, before you even start trying to build an SSD out of them. Now, I haven't looked at FusionIO's products in a while, but their early products at least were basically banks of RAM with a battery-powered backup. Neat, but it didn't really help unless you could afford to buy tons and tons of RAM.

Just a quick check from a price guide here in Norway:
Memory sticks: 32GB/489 NOK = 15.28 NOK/GB
Memory card: 32GB/529 NOK = 16.53 NOK/GB
Kingston SSDNow V-Series 40GB: 40GB/693 NOK = 17.33 NOK/GB

I didn't include every SSD size, but they scale pretty much linearly; a 256GB SSD still works out to about 17 NOK/GB for the cheapest models. So they all land in the same range. You pay a premium on top of that for faster/more intelligent stuff from Intel/OCZ etc., but the floor doesn't drop until the flash itself gets cheaper. And since memory sticks and memory cards are already sold at volume pricing, SSDs effectively are too, despite shipping in low volume, because they use much the same chips.
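Just to make the per-GB arithmetic explicit, here's a trivial sketch reproducing the NOK figures quoted above:

```python
# Price-per-GB from the Norwegian price-guide numbers above.
prices = {
    "Memory stick 32GB":             (489, 32),
    "Memory card 32GB":              (529, 32),
    "Kingston SSDNow V-Series 40GB": (693, 40),
}

for name, (nok, gb) in prices.items():
    print(f"{name}: {nok / gb:.2f} NOK/GB")
# -> 15.28, 16.53, and 17.33 NOK/GB respectively
```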

Re:How hard can it be? (1)

tlhIngan (30335) | more than 4 years ago | (#31208074)

I suspect that most drives we're seeing are too full of compromises to unlock the real potential of flash storage. Manufacturers are sticking to 'safe' markets and form factors. For example, they all seem to target the 2.5" laptop drive market, so all the SSD controllers I've seen so far are all very low power (~1W), which seriously limits their performance. Also, very few drives use PCI-e natively as a bus, most consumer PCI-e SSDs are actually four SATA SSDs attached to a generic SATA RAID card, which is just... sad. It's also telling that it's a factor of two cheaper to just go and buy four SSDs and RAID them using an off-the-shelf RAID controller! (*)

You can get SSDs in standard drive form factors, as well as PCIe.

Hell, Asus had a laptop with 2 PCIe SSDs, and Acer had one too. The Eee 700, 900, and 901 used a mini PCIe SSD. The SSD versions of the Acer Aspire One used a mini PCIe SSD as well.

These aren't PCIe controllers attached to a SATA SSD, either, but native SSDs that behave as a mass storage controller. They present themselves as if they were SATA disks, which keeps BIOS boot simple and means they need no special drivers to work.

FusionIO devices are great, but I don't know why they don't emulate a standard mass storage controller so you can boot from them: either standard SATA emulation, or a BIOS option ROM to allow booting.

Re:How hard can it be? (1)

blackraven14250 (902843) | more than 4 years ago | (#31208174)

*) This really is stupid: 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!

Let me just point out that I bought 2 SSDs and used my onboard RAID, only to find that the onboard controller sits behind a single PCIe lane by design. I was limited to the speed of a single one of my SSDs instead of two, realizing no performance gain from RAID 0.
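The back-of-the-envelope version of that failure mode, as a sketch (the drive speed and lane figures below are illustrative assumptions, not measurements of that particular board):

```python
# RAID 0 can't outrun the link its controller hangs off.
def raid0_mb_s(per_drive_mb_s: float, n_drives: int, link_mb_s: float) -> float:
    """Effective sequential throughput: sum of the drives, capped by the link."""
    return min(per_drive_mb_s * n_drives, link_mb_s)

PCIE1_X1_PAYLOAD = 250  # MB/s, roughly, for a PCIe 1.0 x1 link
print(raid0_mb_s(200, 2, PCIE1_X1_PAYLOAD))  # 250: barely one drive's worth
```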

Huh? (0)

Anonymous Coward | more than 4 years ago | (#31207334)

What a useless article... you won't even be able to buy this drive after a month or two. What is this, an advertisement?

Removed super-cap (0)

Anonymous Coward | more than 4 years ago | (#31207378)

... You know the cap is there so that if the power goes out while you're saving something, you don't corrupt the file, right?

OCZ? So... (0)

Anonymous Coward | more than 4 years ago | (#31207672)

...does that mean you'll need to replace it 5 times before you get one that works?

Not really impressed with OCZ (3, Interesting)

m.dillon (147925) | more than 4 years ago | (#31207708)

At least not with the Colossus I bought. Write speeds are great, but read speeds suck compared to the Intels. The Colossus doesn't even have NCQ for some reason! There's just one tag, and the Intels beat the hell out of it on reads because of that. Sure, the 40G Intel's write speed isn't too hot, but once you get to 80G and beyond it's just fine.

The problem with write speeds for MLC flash-based drives is, well, it's a bit oxymoronic: with the limited write durability, you don't want to be writing at high sustained bandwidths anyway. The SLC stuff is better suited to that, though of course you're talking at least 2x the price per gigabyte for SLC.

--

We've just started using SSDs in DragonFly-land to cache filesystem data and meta-data, and to back tmpfs. It's interesting how much of an effect the SSD has. It only takes about 6GB of SSD storage per 14 million or so inodes to cache essentially ALL the meta-data in a filesystem, so even on 32-bit kernels with their 32-64G swap configuration limit, the SSD effectively removes all overhead from find, ls, rdist, cvsup, git, and other directory traversals (64-bit kernels can do 512G-1TB or so of SSD swap). So it's in the bag for meta-data caching.
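That 6GB-per-14M-inodes figure works out to roughly 460 bytes of SSD per inode. A quick sizing sketch derived from those numbers (the 100M-inode example and the assumption of linear scaling are illustrative, not from the original post):

```python
# Meta-data cache sizing from the figures above: ~6GB per ~14M inodes.
BYTES_PER_INODE = 6 * 2**30 / 14_000_000   # ~460 bytes of SSD per inode

def metadata_cache_gb(inodes: int) -> float:
    """Approximate SSD space needed to cache essentially all meta-data."""
    return inodes * BYTES_PER_INODE / 2**30

print(f"{metadata_cache_gb(100_000_000):.1f} GB for 100M inodes")  # ~42.9 GB
```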

Data-caching is a bit more difficult to quantify, but certainly any data set that actually fits in the SSD can push your web server to 100MB/s out the network with a single SSD (a single 40G Intel SSD can do 170-200MB/sec reading, after all), so a GigE interface can basically be saturated. For serving data out a network, the SSD data-cache is almost like an extension of memory, and it lets you use considerably cheaper hardware: no need for lots of spindles or big motherboards sporting 16-64G of RAM. The difficulty, of course, is when the active data set doesn't fit into the SSD.
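The GigE claim checks out on paper; a one-liner sketch (the ~94% payload efficiency for full-size Ethernet frames is an assumption):

```python
# Can a single SSD saturate gigabit Ethernet?
gige_payload_mb_s = 1000 / 8 * 0.94  # ~117 MB/s after framing overhead
ssd_read_mb_s = 170                  # low end of the Intel figure above
print(ssd_read_mb_s >= gige_payload_mb_s)  # True: the NIC is the bottleneck
```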

Even using it as general swap space for a workstation has visible benefits when it comes to juggling applications and medium-sized data sets (e.g. videos or lots of pictures in RAW format), not to mention program text and data that would normally be thrown away overnight or pushed out by other large programs.

Another interesting outcome of using the SSD as a cache instead of loading an actual filesystem on it is that it seems to be fairly unstressed when it comes to fragmentation. The kernel pages data out in 64K-256K chunks and multiple chunks are often linear, so the SSD doesn't have to do much write combining at all.

In most of these use-cases read bandwidth is the overriding factor. Write bandwidth is not.

-Matt

Re:Not really impressed with OCZ (2, Interesting)

AllynM (600515) | more than 4 years ago | (#31207984)

Matt,

Totally with you on the Colossus not being great at random I/O; that's why we reviewed one:
http://www.pcper.com/article.php?aid=821&type=expert&pid=7 [pcper.com]
The cause is mainly that RAID chip. It doesn't pass NCQ, TRIM, or any other ATA commands through to the drives, so they have no choice but to serve each request in a purely sequential fashion. The end result is that even with 4 controllers on board, the random access of a Colossus looks more like that of just a single Vertex SSD.

Allyn Malventano
Storage Editor, PC Perspective
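The queue-depth effect Allyn describes can be framed with Little's law: without tagged queuing the host keeps only one request in flight, so random IOPS collapse to roughly 1/latency. A toy model, with purely illustrative numbers:

```python
# Little's law: requests in flight = IOPS x latency, so achievable IOPS
# is capped both by the device ceiling and by the allowed queue depth.

def achievable_iops(device_iops: float, latency_s: float, queue_depth: int) -> float:
    return min(device_iops, queue_depth / latency_s)

LATENCY = 100e-6        # 100 us random-read latency (illustrative)
DEVICE_IOPS = 50_000.0  # what the flash array could do in parallel (illustrative)

print(achievable_iops(DEVICE_IOPS, LATENCY, 1))   # 10000.0: one tag, no NCQ
print(achievable_iops(DEVICE_IOPS, LATENCY, 32))  # 50000.0: device-limited
```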

Don't we want raw access + NILFS? (0)

Anonymous Coward | more than 4 years ago | (#31208098)

I don't see why manufacturers would want to spend any amount of money on someone else's controller when they could just give us raw access to the device and let us use a filesystem such as NILFS on it.

Do you?

Re:Don't we want raw access + NILFS? (1)

petermgreen (876956) | more than 4 years ago | (#31210864)

I see a few reasons:

1: Most people will be running Windows, which pretty much means NTFS whether you like it or not (you could use FAT32, but that isn't exactly going to be any better). Even on Linux, I'd imagine the number of clueless newbies who would put a standard filesystem on the device and quickly ruin it would be pretty high. That means high RMA expenses and pissed-off users.

2: Putting the wear-leveling control on the drive puts the drive manufacturer in control of it. That means they can tweak it to match the particular memory array they have, implement new developments without waiting for the operating system to catch up, and so on.

3: Exposing the memory array directly would require a different interface paradigm than current drives use, to expose things like the difference between write pages and erase blocks. Without a major change in the interface specs, the filesystem would have no way to learn the true characteristics of the array.
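Point 3 is the crux: flash writes happen per page, but erases happen per much larger block, and that mismatch is exactly what the controller (or a raw-flash filesystem like NILFS or YAFFS) has to paper over. A minimal sketch, with typical-for-the-era sizes as assumptions:

```python
# The write-page vs erase-block mismatch a flash controller has to hide.
PAGE = 4 * 1024           # write granularity (typical assumption)
ERASE_BLOCK = 512 * 1024  # erase granularity (typical assumption)

pages_per_block = ERASE_BLOCK // PAGE
# Rewriting a single 4K page in place on raw NAND means the whole erase
# block containing it must be erased first, so a naive in-place update
# drags along up to this many neighbouring pages:
print(f"1 page rewrite can touch {pages_per_block} pages")  # 128

# Log-structured filesystems and FTLs dodge this by always writing to
# fresh pages and reclaiming whole erase blocks later.
```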

Don't we want raw access + NILFS? (1)

phess (1116955) | more than 4 years ago | (#31208118)

I don't see why manufacturers prefer spending large amounts of time and money on producing smart controllers when they could just give us raw access to the device and let us use something like NILFS on top of it. Do you?

Re:Don't we want raw access + NILFS? (3, Funny)

Rectal Prolapse (32159) | more than 4 years ago | (#31208690)

I think I would prefer MILFS on top, don't you?

Re:Don't we want raw access + NILFS? (0)

Anonymous Coward | more than 4 years ago | (#31208932)

I would prefer raw access to MILFS

Re:Don't we want raw access + NILFS? (1)

BiggerIsBetter (682164) | more than 4 years ago | (#31209344)

Personally, I'd be using something like YAFFS (Yet Another Flash File System) rather than NILFS.

Anyone else... (1)

cyberjock1980 (1131059) | more than 4 years ago | (#31208234)

Anyone else agree that SSD speeds are already plenty fast for the tasks we give them? When I shop for SSDs I look for a reputable company whose drives don't stutter like crazy on reads and writes, at the lowest price. I've owned Intel X25-Ms as well as other brands and I can't tell the difference in performance. Of course, the benchmarks paint a different picture.

But who is REALLY gonna notice a 0.03ms difference in "seek time" between one SSD and another, or 150MB/sec versus 220MB/sec sequential? SSDs these days are so fast I don't see a reason to "upgrade" to a faster SSD if I already have one.

What do I want to see improved in SSDs? Reliability and price. This "Limited Edition" seems like a waste, and I'd bet that less than 1% of users here at Slashdot would truly notice the difference. I'd bet most of us couldn't tell them apart in a blind test.

I'm sure this will hurt my karma, but I can't believe that I'm alone in thinking this.

Re:Anyone else... (1)

Skapare (16644) | more than 4 years ago | (#31209330)

No. I want speed. I want to be able to suck 16GB of the OS out of the SSD and into RAM in 0.03ms. So there :-)

Re:Anyone else... (1)

Courageous (228506) | more than 4 years ago | (#31210080)

By and large, for ordinary user-space apps and workloads, you are certainly right. But even some home users do intense things, such as video encoding or 3D rendering, and because of the high-intensity I/O that can come with those, they will certainly benefit from a faster disk. And if one hasn't already upgraded to an SSD, I'll say this: one is missing out on the best upgrade you can make to the daily experience of your computer, barring a really nice monitor.

C//

Limited Edition = artificial scarcity (1)

Hurricane78 (562437) | more than 4 years ago | (#31209468)

It should be illegal to label products like this. The only thing limited is the mental capacity of those who buy it because of this label. ;)

Re:Limited Edition = artificial scarcity (1)

Skapare (16644) | more than 4 years ago | (#31210136)

I specifically avoid products with such a label because I know it means I can't replace the thing if it fails. One exception is Mountain Dew's limited edition with real sugar (but that's not something that fails).
