
Micron Demos SSD With 1GB/sec Throughput

timothy posted more than 5 years ago | from the macht-schell-macht-schnell dept.

Data Storage 120

Lucas123 writes "Micron demonstrated the culmination of numerous technology announcements this year with a solid state disk drive that is capable of 1GB/sec throughput through a PCIe slot. The SSD is based on Micron's 34nm technology and interleaves 64 NAND flash chips in parallel. While the technology, which is expected to ship over the next year, is currently aimed at high-end applications, a Micron executive said it's entirely possible that Micron's laptop and desktop SSDs could reach similar performance in the near future by bypassing SATA interfaces."


120 comments

Yes that's nice. (4, Funny)

electrosoccertux (874415) | more than 5 years ago | (#25910901)

This reminds me of all the demos of holographic disc technology. It'll be on the market in just 1 year! But it never is, and it's never affordable for us /. browsing types.

Re:Yes that's nice. (4, Informative)

Joce640k (829181) | more than 5 years ago | (#25910977)

Yeah, but ... Intel is shipping SSDs with 220Mb/s read/write:

http://hardware.slashdot.org/article.pl?sid=08/11/25/015209 [slashdot.org]

What's so fantastic about 1Gb/s? It's only four times faster...a RAID with four Intel devices will do it so just put four of them in a box with a RAID controller and Bob's your uncle...

Re:Yes that's nice. (1, Interesting)

amazeofdeath (1102843) | more than 5 years ago | (#25911001)

RAID does not actually work that way. Yes, you can get increased speeds with certain RAID configurations, but this is a whole different beast.

Re:Yes that's nice. (2, Informative)

mikkelm (1000451) | more than 5 years ago | (#25911087)

Actually, in RAID 5, five 250MB/s drives will roughly offer you the same performance as a 1Gbps drive for most sequences of IO operations. SSDs feature almost linear scaling due to the extremely low seek times.

Re:Yes that's nice. (2, Funny)

mikkelm (1000451) | more than 5 years ago | (#25911121)

Err, watching Thanksgiving football and posting on slashdot is not a good idea. s/1Gbps/1GB\/s

Re:Yes that's nice. (1)

Smauler (915644) | more than 5 years ago | (#25911281)

That depends on your RAID system. I used to love the idea of RAID 5, until I actually looked at the benchmarks. Unless you have a proper dedicated hardware controller calculating the parity (which really costs), writing to RAID 5 is dog slow. Like, a lot slower than writing to a single disk in most cases. Reading is quicker, but far from wonderful on consumer-level hardware.

In my opinion, if you want a consumer-level RAID solution that will actually offer increased performance, RAID 1+0 is a good option. Personally, for my home system I just stripe 2 drives, back up my important stuff, and cross my fingers.

Re:Yes that's nice. (2, Informative)

cheater512 (783349) | more than 5 years ago | (#25911719)

Erm, my home server with four disks in RAID 5 (software RAID) performs wonderfully.

I've never seen the RAID take more than 2% CPU and write speeds are far faster than a single drive.

Re:Yes that's nice. (1)

Eivind (15695) | more than 5 years ago | (#25914151)

Nonsense, writing to raid-5 should not be "dog slow".

In the absolute worst case (a silly RAID setup with one dedicated parity disk rather than distributed parity), writing only a single block, with the parity disk and the disk holding that block on the same channel so the writes have to be sequential, you'd end up with half the speed of a single disk.

In real life a 4+1 RAID-5 setup is about 3 times as fast as a single disk, sometimes more. It depends on write patterns of course, but quite often you write many blocks close to each other (i.e. a non-fragmented file significantly larger than a single block), so what happens is you need 5 writes to write those 4 blocks, but those 5 writes can happen in parallel since they go to different disks.

Calculating the parity should be down in the noise. Your CPU is much, MUCH faster than your disks, and it's not as if XOR is a fancy operation or anything. I've never been able to load the CPU more than 1-2% by fully loading the I/O capacity of a RAID-5 array.
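To make the "XOR isn't a fancy operation" point concrete, here is a minimal sketch of the RAID-5 parity arithmetic in plain Python (purely illustrative; real implementations do this over large blocks in optimized code):

from functools import reduce

def parity(blocks):
    # RAID-5 parity is just the byte-wise XOR of the data blocks in a stripe
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving, parity_block):
    # A lost block is recovered by XOR-ing the surviving blocks with the parity
    return parity(surviving + [parity_block])

# 4 data disks + 1 parity disk, one tiny "block" per disk for illustration
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = parity(data)
assert rebuild(data[1:], p) == data[0]  # disk 0 lost, contents recovered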

Re:Yes that's nice. (1)

kasperd (592156) | more than 5 years ago | (#25911765)

Actually with SSD the performance should be even better. Sequential reads on RAID-5 with n+1 hard disks will give you n times the speed of a single disk. The reason you don't get a factor of n+1 is that you have to skip the parity blocks on all the disks, and for skipping so few sectors the cost of a seek is about the same as just reading them. But with SSD you don't have the cost of seeks, so you should be able to read just the sectors you need and get n+1 times the speed of a single disk.

Writes to RAID-5 are slow for a completely different reason, and using SSD does not eliminate that cost. Random access writes require two physical writes for each logical write, and you also have to do two reads, unless the data is already in cache. So a five disk RAID-5 will not be significantly faster than writing to a single disk. Using SSD of course speeds up random access, no matter if you use RAID or not. Sequential writes to RAID-5 can be quite fast though since you don't have to write parity multiple times, and if there is sufficient data in the write queue, you don't have to read from disks either, since the data is going to be overwritten anyway.

A RAID system that actually takes into account that the underlying storage is SSD could do things in a more efficient and/or more reliable way.
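To put a number on the write penalty described above, here is a rough operation count for RAID-5 writes (an illustrative sketch only; it ignores caching and controller optimizations):

def raid5_io_ops(blocks_written, data_disks):
    # Small write (less than a full stripe): read old data + old parity,
    # then write new data + new parity -> 4 physical I/Os per logical block.
    if blocks_written < data_disks:
        return 4 * blocks_written
    # Full-stripe write: parity is computed from the data in hand,
    # so a stripe of N data blocks costs only N + 1 writes.
    stripes, rest = divmod(blocks_written, data_disks)
    return stripes * (data_disks + 1) + 4 * rest

print(raid5_io_ops(1, 4))  # 4 physical I/Os for one random block
print(raid5_io_ops(4, 4))  # 5 physical I/Os for a full 4-block sequential stripe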

Re:Yes that's nice. (0)

Anonymous Coward | more than 5 years ago | (#25913295)

250MB/s drives will roughly offer you the same performance as a 1Gbps.

I'm so confused. I'm confused because I can't figure out if you're confused or not, which confuses me. You talk about bytes and bits in the same sentence.

Re:Yes that's nice. (5, Informative)

Glonoinha (587375) | more than 5 years ago | (#25911091)

Say what?

Actually, RAID can work EXACTLY in this way. Set up a RAID 0 array of 250Mb/s devices and if the host controller can handle it - bingo, Gigabit throughput from the array. There's a guy out there that RAID'ed six of the Gigabyte iRAM cards on some high end RAID card a year ago - and he managed somewhere in the neighborhood of 800MB/s - surely a year later we can do better than that. The only limitations his rig encountered were the limited space available, and of course the volatile nature of the iRAM cards.

The Micron device appears to have handled both the volatility issue when the power goes down and the problem of getting all that bandwidth through a single bus. When it becomes commercially available, count me in for one (once the price comes down enough for me to afford it).

Re:Yes that's nice. (1)

amazeofdeath (1102843) | more than 5 years ago | (#25911109)

"Can" is not the same as "does". Have you checked the actual performance in *all* the situations, not just raw read speed?

Re:Yes that's nice. (2, Insightful)

Free the Cowards (1280296) | more than 5 years ago | (#25911617)

And you think the 1GB/sec quoted in the title is actual performance in all situations, not just raw read speed?

Re:Yes that's nice. (1)

amazeofdeath (1102843) | more than 5 years ago | (#25911759)

No, I was commenting on RAID performance, and on Glonoinha's anecdotal evidence of how it works.

Re:Yes that's nice. (2, Interesting)

Glonoinha (587375) | more than 5 years ago | (#25913609)

There are a few videos on youtube of guys that RAID'ed iRAM's showing just insane performance.

If it weren't for the cost of adding four of these (plus four 1G sticks of pc3200 on each) I would have already scored a similar rig - but right now I'm working on a limited R&D budget. Maybe next year.

That said - these are really, really sweet - but I have to ask whether the RAID'ed iRAM or the new Micron SSD can hold a candle to a ramdisk (see also: http://www.ramdisk.tk/ [ramdisk.tk]). I figure on a machine that can actually address 8G or more of memory (likely Windows Server 2003 based) and use a massive chunk of it as a ramdrive - which is going to come out ahead?

How safe is it (1)

hbr (556774) | more than 5 years ago | (#25914693)

Trouble with RAM is that it disappears when the power fails.

Even with the iRAM you lose it after 16 hours if I understand correctly.

So SSD has a real advantage there by the sound of things (in being more like a real hard disk).

oops... (1)

hbr (556774) | more than 5 years ago | (#25914707)

Trouble with RAM is that it disappears when the power fails.

Ooops - obviously I mean you lose the information stored in the RAM, and not the RAM itself!

Re:Yes that's nice. (1)

Joce640k (829181) | more than 5 years ago | (#25911099)

Yes, but you know what I mean. Raw transfer speeds can scale quite linearly when you put multiple storage devices in parallel.

It's just a case of sorting out the controllers. SATA isn't fast enough for 1Gb/s so I assume it will be a mini-PCIe card or something like that.

If it is mini-PCIe then I'll definitely be getting one for my Eee PC.

Re:Yes that's nice. (0)

Anonymous Coward | more than 5 years ago | (#25913197)

SATA isn't fast enough for 1Gb/s so I assume it will be a mini-PCIe card or something like that.

Actually, SATA does 3Gb/s, so it would be fast enough for 1Gb/s, but this device does 1GB/s, which is 2 2/3x faster than the theoretical max of SATA.
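For reference, a quick unit check (a sketch; the 20% figure is SATA's standard 8b/10b encoding overhead, so the usable payload rate is lower than the raw line rate):

def sata_payload_mb_s(line_rate_gbps):
    # 8b/10b encoding means only 80% of the line rate carries data
    return line_rate_gbps * 1e9 * 0.8 / 8 / 1e6

print(sata_payload_mb_s(3.0))         # 300.0 -> SATA-II tops out around 300 MB/s of payload
print(1000 / sata_payload_mb_s(3.0))  # ~3.3  -> 1 GB/s is several SATA-II links' worth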

Re:Yes that's nice. (1)

nbert (785663) | more than 5 years ago | (#25911189)

If you are talking about RAID 0 you are almost correct (two disks in RAID 0 only get close to 2x the speed; they never quite reach it). The problem is that all data is gone if one drive dies, and the chance of that grows with every disk you add. The probability is 1 - (1 - p)^n (p being the per-disk failure rate and n the number of disks). So at a failure rate of 2% per disk over 3 years you get about 8% for 4 disks and about 15% for 8. Of course you could compensate for this by combining parity with striping (RAID 0+1 or RAID 5, for example), but you'll have to invest in more disks (higher cost) and they will not perform as fast as RAID 0.
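The array-failure probability follows directly from the per-disk rate, assuming independent failures (a quick check of the figures used in this comment):

def raid0_failure_probability(per_disk_rate, disks):
    # Chance that at least one of `disks` independent drives fails,
    # which for RAID 0 means losing the whole array.
    return 1 - (1 - per_disk_rate) ** disks

for n in (1, 4, 8):
    print(n, round(100 * raid0_failure_probability(0.02, n), 1))
# 1 -> 2.0%, 4 -> 7.8%, 8 -> 14.9%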

I'm not sure if Micron's demonstration is a big leap forward or just part of their marketing efforts. But if this technology is viable we might see similar SSDs from Intel very soon, because Micron is connected to Intel through the joint venture IM Flash Technologies, which produces NAND flash.

Re:Yes that's nice. (1)

dfghjk (711126) | more than 5 years ago | (#25911309)

The failure and scaling issues you mention aren't any different from those that Micron faces inside their product since the approach they take to performance is basically the same. You think doubling the flash parts and the number of channels isn't directly analogous to RAID 0?

Re:Yes that's nice. (1)

nbert (785663) | more than 5 years ago | (#25911359)

Not at all: if a disk in RAID 0 fails, all data is gone. If one block within an SSD fails, it only affects the files that had parts stored in that block.

They are not using multiple disks. The reason why the article mentions two disks is because they needed a source and a target.

Re:Yes that's nice. (1)

cheater512 (783349) | more than 5 years ago | (#25911739)

And if I short two pins on one of the flash chips, all data is lost from all the chips. :P

What they are doing is getting a bunch of flash chips and using RAID 0 on them.
One disk but with many chips in RAID 0 configuration hence the speed.
A single flash chip cannot hit 1Gbps.

Re:Yes that's nice. (1)

joshuao3 (776721) | more than 5 years ago | (#25912339)

Joce, it doesn't say 1Gb/s. It says 1GB/s, which is 32x faster than 220Mb/s SSDs. (Bear in mind, 1GB/s may be a typo....)

Re:Yes that's nice. (0)

Anonymous Coward | more than 5 years ago | (#25913987)

RAID with SSDs doesn't scale quite so beautifully as you imply. Most mobo SATA controllers couldn't manage 1GB/s of throughput, or even 200MB/s.

We'll need better controllers before raiding Intel's SSDs results in impressive performance.

I'm keeping my eye on FusionIO; they promised to release a consumer-grade PCIe 4x SSD in 2009 for " $1200".

http://www.fusionio.com/

Re:Yes that's nice. (4, Informative)

Kjella (173770) | more than 5 years ago | (#25910987)

This reminds me of all the demos of holographic disc technology. It'll be on the market in just 1 year!

That one has always been in the mysterious future (3-10 years away, never next year) and never really showed up outside of labs. SSDs on the other hand aren't really "new"; they're in essence the flash chips we've been using in cameras and USB sticks for many years, plus RAID 0, which has long been a well-known way to make slow storage devices faster by running them in parallel. There's quite a bit more controller magic than that, but there's nothing really revolutionary in the creation of SSDs, just the regular miniaturization process that's happening all around, which means they are reaching capacities and speeds that are useful for main computer storage.

Re:Yes that's nice. (2, Insightful)

TheRaven64 (641858) | more than 5 years ago | (#25911085)

300GB+ holographic disks are shipping now, but they definitely aren't in the 'affordable to /.-browsing types' category.

Re:Yes that's nice. (1)

Yvan256 (722131) | more than 5 years ago | (#25911203)

I can buy an external USB2, Firewire 400 and eSATA 1TB drive at Costco, today, for 235$ CAD. How much are those puny 0.3TB holographic disks, and how fast and reliable are they?

Re:Yes that's nice. (3, Interesting)

TheRaven64 (641858) | more than 5 years ago | (#25911265)

The InPhase disks are $180 and the drives are $18,000. Unlike your external disk, the disks are rated to last 50 years. Not sure how much the Optware versions cost, but they start at 1TB and go up from there.

Re:Yes that's nice. (1)

Yvan256 (722131) | more than 5 years ago | (#25911297)

Even if we take for granted that:
- hard drives will never increase in capacity
- the price of a 1TB drive will never drop

Add to that the following assumptions:
- a hard drive never lasts more than 1 year
- we need to be in RAID 1 (two drives) to be safe

That's 235$ CAD per drive, multiplied by two drives per year, multiplied by 50 years: 23 500$ CAD.

I'll assume your 18 000$ InPhase drive price is in US$, which means it would cost me 22 000$CAD for the drive alone without any InPhase disks.

So, it's either shell out 22 220$ CAD right now for an InPhase drive and a single 1TB disk, or spread out 23 500$ CAD over 50 years.

And since drive capacity keeps increasing while prices keep dropping, not to mention the unknown factor of still being able to buy InPhase disks in 50 years, I'll take the (currently magnetic) hard drives in RAID 1 option, thank you very much.

Re:Yes that's nice. (2, Informative)

TheRaven64 (641858) | more than 5 years ago | (#25911355)

You are assuming you'd only want a single disk. The target market is people who are generating several disks worth of data per day. If you are recording HD footage, and especially if you are editing it, then you burn through a TB very quickly. The cost of the drive becomes tiny per disk if you're using a lot of them. Even if you're only burning one disk a day, you're paying $50/disk over the course of the year. If you burn two a day then it brings the cost of disk and drive to around $200 each, very cheap for a 50-year archive of your content. The disks are the same size as a DVD, and so can be stored in a lot less space than a hard disk, which is also a factor when you are archiving hundreds of TB per year.

Like I said - they're not affordable by the average /. reader, but they do exist and there are segments of the market where they make good sense.
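A rough version of that amortization, using only the prices quoted in this thread (an illustrative sketch; the figures are the commenters', not verified):

def cost_per_disc(drive_price, disc_price, discs_per_day, years=1):
    # Amortize the drive over every disc burned, then add the media cost
    discs_burned = discs_per_day * 365 * years
    return drive_price / discs_burned + disc_price

print(round(cost_per_disc(18000, 180, 1)))  # ~229 USD per disc at one disc a day
print(round(cost_per_disc(18000, 180, 2)))  # ~205 USD per disc at two discs a day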

Re:Yes that's nice. (2, Interesting)

Yvan256 (722131) | more than 5 years ago | (#25911445)

In that context, then yes I do see how that would be a huge advantage.

I was looking at this as an average /. reader, as you say.

Re:Yes that's nice. (1)

quetzalblue (953290) | more than 5 years ago | (#25913185)

Hey that's great ! I gotta buy me some stock from this InPhase company .. apparently their drives never break ! Imagine that ! not like those sorry assed revolving dohickeys that break every year. Planned obsolescence ..

Re:Yes that's nice. (4, Informative)

SanityInAnarchy (655584) | more than 5 years ago | (#25911029)

It was about $300 extra for a 128 gig SSD in this Dell laptop. I just ran a casual test. Keep in mind, this is currently being used (lightly), and I haven't done anything to improve the results of this test -- in fact, probably just the opposite, as the file in question was downloaded via BitTorrent, and I've never defragmented this hard drive. It certainly hasn't been read since the last boot.

dd if=foo.avi of=/dev/null
348459+1 records in
348459+1 records out
178411124 bytes (178 MB) copied, 1.82521 s, 97.7 MB/s

Keep in mind, that's throughput -- it gains nothing from the real killer feature of no seek times.

I can always buy big, slow spinning disks and put them in a NAS somewhere. I can take old desktops, put Linux on them, and turn them into a NAS. For the kind of stuff that takes hundreds of gigs, I don't need much speed.

But for the places where it counts -- like booting an OS -- there is a definite, real benefit, and it's not entirely out of reach, if you care about this kind of thing.

Re:Yes that's nice. (2, Interesting)

nbert (785663) | more than 5 years ago | (#25911301)

AFAIK no SSD apart from Intel's newest line provides any real advantage over spinning disks. They are faster in some areas, but in others they perform very poorly (write times, for example). You'll get far more realistic numbers if you write to a real file with of= instead of /dev/null. Here is the difference:

Desktop nerdbert$ dd if=test.zip of=/dev/null
136476+1 records in
136476+1 records out
69876088 bytes transferred in 2.249553 secs (31062211 bytes/sec)
Desktop nerdbert$ dd if=test.zip of=Herbietest
136476+1 records in
136476+1 records out
69876088 bytes transferred in 3.291721 secs (21227829 bytes/sec)
Desktop nerdbert$ dd if=test.zip of=/dev/null
136476+1 records in
136476+1 records out
69876088 bytes transferred in 0.876336 secs (79736653 bytes/sec)
Desktop nerdbert$ dd if=test.zip of=/dev/null
136476+1 records in
136476+1 records out
69876088 bytes transferred in 0.843004 secs (82889392 bytes/sec)


The last two runs illustrate that caching speeds up the process, so a genuinely uncached read would be even slower than the first run in this example.
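For anyone wanting to repeat the test above without the page cache inflating the later runs, here is a hedged sketch (Linux-specific; posix_fadvise with POSIX_FADV_DONTNEED asks the kernel to drop the file's cached pages, and the file name is just the one from the example above):

import os, time

def timed_read(path, block_size=1 << 20):
    # Sequentially read `path` and return MB/s, dropping its cached pages
    # first so the result reflects the device rather than the page cache.
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # evict clean cached pages
        total = 0
        start = time.monotonic()
        while True:
            chunk = os.read(fd, block_size)
            if not chunk:
                break
            total += len(chunk)
        return total / (time.monotonic() - start) / 1e6
    finally:
        os.close(fd)

print(round(timed_read("test.zip"), 1), "MB/s")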

Re:Yes that's nice. (1)

cytg.net (912690) | more than 5 years ago | (#25911751)

I'd really like to see how these drives perform compiling HUGE codebases ... java/c/whatnot

Re:Yes that's nice. (1)

Repossessed (1117929) | more than 5 years ago | (#25913099)

The limit on compiling has always been the processor for me, which spends most of its time at 90% or higher (one core only - I really wish they'd multi-thread gcc), so I doubt the drive would help.

Re:Yes that's nice. (0)

Anonymous Coward | more than 5 years ago | (#25912619)

Your little 'dd' tests are useless. They mean nothing because Linux caches all kinds of shit. At least use hdparm, but even better would be an actual benchmarking tool.

Your computer or drive is slow as hell, by the way. My regular hard-drive system pushes close to 1 GB per second with those same pointless dd tests.

HAPPY THANKSGIVING!! (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25910905)

I LOVE YOU ALL!

interesting (-1, Troll)

Anonymous Coward | more than 5 years ago | (#25910945)

but even if this does make it into laptops or servers, CmdrTaco is still a cocksucker.

Re:interesting (0)

Anonymous Coward | more than 5 years ago | (#25911623)

On-topic, concise, and highly effective. Well played.

No SATA, eh? (4, Interesting)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#25910967)

SSDs built into mini-PCIe cards aren't new, so obviously they are possible (and I remember the concept going back as far as 44-pin IDE drives on special PCI cards). Historically, though, these cards have appeared, from the perspective of the computer, as ordinary IDE or SATA adapters that just happen to have storage attached.

Does anybody know if this widget from Micron is similar, or are they actually pushing some new flavor of interconnect that will require BIOS tweaks and/or special drivers?

Re:No SATA, eh? (0)

amazeofdeath (1102843) | more than 5 years ago | (#25910989)

It's a card with the flash chips on it, not a controller card, which you seem to be referring to. The latter acts as an interface between the actual drives and the motherboard, the former *is* the drive itself.

Re:No SATA, eh? (1)

Kjella (173770) | more than 5 years ago | (#25911049)

It's a card with the flash chips on it, not a controller card, which you seem to be referring to. The latter acts as an interface between the actual drives and the motherboard, the former *is* the drive itself.

No, the parent asked it right. He was wondering if it still appears to the host as a controller with a drive attached. And the answer is that I don't see why not; there are things faster than SATA2, like FC, that as far as I know work in the same way.

Re:No SATA, eh? (1)

amazeofdeath (1102843) | more than 5 years ago | (#25911165)

I meant that there's a difference between drive -> IDE -> controller card -> PCI/PCI-E -> motherboard, and card=drive -> PCI/PCI-E -> motherboard. Of course there's no difference from the motherboard's point of view.

Re:No SATA, eh? (1)

owlstead (636356) | more than 5 years ago | (#25912063)

I don't know where I got the info, I think in some comments on the video, but there was mention that it looked to the system just like a normal drive. I don't think they mentioned that it looked like SATA/SCSI or IDE, but I don't think that would matter much.

Re:No SATA, eh? (1)

zdzichu (100333) | more than 5 years ago | (#25911163)

They may present the SSD as a memory region accessible through the PCI address space, the same way graphics cards present their memory. There's already a driver for that kind of memory; it has been included in the Linux kernel for a few years and was used to implement swap-over-video-RAM.

Re:No SATA, eh? (1)

kasperd (592156) | more than 5 years ago | (#25911861)

or are they actually pushing some new flavor of interconnect that will require BIOS tweaks and/or special drivers?

BIOS tweaks shouldn't be necessary. ATA controller cards that you plugged into a PCI slot came with a ROM chip on the card containing a driver that would allow the BIOS to use the disk. You could even work around bugs in the BIOS's own driver for the onboard controller that way. I'm sure this new card will come with a driver on the board that will allow the BIOS to boot from it. However, such a driver is only useful for booting and for running DOS-era operating systems. Any modern operating system will need its own driver for such a card, which means your question is still relevant. I don't know the answer, though. The card could have been made compatible with existing ATA, SCSI, or SATA drivers. But it's a new kind of device, and possibly the most efficient way to use it is different from protocols designed with disks in mind.

Oblig (2, Interesting)

PearsSoap (1384741) | more than 5 years ago | (#25910981)

64 NAND flash chips in parallel should be enough for anyone!
I'm curious, what are the applications for this kind of disk speed?

Re:Oblig (5, Funny)

Narnie (1349029) | more than 5 years ago | (#25911043)

Perhaps loading Vista in less than a minute?

Maybe?

Re:Oblig (1)

Poorcku (831174) | more than 5 years ago | (#25911621)

actually you are not that off... i make my vista installs from a USB chip. Customized with vlite the install lasts no more than 15 mins. And i am a consumer not an IT specialist :)

Re:Oblig (4, Interesting)

im_thatoneguy (819432) | more than 5 years ago | (#25911075)

Uncompressed HD, 2k and 4k film playback and capture.

At work we regularly are working with dozens of layers of 2048x1024 32bit uncompressed footage at the same time.

Re:Oblig (1)

w0mprat (1317953) | more than 5 years ago | (#25911733)

Simple: swap file.

With this kind of SSD throughput it's no longer as necessary for an OS to burn CPU cycles populating a file cache in memory, which is the technique that hides the slack performance of modern storage compared to system RAM.

In fact you could swap A LOT of process memory out of RAM and only experience a tiny perceived slowdown in application performance. To put a guesstimate on it: you could run an application with a footprint 5-10 times bigger than system memory with only a small percentage reduction in performance in the worst case. Another side effect is that you would not see much OS performance scaling as you increased system RAM.

throughput IS NOT most important parameter (1)

Cronq (169424) | more than 5 years ago | (#25911047)

"throughput" isn't that important. Random reads/writes is what shows that most of SSD are crappy and weak unfortunately.

The worse thing is that everyone things that throughput is so important :-/

Re:throughput IS NOT most important parameter (2, Interesting)

myxiplx (906307) | more than 5 years ago | (#25911067)

Trust me, throughput is still important if you're running these in a fileserver on a fast link (10Gb ethernet link, infiniband, fibre channel, etc). The read & write speeds of standard SSD's mean you need a whole bunch in parallel to prevent them becoming a bottleneck, which makes them hard to integrate into existing servers.

In contrast, a single fast PCIe SSD can drop right in. There's definitely a market for high-bandwidth SSDs in high-end storage devices.

Re:throughput IS NOT most important parameter (0)

Anonymous Coward | more than 5 years ago | (#25911101)

Last I checked, SSD drive capacity ran around the 32GB mark? So the throughput means you can transfer the entire contents of the drive in under a minute. Who cares? In a drive that size, random access is likely what will matter more.

I guess what I'm saying is, give me a 1TB drive or bigger that can transfer at 1GB/s and then we're talking, because I can store my large files there and retrieve them in an instant.

Re:throughput IS NOT most important parameter (1)

Pentium100 (1240090) | more than 5 years ago | (#25912243)

In contrast, a single fast PCIe SSD can drop right in. ...

Also you are limited to no more than 7 of them, in contrast to SCSI or even SATA (with 8-port controllers).

Now if someone created a PCIe "hub"...

Re:throughput IS NOT most important parameter (2, Informative)

lysergic.acid (845423) | more than 5 years ago | (#25911373)

the Micron video shows a 2-drive setup hitting 200,000 I/Os per second; at 2KB per random read that's ~400MB/sec.

a benchmark performed by Linux.com [linux.com] also shows that SSD absolutely creams SATA [linux.com], even 6 SATA drives in RAID 6, in terms of random seek. in other tests a single Mtron 16GB SSD gave 111 MB/s sustained read with 0.1 ms access time, outstripping the WD Raptor 150, which was the fastest SATA drive at the time the test was performed (12/13/07). the only area where SSD lags behind is random write, where it came in 23% slower than the Raptor. but with the several-fold increases in I/Os per second achieved by Micron's PCIe cards, even random write speeds would be faster than those of normal mechanical rotating drives.
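The IOPS-to-throughput conversion implied by those numbers is just multiplication (a quick sanity check of the ~400MB/sec figure):

def iops_to_mb_per_s(iops, io_size_bytes):
    # Sustained throughput implied by an IOPS figure at a given I/O size
    return iops * io_size_bytes / 1e6

print(iops_to_mb_per_s(200_000, 2048))  # 409.6 -> roughly 400 MB/sec for 2KB random reads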

Re:throughput IS NOT most important parameter (3, Insightful)

XDirtypunkX (1290358) | more than 5 years ago | (#25912921)

Actually, random reads are a very big strong point of SSDs, because they have two orders of magnitude less seek time than a platter drive.

Random writes are good on SLC SSDs (the expensive variety) and average on MLC SSDs (although, many MLC drives cause a pause after too many random writes at the moment).

Interleave (1)

poity (465672) | more than 5 years ago | (#25911105)

So is this pretty much like placing the chips in a RAID within a single device? So 1 chip failure brings down all the data and makes the entire drive unreadable until you replace the bad chip? How easily can you plug in a new chip to recover all your data?

Re:Interleave (5, Interesting)

billcopc (196330) | more than 5 years ago | (#25911227)

You're implying that SSDs fail as often and disastrously as fast-spinning disk platters.

They don't, which is why a beowulf cluster of SSDs is a beautiful thing, though my concern is DDR2 can deliver much faster throughput and ns-latency, while the density trails a bit behind SSD but not that bad.

With 4GB DDR2 modules hitting the mainstream, and 8GB modules in the high end, what's stopping someone from putting a bunch of them on something like Gigabyte's i-RAM (minus the stupid SATA bottleneck) and having themselves a DIY uber-SSD? Sure, there are differences, but it's nothing a battery can't fix.

Re:Interleave (1)

Ungrounded Lightning (62228) | more than 5 years ago | (#25911381)

With 4GB DDR2 modules hitting the mainstream, and 8GB modules in the high end, what's stopping someone from putting a bunch of them on something like Gigabyte's i-RAM (minus the stupid SATA bottleneck) and having themselves a DIY uber-SSD?

Possibly power density and/or signal integrity hooking up so many devices.

Not insurmountable. But not straightforward, either.

But you need a use case to pay for the development and devices. Fast long-term bulk storage built out of active devices isn't it: with that many in a box, backup power won't be a battery (or at least not a little battery on the board), and you'll need to keep the cooling running, too.

I wouldn't trust MY long-term data to a device that forgets it if the power is lost - and that requires the whole system to run for half an hour or more on backup power to spool it into something that can survive an outage.

(For shorter term stuff it might make sense.)

Re:Interleave (1)

b4upoo (166390) | more than 5 years ago | (#25911389)

What alterations does a PC need to really take advantage of this blistering high-speed device? Obviously we can't pump that kind of speed through the internet. I can see certain programs being rewritten to make good use of such speed, such as compression and context search scripts.

Re:Interleave (3, Insightful)

owlstead (636356) | more than 5 years ago | (#25911991)

You *are* joking, right? Currently memory bandwidth is only a minor problem compared to disk performance. Disk I/O is either really slow or really, really expensive. Even nowadays I can download faster than I can save, PAR2-verify and unrar my binaries. I won't go into playing games at the same time: impossible. Disk speed is a slow crawl. And that's just consumer stuff; I won't go into tuning high-throughput databases.

Re:Interleave (0)

Anonymous Coward | more than 5 years ago | (#25915289)

First rule of the Fight Club is...

Re:Interleave (1)

kasperd (592156) | more than 5 years ago | (#25911903)

If the data is stored redundantly, then a single chip failing does not render your data inaccessible. You would be able to go on without even noticing (which in some sense is bad, because you wouldn't replace it until it is too late). If data is not stored redundantly, then you cannot replace a chip to get your data back. It may be possible to replace a chip to make the card work again, but you'd have to reformat and start from scratch.

I guess the data is not stored redundantly. Making it redundant would either double the price per GB or quadruple the price per GB/s. And even if the data is stored redundantly on the card, you could still lose it for other reasons. The question really is: how frequently do the chips fail? You don't want to use RAID as a backup strategy. Besides, if you want RAID, you don't want it to be implemented by this card, since if the card fails you are in trouble anyway. You could do RAID-1 over two cards of this type if you really want to, but even with that, you still need a backup strategy.

There is a market... (4, Insightful)

Max Romantschuk (132276) | more than 5 years ago | (#25911127)

...for really high bandwidth stuff.

For example, these puppies from Edgeware, designed for video streaming, can do 20GB/sec:
http://www.edgeware.tv/products/index.html [edgeware.tv]

(And these aren't vaporware, I've seen the actual hardware in action.)

Granted it's very custom stuff, but putting tech like this in a box with a SATA interface is really just evolutionary... Cool nonetheless, though. :)

Re:There is a market... (1)

James Youngman (3732) | more than 5 years ago | (#25911737)

Granted it's very custom stuff, but putting tech like this in a box with a SATA interface is really just evolutionary... Cool none the less though. :)

Well, it's already happened. Take a look at the RAMSAN [superssd.com] . It kicks ass.

Re:There is a market... (1)

c_g_hills (110430) | more than 5 years ago | (#25911805)

I was about to be really impressed, but their website shows hardware doing 20Gb/sec, not GB/sec. Did you really mean that?

Re:There is a market... (1)

Max Romantschuk (132276) | more than 5 years ago | (#25913939)

Whoops... No, my bad, shift must have gotten stuck and the kids got my attention while I was previewing my post... ;)

But 20 Gb/sec already saturates two 10 Gb network ports, which is enough to impress me for now...

Is it the end of SATA? (1)

Espinas217 (677297) | more than 5 years ago | (#25911133)

So should we start thinking about replacing SATA with something else that can handle this?

Re:Is it the end of SATA? (1)

Yvan256 (722131) | more than 5 years ago | (#25911247)

The end of SATA? Dude I'm still using parallel IDE hard drives over here, and some are over FireWire 400 or USB 2.0.

In any case, Firewire 1600 and Firewire 3200 are just around the corner.

Re:Is it the end of SATA? (1)

Rockoon (1252108) | more than 5 years ago | (#25912463)

While it's true that SATA had its sights set very low (unfortunately), it's not like PATA's limitations.

A single SATA-I channel can deliver 150MB/s and a SATA-II channel can deliver 300MB/s, but unlike PATA, SATA channels are independent (no master/slave sharing relationships). Most motherboards come with 4 SATA ports of some kind right on the board, so 600MB/s or 1200MB/s can be delivered via RAID 0 or some other striped setup.

An additional factor is that not all SATA controllers are equal. Most cannot handle a full blast from all 4 channels, and indeed the controller has been found to be the limiting factor in many cutting-edge RAID setups (see the "Battleship MTRON" tests). This is where these Hard-Cards come into play. By bundling the controller with the drives, the RAIDed nature of these devices becomes completely transparent.

The adoption of a standard faster than SATA-II is inevitable, but that will just push these Hard-Cards even further, because after all... SATA-III or whatever it is will still be overshadowed by RAIDed versions of it.

Also a possibility is that these Hard-Cards are given very large DRAM caches (1GB, for example), which pretty much eliminates the SSD random write issue for nearly all consumers, since they will not be writing 1GB randomly.

Re:Is it the end of SATA? (1)

hbr (556774) | more than 5 years ago | (#25914737)

While its true that SATA had very low sights set (unfortunately), its not like PATA's limitations.

...like losing all your nails when you try to unplug the little bastards :-)

Just what Fusion I/O needed... (1)

brianjcain (622084) | more than 5 years ago | (#25911207)

...some competition! Seriously, I think they'll be better off. There's probably too many nervous nellies out there unwilling to dive into non-SAS/SATA/FC storage with a newcomer like Fusion I/O.

BTW, WTF is up with "The second generation of PCIe is expected out next year ..."? It's been out for a while now, I've seen motherboards, GPUs, and IB HCAs that support gen2.

This product doesn't seem to have a niche (1)

FreakerSFX (256894) | more than 5 years ago | (#25911415)

It is exciting to see this sort of development on the server front, though these technologies never seem to offer the huge advantage we'd expect. The fact that multiple companies are going in multiple directions for storage technology is excellent for the marketplace.

It seems unlikely that this will really benefit servers, because for applications that need high IOPS numbers you're generally looking at a SAN or some sort of Fibre Channel storage.

Database and related apps (like SAP, Oracle, or Exchange) need a lot of space, and while SAN technology is expensive, it provides a lot of advantages over in-chassis storage devices. I can't see it being that useful. I suppose for a small Exchange or Oracle deployment that needs high IOPS without needing a lot of space or the redundancy features of a SAN, it might find some use.

All of this technology is still useful, though, if only for closing off technology development along certain lines when a product dead-ends, or for future product development where the underlying technology has merit.

Bottleneck removed (2, Interesting)

w0mprat (1317953) | more than 5 years ago | (#25911495)

This would be the first time a storage device would significantly saturate system memory bandwidth.

Indeed, Intel's SSD has an internal NCQ-like command queue system to mask host latency. Common storage controllers are (obviously) not up to the job.

1GB/s from a single drive finally brings storage speed back in line with Moore's law, which it seems only capacity has followed.

btw what's the reliability of ssd drives (1)

Sark666 (756464) | more than 5 years ago | (#25911649)

say with a power outage. I've lost a couple of drives over the years that way.

Re:btw what's the reliability of ssd drives (0)

Anonymous Coward | more than 5 years ago | (#25912137)

An SSD should be less resistant to power surges than a HDD.

For the same reason a CRT can withstand lightning better than an LCD or plasma TV: the CRT operates at very high voltages and has big capacitors, both of which can absorb energy. A HDD has a motor in it. An SSD has nothing to absorb spikes.

But, only the people that built the SSD know for sure. It could be your HDDs failed because the spike jerked the head violently, fried its protection diodes or scratched a platter. If the ICs on the HDD were fine, then maybe an SSD in its place would've been unaffected.

My advice is to spend slightly more than $8 for a decent powerstrip, to protect an expensive SSD. And it couldn't hurt to check the outlets for bad grounding. (I've seen that often in old & badly remodeled homes. yikes...)

What's so high-end about this? (0)

Anonymous Coward | more than 5 years ago | (#25911831)

While Micron's SSD technology is aimed at high-end applications that would run on Fibre Channel SANs, such as transactional databases or streaming video, [...]

PCIe x8 (or x16 for the 2-in-1 card) is nothing rare even in a consumer PC. What is so high-end about it (except, probably, the price)?

If they're trying to say "consumers don't need this speed", I'm sure most consumers don't need speed faster than what an HDD can offer, either.

Re:What's so high-end about this? (1)

owlstead (636356) | more than 5 years ago | (#25912385)

64 flash chips, probably single-level cell, where Intel only goes to 10 chips for its high-end SSD devices? Multiple controllers on one board, needing active cooling? You bet it will be high priced. I would guess 600 times 6 = 3600 dollars at the minimum, but probably with a price premium of, say, 2400 because this is an emerging market and they don't have much competition at this level (yet).

6000 would be 6 times the price of my current desktop.

SATA/300 (0)

Anonymous Coward | more than 5 years ago | (#25912095)

At this point, I'd be happy with a cheap, reliable piece of storage that can fully saturate an SATA/300 (300 MiB/s) bus. Don't get me wrong, I think SSDs are the future, and faster than SATA/300 would be great. At this point, however, it's what most new hardware has and that shouldn't be overlooked just because there technically are ways to make the things faster. The fastest drives would probably cost more, for one thing, and wouldn't work with as many existing systems.

SOC (1)

wikinerd (809585) | more than 5 years ago | (#25912673)

So if we are going to saturate our data links with fast SSDs now, why not put the SSD on the CPU die together with a GPU, BIOS, OS, and everything else? There are many embedded SoCs built this way, but they are aimed at low-power, always-on applications. I think the era when we will have a desktop SoC is not so far away, if we can find a way to keep it cool and cheap.