
Fusion-io IoXtreme's Consumer-Class PCIe SSD — Impressive Throughput

timothy posted more than 4 years ago | from the annoying-use-of-the-word-play dept.

Data Storage 110

MojoKid writes "When Fusion-io's first ioDrive product hit the market, it was claimed to be a 'disruptive technology' by some industry analysts, with the potential to set the storage industry on its ear. Of course the first version of the ioDrive was an enterprise-class product that showed the significant potential of PCI Express direct-attached SSD storage, but its cost was such that the mainstream market couldn't possibly justify it, no matter what the upside performance looked like. Then we heard of Fusion-io's more consumer-targeted play, the ioXtreme, that was announced this past summer. Fusion-io has only very recently released these new, lower cost cards to market. The first-ever full performance review of the product over at HotHardware shows the half-height PCI Express X4 cards are capable of a robust 800MB/sec read bandwidth and about 300MB/sec of write bandwidth. The cards particularly excel versus a standard SSD at random read/write requests and even perform relatively well with small block transfers."


First post? (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30122184)

I am the greatest?

In the right place (3, Insightful)

Froze (398171) | more than 4 years ago | (#30122216)

This is the proper place for memory, on the system bus.

Putting memory behind a drive controller is just like making your gas pedal respond to a buggy whip (OK, car analogies aren't my strong point).

Re:In the right place (4, Funny)

MobileTatsu-NJG (946591) | more than 4 years ago | (#30122366)

Putting memory behind a drive controller is just like making your gas pedal respond to a buggy whip (OK, car analogies aren't my strong point).

Yeah, no kiddin. I mean if the whip has bugs in it, isn't that a driver issue?

Re:In the right place (2, Insightful)

Wesley Felter (138342) | more than 4 years ago | (#30122512)

SATA does have its advantages, though: laptop support, bootability, hot-swap, cross-platform (no drivers needed), etc.

Re:In the right place (2, Informative)

tlhIngan (30335) | more than 4 years ago | (#30123092)

SATA does have its advantages, though: laptop support, bootability, hot-swap, cross-platform (no drivers needed), etc.

A proper PCIe (miniPCIe) card supports bootability (it appears as a regular controller+disk), and laptops often boot from miniPCIe SSDs (netbooks notably - the Asus Eee PC and the SSD-based Acer Ones, amongst others). Hot swap not so much (I know SATA supports it, but do real-world motherboard controllers support it?), though I suppose if someone were to make it an ExpressCard design, possibly. Cross-platform/no drivers if it appears as a regular IDE controller+disk.

Booting off SATA is effectively booting off a PCIe card - the SATA controller hangs off the PCIe bus (or virtual PCIe for on-chipset controllers - but they still enumerate the same as normal PCI(e) devices).

Re:In the right place (1)

Wesley Felter (138342) | more than 4 years ago | (#30123252)

The ioDrive/ioXtreme doesn't appear as a regular IDE or AHCI controller because that would significantly degrade its performance; most of Fusion-io's "special sauce" is in the driver.

Re:In the right place (1)

petermgreen (876956) | more than 4 years ago | (#30123340)

But as long as it has a BIOS extension ROM on it that knows how to make basic use of the interface (it doesn't have to be particularly fast, just good enough to let the OS kernel/drivers be loaded by the bootloader), then it should be bootable.

Re:In the right place (1)

Amouth (879122) | more than 4 years ago | (#30131152)

That's the point - to make it so a generic OS can see the card and get to that point, it has to advertise itself as a card compliant with an existing standard - which it isn't, because the existing standards aren't fast enough for it. Instead you end up with OS/application-specific drivers to present the card as storage space.

Sure, they might be able to make it present itself as a standard ATA or SCSI interface and volume with degraded performance, and then somehow load a driver in the OS to talk over that existing connection in its specialized way, but to be honest, that is just another layer of abstraction and another spot for problems to creep up. If you actually have a need for the kind of performance this thing offers, being able to boot from it is a nice-to-have, not a need-to-have, from a company that is still refining its tech (if they weren't still refining it, there would be no need to ship production units with FPGAs).

Maybe one day they will get to a point where they are happy with their custom interface and present it for standard adoption - then generic OSes can implement support for not just this card but all devices using the new interface.

Re:In the right place (1)

petermgreen (876956) | more than 4 years ago | (#30133018)

it has to advertise itself as an existing standard-compliant card
No it does not; it just has to have a ROM that knows how to talk to the card, is loaded as a BIOS extension, and traps interrupt 13h. The bootloader uses that interrupt to access the drive and load critical parts of the OS, including the drivers needed for the main hard drive. The OS then switches into protected mode and the driver takes over.

There are plenty of SATA/SCSI/RAID cards/chips that are bootable and yet need a special driver for Windows/Linux to access them. Remember the "press F6 to install a third-party SCSI or RAID driver" prompt?

Re:In the right place (2, Insightful)

marcansoft (727665) | more than 4 years ago | (#30123466)

Most motherboards these days do implement SATA hotplugging. In fact, it's pretty important for eSATA.
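On Linux, for example, a hot-plugged SATA/eSATA drive on an AHCI controller can be picked up with a SCSI host rescan. A sketch (the host number is an assumption; check /sys/class/scsi_host/ on your machine):

```shell
# Force a rescan of the first SCSI/SATA host after hot-plugging a drive:
echo "- - -" > /sys/class/scsi_host/host0/scan
# The new disk should then show up in the kernel log, e.g. as /dev/sdb:
dmesg | tail
```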

Re:In the right place (1)

AnonGCB (1398517) | more than 4 years ago | (#30123918)

Most netbooks I know about use a modified mini-PCIe pinout, with USB and SATA on the connector, rather than actual mini-PCIe.

Re:In the right place (1)

LoRdTAW (99712) | more than 4 years ago | (#30131220)

Those laptop PCIe SSDs are not PCIe. The compact PCIe connector also has pins for SATA and USB, just like ExpressCard, so the SSDs use the SATA pins. I thought I could use one in a PCIe x1 adapter card to boot an ATX motherboard, but after some research found out that was not possible.

Re:In the right place (1)

quitte (1098453) | more than 4 years ago | (#30122684)

It's in the right place, but will it behave the right way?

When those mainboards with extra flash for Vista were announced, I hoped it would be accessible directly via Linux MTD.
Without reading the article, I still assume that it will again be just another HDD simulator that doesn't allow the OS to do the wear levelling or map the storage directly into accessible memory.

Too bad. Since Debian's live-helper made building live systems easy, I'm running my desktop and laptop from squashfs anyway, so I'd love to see them do execute-in-place, too. http://elinux.org/Application_XIP [elinux.org]

sweet (4, Insightful)

Lord Ender (156273) | more than 4 years ago | (#30122242)

I bought a SATA SSD which can read and write at around 200MB/s. It was the greatest upgrade I've ever done, and for just $200 (less than my CPU or GPU). Now, I can't stand waiting for things to load when I have to work using mechanical hard drives.

If 200MB/s is that big a difference, 800MB/s is going to be... actually probably not that much better. My computer already feels "instant."

Re:sweet (2, Insightful)

XanC (644172) | more than 4 years ago | (#30122368)

It's the read latency, not MB/s that's most important for desktop usage or for most databases. Everybody quotes the numbers that they're used to quoting, but the game is different with SSDs.

Re:Latency (2, Informative)

InvisiBill (706958) | more than 4 years ago | (#30122580)

This ioXtreme is rated at 80 microseconds, while the Intel X25-M G2 is rated at 50 microseconds.

Re:Latency (1)

ShooterNeo (555040) | more than 4 years ago | (#30123558)

And the worthless JMicron controller SSDs probably have read latencies under 100 microseconds as well.

It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.

Re:Latency (1)

Fulcrum of Evil (560260) | more than 4 years ago | (#30123912)

It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.

And that throughput is dominated by latency in HDDs. Much less so for SSDs.

Re:Latency (1)

tepples (727027) | more than 4 years ago | (#30124416)

It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.

For one thing, in a hard disk, seek latency dominates throughput for random loads. SSDs improve throughput by cutting latency. For another, interactive tasks demand high throughput on a burst of transactions, which needs low latency.

Which? (1)

Singularity42 (1658297) | more than 4 years ago | (#30122388)

I got the Kingston V-series 64GB for around $120 - I think it's only rated around 100MB/s. Still feels a lot faster, especially after boot-up.

Re:sweet (1)

Xiph1980 (944189) | more than 4 years ago | (#30122414)

It's not the throughput that makes your computer feel that much more responsive. It's the latency (or lack thereof). Access times of hard drives are easily a factor of 100 higher than those of SSDs.

Re:sweet (1)

Spatial (1235392) | more than 4 years ago | (#30122452)

The random access speed is what makes it seem faster, not the throughput. That's only about twice as fast as a good HDD in terms of throughput, but the access times are orders of magnitude lower.

Re:sweet (3, Insightful)

Ivan Stepaniuk (1569563) | more than 4 years ago | (#30122518)

A lot of people feel their fast mechanical disks are "instant" too. I guess there are a lot of things you -can- do four times faster with this SSD than with the one you have. Killing mosquitoes with a gunshot is also fast.

Re:sweet (1)

compro01 (777531) | more than 4 years ago | (#30122948)

It's not the throughput you're noticing. It's the seek latency, at which SSDs are many times faster than mechanical drives (comparing Intel's X25-M to WD's 10K RPM VelociRaptor, you're looking at about 65x faster; compared to a 7200RPM drive, about a 100x difference).
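Those ratios check out on a back-of-the-envelope basis. The figures below are assumptions (roughly 12 ms average access for a 7200RPM disk, roughly 0.085 ms for an X25-M random read, typical spec-sheet numbers, not from the article), and they land in the same ballpark:

```shell
# Rough latency ratio: assumed 7200RPM access time vs assumed SSD read latency.
awk 'BEGIN { hdd=12.0; ssd=0.085; printf "%.0fx\n", hdd/ssd }'
# prints 141x
```

Plug in a faster disk like a VelociRaptor (shorter seeks, 10K rotation) and the ratio drops toward the 65x figure above.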

Re:sweet (0)

Anonymous Coward | more than 4 years ago | (#30123182)

My whole computer isn't worth more than $200. Fortunately I don't use any heavy apps (or desktop environments), so my computer still feels "instant". :P

Re:sweet (1)

AbRASiON (589899) | more than 4 years ago | (#30124268)

And my rebuttal to your post (copied and pasted from my reply to someone on another forum):

I recently installed 2 of the 120GB OCZ Agility drives in RAID0 - apparently SSDs scale better with RAID than a regular hard disk.
I can read at 390MB/s, write at 220MB/s, and the random 4K reads and writes are about 23MB/s (regular disks can do about 0.7MB/s in such tests).

According to benchmarks, a single OCZ disk is pretty darn close to the Intel in real-world performance tests, and one can only guess that 2 of them would come close to matching the performance of a single Intel.

I was using Windows 7 with ReadyBoost on a USB stick before I got the SSDs, and I have an absolute monster PC. I also know how to configure the thing: the temp directory was on one disk, the swap was on another disk, the C: was on a 7200RPM drive, and it was defragged and kept pretty clean.

The problem is that while some benchmarks show my machine vastly superior with the 2 SSDs, and I'm pretty goddamn impatient... I errr... kinda can't really notice the difference much. :(
Sure, Crysis loads a level now in as little as 24 seconds instead of 34, and HL2 opens in 20 seconds instead of 30, but... not as snappy as I was hoping.
Yeah, OK, if I open 12 applications in a single click, it'll destroy a HDD, but for general use I honestly don't notice - Torchlight loads were still slow in my opinion, and Steam doesn't magically open in a second (it has to connect to the net) anyhow. I had the money and I wanted to fiddle around with some new hardware.

FWIW, SATA 3.0 is next year, ONFI 2.0 is next year, and Intel and Indilinx (OCZ) revision 3 are next year... I am almost tempted to change my stance and suggest waiting.
Finally, yes, if you have a shit PC with a slow disk, it'll totally make Windows usable - but Windows 7 is so well optimised (I'm genuinely surprised) and a meaty PC is so quick, it really is hard to notice.

Re:sweet (1)

Lord Ender (156273) | more than 4 years ago | (#30124972)

With a mechanical disk, you must wait on apps to load. With a fast SSD, they load as fast as you click. That is a huge difference. Your train of thought is never derailed due to disk waits.

There is no cure for net latency yet. This is irrelevant. My computer works as fast as I think, and I love that!

Re:sweet (1)

AbRASiON (589899) | more than 4 years ago | (#30125188)

I know all this - search my history on disks. I know how they work, I know about latency, I know which portions of disk operations should be quicker, and I'm telling you, on a high-end machine with a 7200RPM disk and 6GB of RAM the difference is negligible, especially on a quad-core rig which used a 2GB ReadyBoost disk.

Re:sweet (1)

Lord Ender (156273) | more than 4 years ago | (#30126006)

In my experience, Netbeans takes about ten times longer to load on a mechanical disk. If you call that negligible, you have a very strange definition of "negligible."

Re:sweet (1)

WuphonsReach (684551) | more than 4 years ago | (#30125856)

FWIW, SATA 3.0 is next year, ONFI 2.0 is next year and Intel and Indilinx (ocz) revision 3 is next year,... I am almost tempted to change my stance and suggest waiting.

I'm waiting until they hit my price point of under $1/GB for the better units. MLC based SSDs are still up around $2.25 to $2.45 per gigabyte for the low-end stuff, with the better MLC in the $2.50 to $3.25 per gigabyte range. I think the best spot price I've seen yet is around $1.90 for MLC.

At $1/GB, I'd quickly replace the 2.5" SATA magnetic drive in my laptop with an SSD. And maybe the boot drive on my game PC. Once it gets below $0.65/GB, I'd definitely use it on the game PC for the boot drive. For servers, the SLC stuff needs to get below $5/GB (not the $10-$15/GB that it is now).

(Sigh) Probably another year to break $1/GB and another year after that to get below $0.50/GB for the MLC stuff.

Still can't boot off of it. (2, Informative)

jandrese (485) | more than 4 years ago | (#30122464)

It still has many of the limitations that the original Fusion-io cards have: it's pricey at $11/GB (although not astronomical like the original products), and you still can't boot off of it. This means you'll need at least one old-fashioned drive with the OS on it to get your machine going, which is a shame because the system files can often make good use of SSD performance.

On paper, I don't think the performance difference between this and something like an Intel X25-M is going to justify the four-fold price difference. When people went from their laptop HDD to the Intel drive, they often saw startup times and whatnot go from multiple (tens!) of seconds to less than a second. This card is likely to push them from less than a second to slightly less than a second; it's just not worth it to most people.

Re:Still can't boot off of it. (2, Insightful)

TeknoHog (164938) | more than 4 years ago | (#30122614)

It still has many of the limitations that the original Fusion-io cards have: it's pricey at $11/GB (although not astronomical like the original products), and you still can't boot off of it. This means you'll need at least one old-fashioned drive with the OS on it to get your machine going, which is a shame because the system files can often make good use of SSD performance.

I have a Linux machine that boots off a hard drive (i.e. bootloader and kernel) and the rest of the system runs on an SSD. The HD can then spin down until the next boot. I guess other real operating systems can do this too.
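A sketch of that split layout (the device names and mount options are assumptions for illustration, not from the parent):

```shell
# /etc/fstab - /boot (bootloader + kernel) stays on the spinning disk,
# everything else lives on the SSD (device names are assumptions):
#   /dev/sda1   /boot   ext2   noatime   0 2
#   /dev/sdb1   /       ext4   noatime   0 1
# After boot, let the hard drive spin down when idle (60 * 5 s = 5 min):
hdparm -S 60 /dev/sda
```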

Re:Still can't boot off of it. (1)

linzeal (197905) | more than 4 years ago | (#30122650)

Try that with Windows. All the operating system files for Windows must be on the same drive and partition.

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30122770)

So what? Just use GRUB to chainboot.
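For reference, a minimal chainload stanza in GRUB legacy looks like this (the disk/partition numbers are assumptions):

```shell
# menu.lst / grub.conf entry chainloading another bootloader
# (disk/partition numbers are assumptions):
title Windows
rootnoverify (hd0,0)   # partition holding the target boot sector
chainloader +1         # load and jump to its first sector
```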

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30123246)

Can't chainboot a non-int13 device

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30122812)

He said other REAL OSes.

Re:Still can't boot off of it. (1)

afidel (530433) | more than 4 years ago | (#30123366)

Uh, Boot and System volumes can in fact be different. The GUI mode setup might not let you do this but multibooters have known it for years.

Re:Still can't boot off of it. (1)

Barny (103770) | more than 4 years ago | (#30125612)

Wrong actually.

I have the Program Files (x86), Users, and user-data folders on HDD partitions mounted via NTFS mount points. It means your system boots and runs some apps as fast as the SSD can manage, but for storage of big files and junk, the spinning rust takes over.

Re:Still can't boot off of it. (1)

TheRaven64 (641858) | more than 4 years ago | (#30128008)

If you're not writing to it much, then you can plug a (very cheap) 4GB CF card in via a $2 CF-to-IDE adaptor as a boot volume and get rid of mechanical disks altogether. Of course, if your SSD is bootable then this isn't an issue.

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30122832)

It still has many of the limitations that the original FusionIO cards have: It's pricey at $11/GB (although not astronomical like the original products), and you still can't boot off of it.

I've never understood why they wouldn't make these bootable. People use bootable PCI-E RAID cards (or just SATA controllers, for that matter) all the time. What's so tricky about making a bootable PCI-E SSD? Is it really that hard to write the code for it?

Re:Still can't boot off of it. (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#30123136)

It probably isn't all that hard to write the code for it, at least not by the standards of whoever developed the firmware for this product.

Making sure that "the code" is present, and actually functions, on god-knows-how-many motherboards, each with its own BIOS horror show, is probably pretty tricky.

By far the easiest way is simply emulating a SATA controller; but then you would lose out on the assorted Fusion-io special sauce and might as well just buy the cheaper Intel drives and plug them into your existing SATA ports.

Re:Still can't boot off of it. (1)

petermgreen (876956) | more than 4 years ago | (#30123472)

PCI (including PCIe) devices can supply a BIOS extension ROM to make themselves bootable; afaict this is how most SCSI and SATA cards make themselves bootable.

Still workable (1, Interesting)

ciroknight (601098) | more than 4 years ago | (#30123060)

You don't really still need the spinning media. There's a cheap, incredibly easy, and reasonably fast medium that's perfect for booting your computer, and your computer is loaded with ports for it. It's called a USB thumbdrive.

It's pretty simple, actually: they're cheap and easily available in all kinds of sizes, ranging from "I just need to boot Linux" (256MB) to "I want all of my apps on it too" (32GB+); they're writable, so you can update the OS; and you've likely got a multitude of internal headers in your computer that go completely wasted because they're not connected to anything (and a pigtail for one of these is a nickel at a computer store, if your motherboard didn't come with a few in the box). Just plug in the pigtail, plug in the USB drive, install your OS on it, and be done. You can choose to swap to it or the faster media at your own discretion.

Re:Still workable (1)

ShooterNeo (555040) | more than 4 years ago | (#30124160)

Erm. Booting Windows 7 off of a USB thumbdrive? (you'd need that 32GB model)

Dunno, doesn't sound like a very good idea. The OS is huge, and needs lots and lots of IO accesses, both for booting and during normal operation. Thumbdrives generally aren't really designed for that kind of continuous use. And finally, the slowdown from waiting to boot would possibly cause more lost time than you'd gain from having an $800 PCI-express card for your application files.

Re:Still workable (1)

karnal (22275) | more than 4 years ago | (#30125694)

In this instance, "booting the PC" doesn't necessarily mean "loading all of the system files." Mainly just means getting the system up to the point that you're then pulling system (e.g. \windows files) off of the SSD.

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30123802)

The ioXtremes can handle a lot more I/Os per second than the X25 can; there is a lot of value there for some applications. You also need to consider that an X25 is SATA, which means it requires a controller as well. If you really want to get the full potential out of X25s, you can't just use the on-board SATA controller. A good hardware RAID SATA controller that can push ~800MB/s is going to run at least $800. When it's all said and done, the ioXtremes really are not that much more expensive than an X25-based solution when you look at the total cost, and they can handle significantly more I/Os. The other tradeoff is capacity (unless you have a lot of PCI-E slots in your box, you will probably be able to build a larger array of X25s in your chassis).

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30124586)

A bootloader is all that has to be on a bootable HDD (or CD, USB stick, DVD) to actually boot. The OS (including Windows) and the rest can be wherever you wish them to be (bootable drive or not).

Re:Still can't boot off of it. (0)

Anonymous Coward | more than 4 years ago | (#30127312)

It is incredible that they can charge so much and still can't write a stupid BIOS extension to make the card bootable. Makes me wonder how good their Windows driver is.

Re:Still can't boot off of it. (1)

Just Some Guy (3352) | more than 4 years ago | (#30128906)

On paper, I don't think the performance difference between this and something like an Intel X-25m is going to justify the 4 fold price difference.

This is the perfect caching layer for ZFS. One command to insert it as a read cache between the OS and a big array can make a huge difference [sun.com] in IOPS. I can't easily convince my boss to buy a machine with 80GB of RAM that will be used for nothing but filesystem caching, but I wouldn't hesitate to ask him for a PCIe card to drop into the servers we already have.
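The "one command" being alluded to is presumably ZFS's cache-device (L2ARC) support. A sketch, where the pool name and device node are assumptions:

```shell
# Attach a PCIe SSD as an L2ARC read cache on an existing pool
# ("tank" and /dev/fioa are assumptions for illustration):
zpool add tank cache /dev/fioa
# Verify - the device shows up under a "cache" section:
zpool iostat -v tank
```

Reads that miss RAM (the ARC) are then served from the SSD instead of the spinning array, which is where the IOPS win comes from.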

Well (4, Insightful)

ShooterNeo (555040) | more than 4 years ago | (#30122480)

First off, late in the article they show that game level load times are faster with these PCIx SSDs. Left 4 Dead loads about twice as quick with the Fusion ioXtreme. So the end user would notice a difference (especially as time goes on and apps become more and more bloated).

One thing this product effectively illustrates is that SATA 6 is already obsolete. All this card really is, is the same grade of memory chips that goes in a lesser SSD like an Intel X-25M; the difference is that the controller gangs together 25 channels instead of just 10 like the Intel product. The controller isn't even that high-performance a part - it's an FPGA. An ASIC version of the chip could be cheaply fabbed on technology several generations old. So, in the long run, the cost to design and manufacture a PCIx SSD is virtually identical to the cost of a SATA SSD. And SATA 6 is already too slow for SSDs to use (and too fast an interface for a mechanical hard drive).

All in all, I predict that in a few more years, basically all SSDs sold will use a PCIx interface to connect to the host PC. Laptop manufacturers will have to change their internal mounting scheme slightly. And, prices should fall drastically from the $900 this IoXtreme is MSRPing at.

Re:Well (0)

Anonymous Coward | more than 4 years ago | (#30123090)

I'm nitpicking, but PCIx is not PCI Express. PCIx is a bus commonly found in servers and used for HBA's, NIC's, etc, and is compatible with PCI. PCI-E is the bus that replaced AGP and is intended to eventually replace PCI and PCIx (for example the FC HBA and the SAS controller in my new Precision T5500 workstation are PCI-E rather than the PCIx that was in my old PowerEdge 2650). PCI-E is serial rather than parallel like PCI(x).

Re:Well (1)

petermgreen (876956) | more than 4 years ago | (#30124212)

is intended to eventually replace PCI and PCIx (for example the FC HBA and the SAS controller in my new Precision T5500 workstation are PCI-E rather than the PCIx that was in my old PowerEdge 2650)
Afaict PCIe has already practically killed PCIx; take your Precision workstation, for example: four PCIe slots but only one PCIx.

At the low end, PCIe x1 cards/slots don't seem to be doing so well though; while lots of machines have at least one slot, I don't think I've ever seen a card in person (and the cards I've seen on suppliers' websites are generally more expensive and with less choice than PCI versions). It seems at the low end that most stuff has either moved to USB or been integrated onto the motherboard (though when integrated on the motherboard it is often connected via PCIe x1).

Re:Well (1)

Agripa (139780) | more than 4 years ago | (#30128918)

At the low end, PCIe x1 cards/slots don't seem to be doing so well though; while lots of machines have at least one slot, I don't think I've ever seen a card in person (and the cards I've seen on suppliers' websites are generally more expensive and with less choice than PCI versions). It seems at the low end that most stuff has either moved to USB or been integrated onto the motherboard (though when integrated on the motherboard it is often connected via PCIe x1).

I have a PCIe x4 RAID controller and an inexpensive PCIe x1 Intel network adapter. Besides disk controllers and network adapters, I have seen various I/O cards including serial/parallel with PCIe x1 interfaces. In my experience at least, USB based serial and parallel converters are prone to problems even if you get past driver and operating system support issues.

Re:Well (1)

Fulcrum of Evil (560260) | more than 4 years ago | (#30123930)

what'd be cool is multi-channel SATA - if the host can see that one device is on the other side of multiple channels, it can just bond them together and send/receive data on whatever I/F is free at the moment.

Re:Well (1)

ShooterNeo (555040) | more than 4 years ago | (#30124022)

I guess. The thing is, PCI express x4 is perfect for the job. Another poster mentioned that modern machines often hang the SATA controller off of the PCI express bus anyways...might as well reduce the complexity. All the interface chips have been out for years, and are very cheap and ready to go. The only missing element is that you do need a very high performance design for the drive controller on your SSD in order for it to be worth it.

Re:Well (1)

DRMShill (1157993) | more than 4 years ago | (#30126194)

Actually I think these observations only serve to reinforce Professor John Frink's predictions that in 10 years time SSDs will be twice as fast, 10,000 times the physical size and will be so expensive that only the five richest kings of Europe can afford it.

One major issue with it (1)

InvisiBill (706958) | more than 4 years ago | (#30122486)

Unfortunately, a bit of a let-down for some might be, that the product still currently can't be utilized as a boot volume.

That means you still need some other drive (probably an "old" SATA SSD) to boot from. You can then load all your apps (and probably even some parts of the OS with a little hacking) onto this beast, but you still can't use it as your primary drive.

Fusion-io assures us that this feature will be supported in future driver and/or firmware revisions but also didn't commit to a schedule for that roll-out just yet.

Hopefully it comes along soon and at no cost for the early adopters of this item. I'd love to see these become the standard, but it doesn't really fit for me at the moment. As stated above, the jump from HDD to SATA SSD is a much larger percentage increase than SATA SSD to PCIe SSD, and cheaper too.

Re:One major issue with it (1)

XanC (644172) | more than 4 years ago | (#30122718)

You just need to load the kernel from some other medium. An old hard drive, a USB stick, an old flash card or something.

Unless you're running a truly backwards OS like Windows. Then, yeah, you have to put a lot of stuff on your boot drive.

Re:One major issue with it (1)

Rockoon (1252108) | more than 4 years ago | (#30122728)

The unfortunate part is that there is no technical reason for a PCIe device not to appear to be an additional drive controller, and thus be bootable. Back in the day my first HD was a 32MB "Hard Card" that simply slotted into a 16-bit ISA slot.

Re:One major issue with it (1)

blackraven14250 (902843) | more than 4 years ago | (#30123848)

As has been said before, it's the ioXtreme's driver that helps provide this performance, and using a standard driver would greatly diminish this speed advantage.

Re:One major issue with it (1)

Rockoon (1252108) | more than 4 years ago | (#30124174)

Explain to me why the ioXtreme's driver is mutually exclusive with a regular one.

It sounds to me more likely that they skimped big time on the hardware end, than that they met up with a technical limitation.

Re:One major issue with it (1)

TheRaven64 (641858) | more than 4 years ago | (#30128302)

Irrelevant. The same is true of most IDE drives. You can access them via BIOS interrupts, but you get poor performance. Bootloaders do this because it's incredibly simple and they only need to load a kernel (or a second-stage bootloader) which can then load a real driver. This then bypasses the BIOS driver and has better performance.

Re:One major issue with it (1)

Predius (560344) | more than 4 years ago | (#30122896)

I've got to try this again, but back in the day you could install on a drive that Windows had a driver for but the BIOS couldn't boot, as long as you had a small NTFS/FAT partition on a drive the BIOS COULD boot to hold the bootloader and driver. So your primary drive/OS would live on the SSD, and that legacy pile of junk hanging off your ATA port could be a tired piece of CF for all Windows cared.

Price ? (1)

dbcad7 (771464) | more than 4 years ago | (#30122528)

Looking at similar items' pricing - sorry, I don't care if it displays information before I think to ask for it.

For about $900 (1)

adisakp (705706) | more than 4 years ago | (#30122726)

For about $900, or the cost of the Fusion ioXtreme 80GB card, I bought two Intel 160GB SSD drives that I have in a RAID 0 configuration. It's very fast and 4X the capacity for the same price. Oh, and it's bootable.

Re:For about $900 (1)

maxume (22995) | more than 4 years ago | (#30123054)

Given that the SSDs are very nearly striped at the block level anyway, I can't imagine that RAID 0 is adding much more than flakiness.

Re:For about $900 (1)

Celandro (595953) | more than 4 years ago | (#30123740)

You are uninformed. The drives are fast enough that they hit the cap of a single SATA connection.
Here's a review of 16 Intel drives in raid-0
http://www.tomshardware.com/reviews/ssd-6gb-raid,2388.html [tomshardware.com]

It's not quite 1600% faster, but it's about 1300% faster than the peak transfer rate of a single SATA connection.

Then again, if you really wanted performance for cheap, you could get eight of the new 40GB Kingston (Intel-based) drives and RAID-0 them for the same price as the Fusion ioXtreme card. I'd challenge someone to come up with a better-performing solution for ~$900.
Note: I have no idea if there is an 8-port hardware RAID-0 card available anywhere that can do 1.5GB/s, but if there is, I'd love to see it. Software RAID-0 should still beat the pants off the Fusion drive.
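Back-of-envelope, the aggregate-throughput claim works out like this (a rough sketch with assumed per-drive figures typical of the era, not a benchmark):

```python
# Ideal RAID-0 sequential throughput: per-drive rates add up until
# some bus or controller cap is hit. Figures below are assumptions,
# not measurements.

def raid0_seq_mb_s(n_drives, per_drive_mb_s, cap_mb_s=None):
    total = n_drives * per_drive_mb_s
    return min(total, cap_mb_s) if cap_mb_s is not None else total

# Eight ~200MB/s drives on uncapped software RAID-0:
print(raid0_seq_mb_s(8, 200))        # 1600
# The same array behind a controller that tops out near 1.5GB/s:
print(raid0_seq_mb_s(8, 200, 1536))  # 1536
```

Real arrays fall short of this ideal (stripe overhead, controller limits, non-sequential workloads), which is why the quoted 16-drive result is ~1300% rather than 1600%.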

Re:For about $900 (1)

maxume (22995) | more than 4 years ago | (#30124158)

Peak transfer isn't a particularly interesting workstation benchmark (If I were chasing performance, I might put a bunch of spinning disks in RAID 0 to cut down on latency, but the RAID isn't going to make the USB drive I am copying files to any faster, so the transfer rate isn't really that interesting).

And really, I wouldn't be shocked if OP was using software RAID.

Re:For about $900 (1)

adisakp (705706) | more than 4 years ago | (#30131674)

FWIW, you can get near-linear scaling on many motherboard RAID controllers with SSDs, up to 3 drives. You may get a boost from a 4th drive as well, but it's not as large (some motherboard RAID controllers top out at around 666MB/s, and 3 Intel SSDs will push that limit). As a matter of fact, with fewer than 4 drives, the difference in speed between built-in motherboard RAID and dedicated HW RAID is almost indistinguishable.

There are plenty [benchmarkreviews.com] of benchmarks [hothardware.com] on the net if you look for them that show both a large speedup in transfer rates and in IOPS with MB-based RAID and SSD's.

BTW, your USB drive copying example is flawed: by that logic, you should never buy a drive that does more than about 30MB/s, because that's currently where USB tops out. Transfer rate is important for other things, such as processing large video files, multimedia creation, loading large datasets (video game levels), etc.

Re:For about $900 (1)

maxume (22995) | more than 4 years ago | (#30132160)

Incomplete is probably a better word than flawed. The context is comparing the speed boost of going from a spinning disk to an SSD, or to a couple of SSDs in a RAID setup, and the copying example is just a case in my usage where there really isn't any difference between the two.

As you pointed out in your other reply, I was wrong about the benefits of the RAID setup, but I still have trouble looking at it from anything other than a cost/benefit perspective (where, again, for me, the 10 seconds that the RAID saves costs about the same amount as the presumably much larger amount of time that simply switching to an SSD would save).

We obviously have different views of the costs, as you own a mid-level SSD setup and I don't own one at all.

Re:For about $900 (0)

Anonymous Coward | more than 4 years ago | (#30127762)

Then again, if you really wanted performance for cheap, you could get eight of the new 40GB Kingston (Intel-based) drives and RAID-0 them for the same price as the Fusion ioXtreme card. I'd challenge someone to come up with a better-performing solution for ~$900.
Note: I have no idea if there is an 8-port hardware RAID-0 card available anywhere that can do 1.5GB/s, but if there is, I'd love to see it. Software RAID-0 should still beat the pants off the Fusion drive.

The LSI 9260 card works; lots of OCZ users have gotten insane performance using it with either 4 or 8 OCZ SSDs. But you're making a very common mistake in assuming that the 40GB Kingston drives with Intel controllers are as fast as the 80GB Intel drives. They are not. They're cheaper because not only do they have half the flash, they also have half the channels (5 instead of 10). So you could RAID-0 a pair of 40GB Kingston drives and performance would be on par with a single 80GB Intel.

Re:For about $900 (1)

adisakp (705706) | more than 4 years ago | (#30131326)

Given that the SSDs are very nearly striped at the block level anyway, I can't imagine that RAID 0 is adding much more than flakiness.

I actually tried both single-drive and RAID-0 in my Vista configuration. The single drive took about 20 seconds to boot (once POST completes); RAID-0 takes about 10 seconds. So it's twice as fast based on a simple real-world timing (Vista boot speed).

Latency (0)

Anonymous Coward | more than 4 years ago | (#30122858)

No difference in rated read latency over SATA SSDs. Bummer. Latency is the primary improvement SSDs have made over mechanical drives.

Two years from now, these will cost $25 (1)

Fantastic Lad (198284) | more than 4 years ago | (#30124294)

And five years from now, they'll be dusty leftovers found in plastic bins at the local electronics surplus shop. If you can even find them.

Ten years from now, people will hold them up and squint at them and wonder what they were originally built to do. Computer cards all look the same. The only notable thing about these ones is that they don't have any ports on the back. After a couple seconds of interest, they'll get tossed back into the bin.

No real point to this post, other than the "gosh" factor. It just still amazes me how quickly consumer tech ripens and rots.

-FL

Re:Two years from now, these will cost $25 (0)

Anonymous Coward | more than 4 years ago | (#30124720)

Really, all this is, is a non-bootable hardcard.

Remember those? They were somewhat popular in the 1980s and early 1990s: little ISA/EISA cards with a 20 to 40MB hard disk attached.

Looked kinda like this? [computermuseum.li]

To me, this is just a rehash of 20-year-old technology that has been mercifully forgotten by today's generation.

If you want something a little more interesting, look at this... [acard.com] Or this. [acard.com]

Both are SATA, but are potentially user-upgradeable. The latter is definitely more price-competitive per GB, AND it's bootable!

Pointless! for now... (1)

SirAstral (1349985) | more than 4 years ago | (#30124398)

The price tag versus the capacity and limitations makes this a worthless purchase for any serious-minded individual.

Hot-swap isn't really a viable option for failed devices.
RAID, if possible, would be neither conventional nor standardized.
The price tag is completely stupid, especially when you can have an Intel X25 80GB for much less.

Most people are awe-inspired and fooled by the grand total throughput of this thing at 800MB/s. Let me tell you, that is not really all that impressive; just 8 HDDs could turn in that number on a pure linear read test. Yes, I know some of you are going to say, well, that's 8 drives and this is just 1! Let me let you in on a little secret: internally, SSDs are effectively RAID-0 devices, which is how they get really fast at reading large amounts of data. However, the truth comes out when you put an SSD through a 1KB read & write test: you can bring even a good SSD down to 5MB/s of throughput. To put that in perspective, the very same test on a HDD can easily bring it down to 0.5MB/s.

Even if a single HDD could turn out a terabyte per second, it still would not be able to touch an SSD in real or perceived performance, and here is why. Go to your OS system folders and look at how big the files are: over 50% of the system files are under 100KB in size. Now, how fast can an HDD get such a file to you vs an SSD? Keeping it simple, a good Raptor HDD will take about 3ms and the SSD about 0.1ms. That is more than just an order of magnitude; it is easily 30 times faster than the HDD. Now let's pretend there are 5,000 of those files to read.

HDD read:
5,000 x 3ms = 15,000ms, i.e. 15 seconds of waiting to get all those files opened.
If each file is 100KB, then you will have read 500MB of data over 15 seconds, which is roughly 33MB/s. Even though drives like this can do 120MB/s, you only got just over a quarter of the drive's peak.

SSD read:
5,000 x 0.1ms = 500ms, i.e. about 0.5 seconds of waiting to get all those files opened.
If each file is 100KB, then you will have read 500MB of data in under a second, which is roughly 1,000MB/s. Of course, a SATA bus can only carry about 260MB/s give or take, so it would take more like 2-3 seconds to actually get that data to you, but that is still much faster than the HDD's latency allows, and you get much closer to your drive's peak limit.

The end result: in real-world applications, a single SSD can outperform even 16 RAID-ed HDDs if the transactions are small and numerous, which, coincidentally, happens A LOT on every system, whether desktop or database.
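The arithmetic in the two scenarios above can be sketched directly (using the comment's illustrative figures, decimal megabytes, and assuming per-request latency dominates small-file reads):

```python
# When files are small, total time is dominated by per-request latency,
# so effective throughput is (data moved) / (requests * latency).

def batch_time_s(n_files, latency_ms):
    return n_files * latency_ms / 1000.0

def effective_mb_s(n_files, file_kb, latency_ms):
    total_mb = n_files * file_kb / 1000.0   # decimal MB, as in the comment
    return total_mb / batch_time_s(n_files, latency_ms)

print(batch_time_s(5000, 3.0))                     # 15.0   (HDD, seconds)
print(round(batch_time_s(5000, 0.1), 2))           # 0.5    (SSD, seconds)
print(round(effective_mb_s(5000, 100, 3.0), 1))    # 33.3   (HDD, MB/s)
print(round(effective_mb_s(5000, 100, 0.1), 1))    # 1000.0 (SSD, MB/s)
```

The SSD's 1,000MB/s figure is the latency-limited ideal; a ~260MB/s SATA link would stretch the actual batch to roughly 2 seconds, still far ahead of the HDD.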

Now, considering that for $800 you can easily get four 60GB SSDs with 200MB/s read and 130MB/s write speeds each, you will not only have a very nasty fast array doing 800MB/s read and 520MB/s write in RAID-0, you will also have 240GB of space, about 3 times the capacity for the same price! You could even get some hot-swap bays and a RAID-5 for some redundancy and 180GB of space!

Face it, this sort of technology is just not 'ready' yet. Stick to the SATA bus; it still has plenty of bandwidth for LAS applications and is in no need of going anywhere!

Re:Pointless! for now... (0)

Anonymous Coward | more than 4 years ago | (#30130656)

You sound intelligent, so I'll assume you'll keep researching the subject and you'll eventually realize the flaws in your argument.

Go ahead and string a bunch of regular SSDs together and see if your theories hold water. Without wear-leveling, grooming, and other advanced controller functions, they'll fail long before the Fusion-io drive.

Cost is not only measured in $/GB. $/transaction counts in a lot of environments. Power consumption matters in a lot of environments. Fusion-io drives are already more cost effective than spinning media, as illustrated in the MySpace case study they released a few weeks ago, and they're only going to get better. The math is pretty simple when 1 server with Fusion-io drives can out-perform 10 servers with other media.

Of course, it isn't for everybody. Yet.

The speed has limited usefulness (1)

m.dillon (147925) | more than 4 years ago | (#30125722)

You have to ask yourself: what do you need that kind of speed for, versus a more portable, hot-swappable, and likely longer-lived SATA/eSATA standard? Maybe a transactional store for a database, but that is pretty much it. A PCIe-style interface will be relegated to those situations where extreme performance is required, and such devices will always be priced at a premium over their SATA counterparts simply by virtue of their lower production volume.

I do have an interest in how well an SSD could be used to expand the effective physical memory of a machine under load, say for an applet server. Another possible use would be as a disk cache fronting slower multi-terabyte HD storage. A PCIe-based device might be an improvement over SATA for that sort of thing, though probably not enough of an improvement to justify the difference in cost. The real limitation to using a flash device as another caching layer is not performance but wear on the flash chips.

-Matt

Re:The speed has limited usefulness (1)

Techman83 (949264) | more than 4 years ago | (#30126114)

Its speed and usefulness are limited only by your imagination. I know we have workloads here where the data set mightn't be huge, but those extra IOPS would make a huge difference. Cost? Well, it's irrelevant; all new tech is expensive when it's released. Hell, even ordinary SSDs are still way more expensive than their spinning-platter brethren.

I see your defeatist attitude, and raise you one positive and thoroughly excited attitude that wonders where the tech world will go next.

Re:The speed has limited usefulness (1)

sirdankus (1004283) | more than 4 years ago | (#30126280)

In the scope of a consumer product, I can't think of many common workloads that would really benefit from a PCIe interface. The innovation of using PCIe as an SSD interface creates a nice middle point between DRAM and RAID volumes, but such a middle point just seems unnecessary in the consumer market right now. The biggest barrier to SSD adoption for most people is price, so it's not defeatist to say that price should be the focus of products that actually get to market. As previous posters have pointed out, the "wow factor" of SSDs, even bound by SATA limitations, is already plenty to convince consumers that there's something worthwhile there. Trying to push the wow factor further is much closer to the diminishing-returns part of the graph than lowering prices is.

Re:The speed has limited usefulness (1)

Techman83 (949264) | more than 4 years ago | (#30126440)

And what about the businesses that aren't big enough to go beyond the "consumer" scope? What about universities with limited budgets? Go look at the latest and greatest "consumer" Intel processor [newegg.com]; I can guarantee you won't have much change out of $1000. Yet give it 12 months and it'll probably be less than a third of that price (maybe sooner, I don't keep up with pricing).

With that kind of attitude, 640K would be enough for anyone.

Re:The speed has limited usefulness (1)

sirdankus (1004283) | more than 4 years ago | (#30128708)

First, are there really small businesses who need this kind of performance? Same question for universities.

The CPU comparison is just apples to oranges. The primary competitor for SSDs, right now, is HDDs. So HDDs:SSDs = x86 CPUs:? Even if I take your analogy as valid, the only reason processors come down in price so fast is that manufacturers sell about a bazillion (rough estimate) of their actually affordable processors, recoup their R&D, and optimize their yields.

> With that kind of attitude 640K would be enough for anyone.
Because I opine that they are innovating in the wrong direction, I therefore also believe no one should ever innovate?

Re:The speed has limited usefulness (1)

Techman83 (949264) | more than 4 years ago | (#30128802)

Tell me, what is the right direction then?

Re:The speed has limited usefulness (1)

sirdankus (1004283) | more than 4 years ago | (#30130164)

For whatever reason, SSDs are still expensive. If the reason is material and basic process costs, then I concede, and will agree that improving the value of an SSD should be done through improved performance. However, I don't think that is the issue, so the right direction ought to be bringing costs down without entirely sacrificing SSD advantages.

Some manufacturers are doing this by pairing premium controllers with non-premium MLC NAND (a la OCZ Agility). I'll say it again: transfer rates are important, but the current SATA limits are already high enough that an SSD operating at the SATA limit is ludicrously expensive (for consumers). Therefore, it makes little sense to go beyond SATA right now.

Re:The speed has limited usefulness (1)

Techman83 (949264) | more than 4 years ago | (#30137628)

And if you're ahead of the curve, your product is that much more polished than your competitors' by the time it becomes relevant. Judging by the article, these devices aren't using dedicated chips yet (they use programmable ones instead), which bumps up the cost by around $200. I'd reckon that once they finalise things like booting, that chip will be replaced with a dedicated one. You could see the price drop by a third in 6 months.

I'd say take a walk in the real world; not everything is perfect and cheap right from the word go. You may want something different, but I can see where this could be used. I know someone who would love one: the guy who runs Cacheboy [cacheboy.net]. He doesn't have the budget for enterprise stuff, but he could certainly give one of these a good thrashing.

Re:The speed has limited usefulness (1)

petermgreen (876956) | more than 4 years ago | (#30127600)

In the scope of a consumer product, I can't think of many common workloads that would really benefit from a PCIe interface.
Well, the review showed it cutting game load times in half compared to a conventional SSD. Is that worth adding $1000 to the cost of your gaming rig? I personally don't think so, but I bet there are some gamers who think otherwise, just as there are some who will spend $1000 on a CPU and $700 each on their SLI graphics cards. These early adopters cover some of the R&D and hopefully bring prices down to levels acceptable to the rest of us.

Re:The speed has limited usefulness (1)

sirdankus (1004283) | more than 4 years ago | (#30130322)

A valid point. Although CPU and SLI vendors charge hilarious premiums for something that could be a real competitive advantage for a gamer, i.e. frame rate, it would be a harder sell to convince a gamer that 1337 loading times will lead to similarly 1337 headshot percentages.

Re:The speed has limited usefulness (1)

MistrBlank (1183469) | more than 4 years ago | (#30128734)

I wish people would stop jumping on the "wear on the flash chips" issue. It's not that big of a deal anymore, drop it people.

Re:The speed has limited usefulness (1)

mochan_s (536939) | more than 4 years ago | (#30133424)

That kind of speed is needed to run things faster. It's like asking who needs 16 cores; all they do is run things faster!

Less stuff will have to be loaded into RAM when the cost of a disk read isn't catastrophic; IO can substitute for computation (store precomputed textures instead of computing transformations on textures with imprecise fast routines), and we can get away from the mad sequentiality that's everywhere in high-performance computing.

RAIDing and striping hard disks requires huge enclosures and brings heat-dissipation and vibration problems. Doing similar things with flash memory, like ganging chips, is very simple and can all fit inside a tiny enclosure without heat or mechanical problems. If 16 cores is the way to go, then why not 16 independent flash disks inside a 2.5" enclosure, running like a Lustre disk array? Unlike a Lustre array, though, you don't have to ask for gobs of sequential data: random access is as fast as sequential access.

Hard disks are really good for storing enormous amounts of data and reading it back sequentially. The only time this fits general consumer demand is media servers. A fast SSD that stores the OS, app data, and user data, plus a secondary hard disk that stores large amounts of rarely accessed data that needs to be on hand, is the way to go.

Anyone else remember the SemiDisk? (1)

jtownatpunk.net (245670) | more than 4 years ago | (#30137546)

It was a RAM drive that went in the old Epson QX-10 and QX-16 computers. I remember when we dropped one of those in the old QX-10 and TP/M and ValDocs launched almost instantly. And two freakin' megabytes of storage. It was HUGE!!! And the battery backup could keep your data safe for a good 6 hours without power.
