
OCZ IBIS Introduces High Speed Data Link SSDs

CmdrTaco posted more than 3 years ago | from the zoom-zoom-zoom dept.

Data Storage 76

Vigile writes "New solid state drives are released all the time, and performance improvements have started to stagnate as the limits of SATA 3.0 Gb/s are reached. SATA 6G drives are still coming out, and some newer PCI Express based drives are also available for users with a higher budget. OCZ is taking it another step with a new storage interface called High Speed Data Link (HSDL) that extends the PCI Express bus via mini-SAS cables and removes the bottleneck of SATA-based RAID controllers, thus increasing theoretical performance and allowing the use of command queueing — vital to high IOPS in a RAID configuration. PC Perspective has a full performance review detailing the speed and IO improvements, and while initial versions will be available at up to 960 GB (and a $2800 price tag), the cost-per-GB is in reality competitive with other high-end SSDs once you get to the 240GB and above options."
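The cost-per-GB claim is easy to sanity-check. A rough sketch in Python (the $739/240GB figure appears in the comments below; the comparison SATA SSD price is a hypothetical 2010-era ballpark, not a quote from the review):

    # Back-of-the-envelope $/GB check, using prices cited in the summary
    # and in the comments below (approximate launch pricing).
    ibis = {240: 739, 960: 2800}          # capacity (GB) -> price (USD)
    for gb, usd in sorted(ibis.items()):
        print(f"IBIS {gb} GB: ${usd / gb:.2f}/GB")
    # A hypothetical high-end 256 GB SATA SSD at ~$700 for comparison:
    print(f"SATA SSD 256 GB: ${700 / 256:.2f}/GB")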


76 comments


first! (2, Funny)

Anonymous Coward | more than 3 years ago | (#33733652)

Thanks to the high speed link SSDs.

Re:first! (1)

Corwn of Amber (802933) | more than 3 years ago | (#33740494)

WAAAHAHAAA!

SSDs huh? WHEN I CAN AFFORD ONE.

FUCK THEM IN THE FACE WITH NERVE-RACKING MONOCABLES.

HATE HATE HATE HATE

SSDs COST NOTHING WHATSFUCKINGEVER TO PRODUCE COMPARED TO HDDs

WHY DO THEY COST A KIDNEY PER TERABYTE???

Actually... Re: SSDs huh? WHEN I CAN AFFORD ONE. (1)

Fubari (196373) | more than 3 years ago | (#33741776)

Corwn wrote:

WHY DO THEY COST A KIDNEY PER TERABYTE???

Actually, that is a pretty good deal...

Re:first! (1)

game kid (805301) | more than 3 years ago | (#33741900)

HATE HATE HATE HATE

SSDs COST NOTHING WHATSFUCKINGEVER TO PRODUCE COMPARED TO HDDs

WHY DO THEY COST A KIDNEY PER TERABYTE???

The manufacturers still need to finalize the HAH (Have A Heart) standard. They approved version 1.0, but have to revise it due to a grammar error in section 45G, paragraph 8.

I hear 1.1 will introduce two extensions called "soul" and "conscience" as well.

Bad hardware design. (5, Insightful)

GiveBenADollar (1722738) | more than 3 years ago | (#33733696)

From the website: 'Whatever you do, don't plug an HSDL device into a SAS RAID card (or vice versa)!'

Although I dislike proprietary connectors for generic signals, I dislike interchangeable connectors for different signals even more. Can someone with a bit more knowledge explain why this could ever be a good idea, or how it isn't going to smoke hardware?

Re:Bad hardware design. (1)

marcansoft (727665) | more than 3 years ago | (#33734366)

I haven't checked the details, but I'm willing to bet that the physical differential signaling levels used for PCIe (LVDS) and SAS/SATA are pretty similar. As long as they at least kept the transmit/receive pairs in the same place, plugging in the wrong type of device will probably just cause error reports from the controller or, at worst, severely confuse the device and/or controller, but won't cause any permanent damage.

Re:Bad hardware design. (4, Informative)

Relyx (52619) | more than 3 years ago | (#33734396)

From what I gather it was cheaper and quicker for OCZ to co-opt an existing physical standard than roll their own. All the customer needs to do is source good quality SAS cables, which are in plentiful supply.

Re:Bad hardware design. (0)

Anonymous Coward | more than 3 years ago | (#33734686)

I guess they assume that people using this technology are competent enough to know what they are doing at all times. The 'measure twice, cut once' crowd.

Fried a few devices, have we?

Re:Bad hardware design. (1)

scdeimos (632778) | more than 3 years ago | (#33741062)

It's bone-headed is what it is. It's like some manufacturer saying "our notebook is going to start supplying 110 VAC at these connectors that just happen to look like USB host ports. Whatever you do, don't plug USB devices into them!"

I know why they've done it, though: designing and testing new connectors before going to mass production is expensive in time and labour. It saves them money. And it'll bite customers when they plug the wrong devices in and find they've blown their warranty along with their PC/card/drive.

Re:Bad hardware design. (1)

darkwhite (139802) | more than 3 years ago | (#33748126)

It looks like scaremongering from this "PC Perspective" outlet. They never say they tried it, and I'd be willing to bet that nothing would happen if I plugged it into the wrong port.

sata (the channel) is NOT the issue (-1, Troll)

TheGratefulNet (143330) | more than 3 years ago | (#33733754)

come on, don't blame anything like SSD slowness on SATA, the channel interconnect.

there isn't a drive in consumerland (spinning or otherwise) that can use a full SATA channel on its own.

it's never been about the channel; it's about the internals and how fast internal reads/writes/erases happen.

don't pin this on SATA. totally misplaced if you think SATA is the limiting factor.

(even SATA-150 is faster than SSDs are, sustained.)

Re:sata (the channel) is NOT the issue (4, Insightful)

Kjella (173770) | more than 3 years ago | (#33733840)

Wow, what a clueless post. SATA-150 can't sustain more than 150MB/s, and there are many SSDs that go beyond that. The fastest Crucial even goes beyond SATA 3 Gbps on sustained reads. Working for an HDD manufacturer or something?

Re:sata (the channel) is NOT the issue (0)

BitZtream (692029) | more than 3 years ago | (#33733892)

In the lab, maybe.

Show me one that does it in the real world.

I own a few of them, they aren't that fast, the bus isn't the issue, regardless of what you've read on some website.

Not that spinning platters are faster, but the bus has been faster than the disks for ages and will continue to be for ages unless something major changes, which isn't likely considering the drives have to use electrical signalling just like the bus does.

Ok (4, Informative)

Sycraft-fu (314770) | more than 3 years ago | (#33733946)

How about the drives in this review?

http://www.pcper.com/article.php?aid=1007&type=expert&pid=4 [pcper.com]

Looks to me like one of them is breaking 600MB/sec which is faster than even SATA-3 can handle.

None of this is to mention access time/overhead which is another reason to go to PCIe directly. Rather than doing PCIe -> SATA -> drive's controller, cut out the middle man. I'm not saying it is the best idea in all cases, but it seems to work when performance needs to be the absolute highest.

Re:sata (the channel) is NOT the issue (1)

blackraven14250 (902843) | more than 3 years ago | (#33734038)

I have a pair of two-year-old cheapo SSDs in RAID0, and they're stuck at the limits of my SATA bus. I can easily imagine there being a single drive that will outpace the SATA bus.

Re:sata (the channel) is NOT the issue (1)

TheSunborn (68004) | more than 3 years ago | (#33734462)

Are you sure it's the bus and not the SATA controller itself? I mean, SATA is (or should be, with a good controller) 3 Gbit per port, not a total of 3 Gbit per controller.

Re:sata (the channel) is NOT the issue (1)

blackraven14250 (902843) | more than 3 years ago | (#33734870)

Just remember, 3Gb/s converts to 300MB/s once you account for 8b/10b encoding, so maxing it out really isn't too bad. The current Crucial RealSSD C300 tops out at 350 MB/s. That's an MLC drive; an SLC drive has the potential to be double that speed. Either way, you're definitely maxing out the bus with that drive on a SATA1 connection.
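The conversion trips people up throughout this thread, so here it is spelled out; a minimal sketch using the standard 8b/10b figures (nothing drive-specific assumed):

    # SATA is 8b/10b encoded: 10 bits on the wire per data byte, so the
    # usable rate is the line rate divided by 10, before protocol framing.
    def sata_mb_per_s(line_rate_gbps):
        return line_rate_gbps * 1000 / 10   # MB/s

    for name, gbps in [("SATA 1.5Gb/s", 1.5), ("SATA 3Gb/s", 3.0), ("SATA 6Gb/s", 6.0)]:
        print(f"{name}: {sata_mb_per_s(gbps):.0f} MB/s")
    # SATA 1.5Gb/s: 150 MB/s
    # SATA 3Gb/s:   300 MB/s
    # SATA 6Gb/s:   600 MB/s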

Re:sata (the channel) is NOT the issue (1)

KingMotley (944240) | more than 3 years ago | (#33734628)

My single (older) Intel SSD will saturate a SATA channel and come fairly close to saturating a SATA-2 channel. I know there are SSDs out there that are 2-3 times as fast as mine.

Re:sata (the channel) is NOT the issue (1)

Bengie (1121981) | more than 3 years ago | (#33736970)

The fastest SSDs that you can buy can do 1400MB/s (megabytes) writes and 1500MB/s reads.

Many new SSDs, even cheap ones, will be doing ~220-280MB/s reads and ~180-200MB/s writes. And I do mean cheap ones. The new 22nm drives will come out before the end of the year.

Re:sata (the channel) is NOT the issue (1)

John Napkintosh (140126) | more than 3 years ago | (#33738292)

But those are not SATA, as far as I know. They are PCIe SSDs, which is essentially what you're building with the IBIS solution; rather than packaging both together on a board, you're separating the actual storage from the PCIe "controller" and sending the signaling over a cable.

Given the choice between the two, I'll opt for the solution that lets me get a controller with the number of ports I want. This opens up the possibility of doing RAID, as in their 4 ports/4 drives solution. It may seem silly to do that instead of just buying the appropriately-sized single drive or PCIe SSD, but it would also be nice to be able to swap just the drive as capacities increase.

Re:sata (the channel) is NOT the issue (1)

AHuxley (892839) | more than 3 years ago | (#33742790)

Yes, great point. With this system you can grow. No stranded 1st-gen PCIe card that won't work with 'the next version' of the same brand of card in the slot next to it. With this you just keep pushing in as many SSDs as you need.

I work for an HDD company (0)

Anonymous Coward | more than 3 years ago | (#33734414)

I know that SSDs can saturate a 3Gb cable. That is why we are now working on 6Gb, as well as on SSDs ourselves.

Still, while spinning media is so much cheaper, we don't see it going away.

Re:sata (the channel) is NOT the issue (5, Informative)

Emetophobe (878584) | more than 3 years ago | (#33733882)

there isn't a drive in consumerland (spinning or otherwise) that can use a full SATA channel on its own.

What about the ioDrive [fusionio.com]? They have to use PCIe because SATA isn't fast enough.

(even SATA-150 is faster than SSDs are, sustained.)

I think you're wrong. From http://en.wikipedia.org/wiki/Serial_ATA [wikipedia.org]:

As of April 2010 mechanical hard disk drives can transfer data at up to 157 MB/s, which is beyond the capabilities of the older PATA/133 specification and also exceeds a SATA 1.5 Gbit/s link. High-performance flash drives can transfer data at up to 308 MB/s which exceeds a SATA 3 Gbit/s link.

Re:sata (the channel) is NOT the issue (0)

Anonymous Coward | more than 3 years ago | (#33733886)

Looking at the avg. read speed (page 4), I'd say they are going faster than sata150 in most cases (assuming sata150 is about 150MB/s).

The fastest of the cabled SSDs top out around 250MB/s. The one speeding ahead is a PCI Express card (ioDrive 160), which seems to average around 620MB/s reads.

Re:sata (the channel) is NOT the issue (3, Insightful)

Surt (22457) | more than 3 years ago | (#33734434)

There is a whole cluster of consumer drives today pushing ~275MB/s out of SATA 3Gb's 300MB/s limit. That's safely within the range of 'SATA limited', allowing for a very small amount of controller overhead.

Re:sata (the channel) is NOT the issue (1)

drsmithy (35869) | more than 3 years ago | (#33736640)

There is a whole cluster of consumer drives today pushing ~275MB/s out of SATA 3Gb's 300MB/s limit. That's safely within the range of 'SATA limited', allowing for a very small amount of controller overhead.

Though it's highly questionable whether that is any sort of meaningful "limit" in real-world usage.

Re:sata (the channel) is NOT the issue (1)

Surt (22457) | more than 3 years ago | (#33737034)

It's a limit for me, but admittedly, I'm a software developer, so my usage is a bit different from the conventional. But I bet anyone who does video editing would love to stream data at that rate.

Re:sata (the channel) is NOT the issue (1)

drsmithy (35869) | more than 3 years ago | (#33742260)

It's a limit for me, but admittedly, I'm a software developer, so my usage is a bit different from the conventional.

How, though? What are you doing where a single drive maxing out at ~300MB/sec actually impacts your productivity?

Re:sata (the channel) is NOT the issue (1)

Surt (22457) | more than 3 years ago | (#33747502)

Compiling, deploying, starting complex server software. Several minutes, twenty or thirty times a day. Almost purely disk bound.

Re:sata (the channel) is NOT the issue (1)

drsmithy (35869) | more than 3 years ago | (#33751678)

Compiling, deploying, starting complex server software. Several minutes, twenty or thirty times a day. Almost purely disk bound.

Almost certainly by IOPS, though, not bandwidth.

Re:sata (the channel) is NOT the issue (1)

Surt (22457) | more than 3 years ago | (#33751748)

Mostly read bound, particularly in the server startup. Lots of large resources to be read and processed.
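Whether a workload like that is bandwidth-bound or IOPS-bound can be checked directly. A minimal sketch (the test-file path is hypothetical, and buffered reads through the page cache will flatter the numbers unless the cache is dropped first):

    import os
    import random
    import time

    PATH = "/tmp/testfile"   # hypothetical: any large pre-created file
    SIZE = os.path.getsize(PATH)
    BLOCK = 4096

    # Sequential read in 1 MiB chunks: measures bandwidth (MB/s).
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(1 << 20):
            pass
    print(f"sequential: {SIZE / (time.time() - start) / 1e6:.1f} MB/s")

    # Random 4 KiB reads: measures IOPS (requests per second).
    reads = 10000
    start = time.time()
    with open(PATH, "rb") as f:
        for _ in range(reads):
            f.seek(random.randrange(SIZE - BLOCK))
            f.read(BLOCK)
    print(f"random: {reads / (time.time() - start):.0f} IOPS")

If the sequential figure sits near the drive's rated throughput while the random figure is low, the workload is IOPS-bound, which is what the parent is suggesting.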

Re:sata (the channel) is NOT the issue (2, Interesting)

dave420 (699308) | more than 3 years ago | (#33737902)

You sure done fucked that one up, Sparky. SATA 6Gb/s works out to about 600MB/s usable, which, while decent, isn't good enough for many different uses. SATA 1.5Gb/s is only about 150MB/s, something easily surpassed these days.

I wonder how you know your SATA setup isn't at fault?

Familiar connectors (1)

suso (153703) | more than 3 years ago | (#33733812)

The connectors shown in the article look very similar to the multilane connectors you see on RAID controllers, like a 3ware RAID controller. Are they the same?

Re:Familiar connectors (2, Informative)

Joehonkie (665142) | more than 3 years ago | (#33733862)

Those are just very high-end SAS cables, so yes.

Re:Familiar connectors (2, Funny)

suso (153703) | more than 3 years ago | (#33733936)

If these are your idea of very high-end SAS cables/connectors, then you haven't met my friend Mr. $1million SAN.

Re:Familiar connectors (2, Informative)

elfprince13 (1521333) | more than 3 years ago | (#33734098)

Is that the Monster Cable version?

Re:Familiar connectors (1, Funny)

Anonymous Coward | more than 3 years ago | (#33735122)

No, it's the EMC version. Twice as expensive, only half as bling.

Re:Familiar connectors (1)

Gates82 (706573) | more than 3 years ago | (#33733872)

Yes, this is a plain-Jane SAS connector; as a comment above alludes, don't mix and match this drive with your RAID card.

--
So who is hotter? Ali or Ali's Sister?

Re:Familiar connectors (1)

suso (153703) | more than 3 years ago | (#33733876)

Never mind. Here is a con on the last page of the article: "HSDL cabling may be confused with mini-SAS cabling unless clearly marked."

Specifically, I'm talking about the SFF-8087 with iPass connector, as shown here [wikipedia.org].

Different...how? (1)

Straterra (1045994) | more than 3 years ago | (#33733856)

How is this any different from existing PCI Express SSD products? They both consume a PCI Express slot, and this one consumes a 3.5" drive slot as well. Am I the only one missing the point?

Re:Different...how? (0)

Anonymous Coward | more than 3 years ago | (#33733920)

Compared to the other PCIe SSD products, this one has one controller and one disk; the controller (not the one shown in the review) can have more than one disk connected.

Re:Different...how? (2, Informative)

Joehonkie (665142) | more than 3 years ago | (#33733930)

Probably. The point is that it's a whole new drive interconnect. They have another product that is a standalone card supporting 4 drives in a RAID. These drives only come with a card because it's a new interface technology and they assume you won't have a port for it yet. It's an open standard, so they're gambling on it eventually becoming the standard for SSDs and being built into motherboards and such.

Re:Different...how? (0)

Anonymous Coward | more than 3 years ago | (#33733982)

Yes.

One PCIe x4 per SFF-8087, I think (3, Insightful)

bill_mcgonigle (4333) | more than 3 years ago | (#33733888)

The illustrations all seem to show an x8 card, but I think what they're saying is that they multiplex a PCIe lane over each pair in the SFF-8087 cable. So eventually you'll be able to run x16 out of a card to your drive bay; use it now for a 4x4 config, and perhaps a single x16 config in the future.

In short, a slower PCIe extension cord using existing cables (as opposed to the oddball PCIe external cables [xtreview.com]). This will probably put pressure on mobo vendors to add more x16 slots. I regularly build storage servers with 16 and 24 drive bays, and it looks like the top end now is Tyan AMD boards with 4 x16 slots. I'd like to see, for instance, a SuperMicro with 6 PCIe x16 slots and dual Intel sockets (though I'm using AMD 12-core more and more lately). PCIe 3.0 is due out in a couple of months, so it will probably be there; OCZ could also update to the faster coding rate.

Re:One PCIe x4 per SFF-8087, I think (1)

KingMotley (944240) | more than 3 years ago | (#33734694)

Aren't EVGA's X58 Classified boards better, with 7 x16/x8 slots?

Re:One PCIe x4 per SFF-8087, I think (1)

bill_mcgonigle (4333) | more than 3 years ago | (#33734956)

Aren't EVGA's X58 Classified boards better, with 7 x16/x8 slots?

Is this part 141-BL-E759-A1? I see 4 x16 slots and no ECC.

Re:One PCIe x4 per SFF-8087, I think (1)

KingMotley (944240) | more than 3 years ago | (#33737994)

I was referring to part number 170-BL-E762-A1; it does not have ECC, however. It claims 4-way SLI, but that's because most SLI-capable video cards are two slots wide and eat up the slot between them; this board actually has 7 x16/x8 slots.

Re:One PCIe x4 per SFF-8087, I think (1)

KingMotley (944240) | more than 3 years ago | (#33738172)

However, if ECC is a requirement, you can check out this part: 270-WS-W555-A1, also known as the Classified SR-2.
7 x16/x8 slots
Dual CPU (Xeon 5500/5600)
Supports up to 48GB of DDR3

Re:One PCIe x4 per SFF-8087, I think (1)

bill_mcgonigle (4333) | more than 3 years ago | (#33740940)

That's a pretty sweet rig - I've got some friends doing scientific computing who can't get enough GPU in a system - they'd probably like this.

I have to say the EVGA site is very glitzy but not terribly helpful. I downloaded the "spec sheet" and it was a 1-page advertisement. Sigh. Newegg's specs say 3 of the slots are x8, but they look like x16 in the picture.

Even the ZFS guys insist on ECC for storage, but for a monster compute farm this looks awesome.

Re:One PCIe x4 per SFF-8087, I think (0)

Anonymous Coward | more than 3 years ago | (#33742726)

lol, I hope you don't think you get a full 16 lanes out of each of those slots... They're using dual NF200s. The cards just *think* they're running at full x16.

http://en.wikipedia.org/wiki/Intel_X58 [wikipedia.org]
"The X58 chipset itself supports up to 36 PCI-Express 2.0 lanes"

It will NOT go any faster than that. You're not getting 64 PCIe lanes (4*16).

Re:One PCIe x4 per SFF-8087, I think (1)

drsmithy (35869) | more than 3 years ago | (#33736496)

In short, a slower PCIe extension cord using existing cables (as opposed to the oddball PCIe external cables). This will probably put pressure on mobo vendors to add more x16 slots. I regularly build storage servers with 16 and 24 drive bays, and it looks like the top end now is Tyan AMD boards with 4 x16 slots. I'd like to see, for instance, a SuperMicro with 6 PCIe x16 slots and dual Intel sockets (though I'm using AMD 12-core more and more lately). PCIe 3.0 is due out in a couple of months, so it will probably be there; OCZ could also update to the faster coding rate.

I'm kinda curious: how often are you bus-bandwidth constrained, and in what circumstances?

Re:One PCIe x4 per SFF-8087, I think (1)

bill_mcgonigle (4333) | more than 3 years ago | (#33736930)

I think it's only on SLC SSDs - 250MB/s is close to a full SATA 3Gbps bus. But since those are the cache drives, more speed would be helpful.

6Gbps SATA should double that, but the PCIe card devices claim 1500MB/s. I don't usually work in the price ranges of those parts, though, and to be fair, those may just have built-in striping.

Re:One PCIe x4 per SFF-8087, I think (1)

drsmithy (35869) | more than 3 years ago | (#33742318)

I'm still curious as to what situations - outside of benchmarking - you're in where an x8 PCIe bus is constraining. Or even a 3Gb SATA port, for that matter.

Re:One PCIe x4 per SFF-8087, I think (1)

bill_mcgonigle (4333) | more than 3 years ago | (#33753192)

Just as I mentioned, the 250MB/s SLC cache drives. The zpool behind them is pumping out 700MB/s. It would be nice to have a big cache that could exceed the speed of the disks; the 1500MB/s PCIe cache drives do that.

Re:One PCIe x4 per SFF-8087, I think (0)

Anonymous Coward | more than 3 years ago | (#33739624)

I think it's funny that you believe you can add more bandwidth to a computer by adding more slots.

The bandwidth allocated to PCIe slots is CPU-limited; it's not a function of simply adding more slots. Look at the manual for any motherboard with 3 or more x16 slots. They all have conditions under which the physical x16 slots downgrade to x8, x4, x1, or even stop working. If you sum them all up, you'll see the available bandwidth on a four-x16-slot motherboard is no greater than a single x16 with two x1s (or something like that).
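A quick sanity check of that claim; the 36-lane figure is from the X58 spec quoted in the thread above, and the slot widths below are a hypothetical 7-slot layout:

    # X58 supplies 36 PCIe 2.0 lanes upstream (per the spec quoted above).
    # NF200 switches let slots share lanes; they don't add bandwidth.
    CHIPSET_LANES = 36
    slots = [16, 16, 16, 16, 8, 8, 8]   # hypothetical advertised widths
    print("advertised:", sum(slots), "lanes")     # 88
    print("upstream:  ", CHIPSET_LANES, "lanes")  # 36
    print(f"oversubscribed {sum(slots) / CHIPSET_LANES:.1f}x")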

Serial-Attached SCSI (2, Interesting)

leandrod (17766) | more than 3 years ago | (#33734060)

Why not just go SAS?

Re:Serial-Attached SCSI (4, Informative)

dmesg0 (1342071) | more than 3 years ago | (#33734270)

My question exactly. One miniSAS connector would give them 6Gb*4 = 24Gbps = ~2400GB/s (including overhead) - a lot more than enough bandwidth.

Maybe to save the cost of a SAS HBA (at least $200-300) and avoid paying royalties to T10?

Re:Serial-Attached SCSI (2, Informative)

Wesley Felter (138342) | more than 3 years ago | (#33734970)

Maybe to save the cost of a SAS HBA (at least $200-300)

That's the reason. OCZ found some really cheap obsolete Silicon Image PCI-X RAID controllers and PCIe-to-PCI-X bridge chips in a warehouse somewhere and decided to kludge together some "SSDs".

Re:Serial-Attached SCSI (1)

darkwhite (139802) | more than 3 years ago | (#33748150)

I like your scare quotes around what is, at the moment, the fastest single storage device on the planet (really just a RAID in an unusual package, but still).

Re:Serial-Attached SCSI (1)

Skal Tura (595728) | more than 3 years ago | (#33735486)

24*1024/8 = 3072MB/s, or 3GB/s.

24Gbps does not magically get 800x faster just because it is SAS.

Do not confuse bits and bytes.

Re:Serial-Attached SCSI (1)

dmesg0 (1342071) | more than 3 years ago | (#33739846)

Sorry, I meant ~2400MB/s or ~2.4GB/s (24Gbps divided by roughly 10 to take protocol overhead into account; that's the rule of thumb in SCSI: you get at most 400MB/s out of 4Gbps FC or 300MB/s out of 3Gbps SAS).

Not confusing bits and bytes, just a typo.

In any case, this throughput is currently theoretical: the best SAS HBAs are x8 PCIe 2.0 and limited by its bandwidth of 20Gbps (which is also divided by 10 because of 8b/10b encoding).
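That divide-by-ten rule of thumb, in one place; a minimal sketch (8b/10b encoding plus framing is why storage folk divide the line rate by 10 rather than 8):

    # SCSI-world rule of thumb: divide the line rate by 10 to get usable
    # MB/s, then multiply by the number of lanes for a wide link.
    def usable_mb(line_rate_gbps, lanes=1):
        return line_rate_gbps * 1000 / 10 * lanes

    print(usable_mb(4.0))     # 4Gb FC          -> 400.0 MB/s
    print(usable_mb(3.0))     # 3Gb SAS         -> 300.0 MB/s
    print(usable_mb(6.0, 4))  # x4 wide 6Gb SAS -> 2400.0 MB/s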

Re:Serial-Attached SCSI (1)

Kjella (173770) | more than 3 years ago | (#33735712)

Yup. Compared to other ~250GB SSDs, the $739 price tag does not look so bad. Doing a quick price check in USD less VAT, I'd have to pay $600+ anyway. For that extra $100 you get the connector card and built-in RAID, which is roughly what it'd cost you to get a 4-port RAID controller and 4 regular SSDs to RAID. On the other hand, there's not much real reason to get this over a RAID setup either, but I'm guessing they're trying to push this connector out there. If they can get it rolling and start building "native" HSDL SSDs, then that would really change things.

Re:Serial-Attached SCSI (3, Informative)

Surt (22457) | more than 3 years ago | (#33734502)

Are you joking? The bandwidth has the same limitations this company (and all the other SSD makers) is trying to find a way to break free of.

Re:Serial-Attached SCSI (1)

Wesley Felter (138342) | more than 3 years ago | (#33734932)

A wide SAS 2.0 cable is 2.4 GB/s full duplex, which blows away anything in this article.

Re:Serial-Attached SCSI (1)

Surt (22457) | more than 3 years ago | (#33736078)

Can you provide a link for that? The doc I read said 2.4Gb/s for wide SAS full duplex.

Re:Serial-Attached SCSI (1)

Wesley Felter (138342) | more than 3 years ago | (#33739288)

Think about it; narrow SAS 2.0 is 6 Gb/s and wide is four lanes.

Another Betamax ? (2, Informative)

gtirloni (1531285) | more than 3 years ago | (#33734082)

Same physical connector with different electrical wiring. Now we can fry all those expensive SAS parts. Yay! I don't see this taking off. The storage industry is moving to SAS 6Gb/s now.

Re:Another Betamax ? (0)

Anonymous Coward | more than 3 years ago | (#33734232)

The conventional (i.e. platter) storage industry is moving to SAS 6Gb/s now. These guys need to bypass it because their solution is already saturating 6Gbps SAS with their first-gen product...

Re:Another Betamax ? (2, Interesting)

dave420 (699308) | more than 3 years ago | (#33738026)

Most people don't have SAS in their machines. And even if they did, 6Gb/s isn't enough for a lot of people.

Did anybody notice (0)

Anonymous Coward | more than 3 years ago | (#33734210)

that it's just relocating the SATA controller chip to the drive bay?

Wonder why this performs similarly to the RevoDrive, which is a SATA-based solution? As the article says, it's a RevoDrive on a stick.

This just in: OCZ continues to suck. (0)

Anonymous Coward | more than 3 years ago | (#33734524)

1. For NCQ to work with SATA, all that's needed is 1) AHCI and 2) an OS whose storage framework supports command queueing. Linux, FreeBSD, and Solaris (as well as OpenSolaris) support this, and I'm almost certain native AHCI drivers (including the ones from Microsoft, Intel, Silicon Image, etc.) provide it as well.

2. NCQ supports a queue depth of up to 31 commands... which just so happens to be the maximum queue depth of HSDL as well (oh sorry, maybe 32, we don't know for certain). Wow, imagine that. Anandtech's rant about the command queueing capability is silly. Did they even consider testing TCQ (used on SAS)? How much did they get paid for this?

3. I love how this OCZ-hyped HSDL crap uses a mini-SAS connector, but if you plug an HSDL device into a mini-SAS port, the port (and controller) will almost certainly smoke (I'd need an engineering PDF for this HSDL crap to verify, but it's fairly safe to assume it will). Way to penny-pinch and do absolutely nothing but confuse your customers. Maybe you should refocus your R&D on improving your existing products, which are buggy as hell (see the OCZ forums for proof), and stop catering to the gamer demographic (continue to see forums).

Storage- and technology-wise, this is the equivalent of putting (more?) blue LEDs on a DIMM. The only thing this tries to solve is the SATA300 and SATA600 bandwidth barrier, which isn't a barrier with SAS multipathing (not the same as SATA port multipliers).
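On Linux, the effective queue depth the AC is talking about is visible in sysfs; a minimal sketch (the device name is hypothetical, and the path assumes the libata/SCSI stack):

    from pathlib import Path

    dev = "sda"   # hypothetical device name; pick yours from /sys/block
    path = Path(f"/sys/block/{dev}/device/queue_depth")
    if path.exists():
        # 31 is the NCQ ceiling mentioned above; a value of 1 means the
        # controller/driver isn't doing native command queueing at all.
        print(f"{dev}: queue depth {path.read_text().strip()}")
    else:
        print(f"{dev}: no queue_depth attribute (non-SCSI/ATA device?)")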

Why bother? (0)

Anonymous Coward | more than 3 years ago | (#33734774)

I've heard that the almighty Apple and a little start-up company named "intel" (what a stupid name) are working on something called LightPeak that will have enough bandwidth for everything!

Where is the point? (1)

vojtech (565680) | more than 3 years ago | (#33735136)

Inside the IBIS there are two full SATA drive boards, with SandForce SATA controllers, connected to a standard PCIe SATA RAID controller on the base board.

The only difference from a SATA RAID controller and two regular SSDs is that the cable is in a different place.

Re:Where is the point? (1)

Xzisted (559004) | more than 3 years ago | (#33739836)

Don't forget that another reason to move away from SAS/SATA and towards PCIe is to break away from current restrictions in RAID controllers. This setup looks targeted at enterprise RAID. Enterprise RAID setups, including LSI Logic's MegaRAID (the H700/H800 from Dell), can't support things such as NCQ or SMART, which are really important features on many traditional hard drives, or TRIM for SSDs. Support for NCQ would be required to hit higher transfer speeds in an SSD RAID setup than what we can hit today with current controller technology. We also only get limited SMART data from RAID controllers, which pass through a fraction of it but do not support SMART directly (try using smartctl in Linux to query a logical drive handed out by a RAID controller). This technology seems to make that possible. I'll take the wait-and-see approach, but it looks promising.
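For what it's worth, smartctl can often reach physical disks behind an LSI MegaRAID via its megaraid device type; a hedged sketch (the /dev/sda path and the 0-3 disk range are hypothetical and controller-dependent, and this needs root plus a megaraid-aware smartctl build):

    import subprocess

    # Query SMART data for physical disk N behind an LSI MegaRAID
    # controller (e.g. the Dell H700/H800 mentioned above).
    for n in range(4):
        result = subprocess.run(
            ["smartctl", "-a", "-d", f"megaraid,{n}", "/dev/sda"],
            capture_output=True, text=True,
        )
        print(f"--- physical disk {n} ---")
        print(result.stdout or result.stderr)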

Imagine (1)

ThatsNotPudding (1045640) | more than 3 years ago | (#33736690)

how much Monster would charge for these cables

...I can't count that high.

holy jpeg compression artifacts, batman (1)

dirtyhippie (259852) | more than 3 years ago | (#33737896)

...I couldn't take looking at any more artifacted jpeg images after page 5, which it seems was only 1/10 of the way through... Sheesh...
