
"iSCSI killer" Native in Linux

Hemos posted about 8 years ago | from the making-the-world's-storage-better dept.

jar writes "First came Fibre Channel, then iSCSI. Now, for the increasingly popular idea of using a network to connect storage to servers, there's a third option called ATA over Ethernet (AoE). Upstart Linux developer and kernel contributor Coraid could use AoE to shake up networked storage with a significantly less expensive way to do storage -- under $1 per gigabyte. Linux Journal also has a full description of how AoE works." Note that the LJ article is from last year; the news story is more recent.
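AoE rides directly on raw Ethernet frames under EtherType 0x88a2, with no IP or TCP layer at all. A minimal sketch of the framing in Python (field sizes are my assumption from the published AoE spec; the spec itself is authoritative):

```python
import struct

AOE_ETHERTYPE = 0x88A2  # EtherType registered with the IEEE for AoE

def build_aoe_frame(dst_mac, src_mac, shelf, slot, command, tag, payload=b""):
    """Build a raw AoE frame: a 14-byte Ethernet header followed by the
    AoE header (version/flags, error, shelf, slot, command, tag).
    Field sizes here are taken from the published AoE spec as I read it;
    check the spec before depending on this layout."""
    eth = struct.pack("!6s6sH", dst_mac, src_mac, AOE_ETHERTYPE)
    ver_flags = 1 << 4                      # protocol version 1, no flags
    aoe = struct.pack("!BBHBBI", ver_flags, 0, shelf, slot, command, tag)
    return eth + aoe + payload

frame = build_aoe_frame(b"\xff" * 6, b"\x00" * 6,
                        shelf=0, slot=0, command=0, tag=0x1234)
ethertype, = struct.unpack("!H", frame[12:14])
assert ethertype == AOE_ETHERTYPE
```

Notice there is no IP or TCP layer anywhere, which is both why AoE is cheap to implement in silicon and why it cannot be routed beyond the local Ethernet segment.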


235 comments


FP (-1, Offtopic)

Anonymous Coward | about 8 years ago | (#15817120)

FP FTW!

AOE? (5, Funny)

laffer1 (701823) | about 8 years ago | (#15817123)

I didn't know Age of Empires could do network storage! WTG Microsoft!

Re:AOE? (-1, Redundant)

Mayhem178 (920970) | about 8 years ago | (#15817255)

No, no, no, you've got it all wrong. They mean AoE as in Area of Effect. I didn't know Fireballs could do network storage! WTG WotC!

Re:AOE? (0, Offtopic)

metasecure (946666) | about 8 years ago | (#15817308)

n00b a fireball is a single target attack !

they're talking about frost nova or blizzard etc !

Re:AOE? (0, Offtopic)

Mayhem178 (920970) | about 8 years ago | (#15817333)

.....and you're wrong on two counts. First off, I wasn't talking about Diablo. WotC = Wizards of the Coast = D&D. And secondly, Diablo's Fireball DOES do AoE damage, just not in a very big radius.

n00b!!!! :)

Re:AOE? (0, Offtopic)

NitroWolf (72977) | about 8 years ago | (#15817435)

Damn, the n00b count is high!

Fireball (and any AE spell) does not do AoE damage, they do AE damage. AoE is the resulting radius of an AE spell. A spell can't be AoE, it can only be AE. There's no such thing as an AoE spell, only AE spells.

Only n00bs say AoE spell. Veterans know it's AE.

The same goes for military weapons - You don't say "Our Area of Effect warheads in the MLRS were ineffective."

You say "Our Area Effect warheads in the MLRS were ineffective, as the Area of Effect was too small."

Duh.

Re:AOE? (-1, Offtopic)

Anonymous Coward | about 8 years ago | (#15817334)

> n00b a fireball is a single target attack !

ch00b thinks we're talking about video games...

Re:AOE? (2, Funny)

Gattman01 (957859) | about 8 years ago | (#15817415)

a fireball is a single target attack!


I'm sure that'll go over great with your party fighting enemies in a narrow hallway.
I'm sure the DM and your party members will be VERY forgiving when they have to create new characters.


I forget, the AoE of Fireball is either 5 feet or 5 meters. Either way, using it in a small room is not a good idea when you're in the room, unless you don't like your "friends."

Re:AOE? (0, Offtopic)

gEvil (beta) (945888) | about 8 years ago | (#15817324)

I didn't know Fireballs could do network storage!

Fireballs [storagereview.com] have been able to do network storage for at least a decade...

Re:AOE? (0, Offtopic)

Mayhem178 (920970) | about 8 years ago | (#15817348)

Heh...you know, I wasn't even thinking about that when I posted the comment. Touche!

Re:AOE? (1)

utopianfiat (774016) | about 8 years ago | (#15817408)

I'm sure there are Blizzards [blizzard.com] with their own network storage, but I don't know about Frost Nova...

Re:AOE? (0, Offtopic)

AviLazar (741826) | about 8 years ago | (#15817272)

No, not age of empires n00b, Area of Effect. This guy is clearly a mage or a warlock (maybe a priest or paladin). If a warlock, I wonder if he likes to use Rain of Fire or Hellfire.

Re:AOE? (0, Offtopic)

laffer1 (701823) | about 8 years ago | (#15817335)

Age of Empires predates World of Warcraft. I do play both and I have a mage. :)

Re:AOE? (0, Offtopic)

AviLazar (741826) | about 8 years ago | (#15817399)

Age of Empires is still one of my favorite games. Probably because I would go to a friend's HS classroom (after school hours) with a bunch of friends and we would play maxed-out games (I think 5 to a team, I forget the limits). It was a lot of fun as players would shout over to each other "No do this do this you n00b. Stupid morons" and someone else "Type it out!!! They can hear your plans when you shout!!!". It was a lot of fun with pizza, soda, and good friends.

Age of Umpires (1)

astralbat (828541) | about 8 years ago | (#15817845)

Got to love a good British parody... Age of Umpires [uncyclopedia.org] :-)

Re:AOE? (1)

nine-times (778537) | about 8 years ago | (#15817439)

n00b? Am I the only person for whom "AOE" means "Aces Over Europe"?

Re:AOE? (2, Funny)

Smelecat (7286) | about 8 years ago | (#15817547)

Use AoE with caution. In a crowded data center, AoE will agro nearby equipment.

Will it catch on? (4, Insightful)

andrewman327 (635952) | about 8 years ago | (#15817148)

From TFA:
Some significant caveats mean that not everyone is so keen on the technology. For a start, it's a specification from Coraid, not an industry standard. Its networking abilities are limited. And its detractors include storage heavyweights such as Hewlett-Packard and Network Appliance.


So will this ever develop into a real standard or will it remain the sole domain of one company? I do not know if I want to invest time and money into it if the latter is true. From a comp sci point of view this is a great approach to networked storage. It uses what people already have to make storage relatively cheap. I am going to wait to see where this technology goes. Maybe it will blossom and become a serious contender.

Re:Will it catch on? (1, Informative)

Anonymous Coward | about 8 years ago | (#15817203)

iSCSI is routable and secure if you use an encrypted tunnel (IPsec is native in most implementations), whereas AoE is local-network only and non-routable.

Re:Will it catch on? (1)

andykuan (522434) | about 8 years ago | (#15817536)

The lack of routing doesn't really bother me so much though. Do I really want to send raw drive data through my router? I figure I can use this to build a low-cost NFS cluster -- but instead of having to invest in a dedicated SAN or a differential SCSI bus, I can just share drives over my existing Ethernet switch.

Re:Will it catch on? (3, Informative)

SpecTheIntro (951219) | about 8 years ago | (#15817222)

For a start, it's a specification from Coraid, not an industry standard.

I don't know that this is true, because the LinuxJournal article directly contradicts it. (Unless I'm misreading it.) Here's what the LJ says:

ATA over Ethernet is a network protocol registered with the IEEE as Ethernet protocol 0x88a2.

So, it looks like the protocol has been officially registered and was granted approval by the IEEE--so that makes it an industry standard. It may not be adopted yet, but it's certainly not something like 802.11 pre-n or anything; there's an official and approved protocol.

Re:Will it catch on? (4, Informative)

hpa (7948) | about 8 years ago | (#15817274)

So, it looks like the protocol has been officially registered and was granted approval by the IEEE--so that makes it an industry standard. It may not be adopted yet, but it's certainly not something like 802.11 pre-n or anything; there's an official and approved protocol.

Anyone can register a protocol number with IEEE by paying a $1000 fee. It doesn't mean it's a protocol endorsed by IEEE in any shape, way or form.

Re:Will it catch on? (1)

SpecTheIntro (951219) | about 8 years ago | (#15817714)

So then what qualifies as an "industry standard?" Is that just a euphemism for: "the big players have decided to implement this technology?"

Re:Will it catch on? (1)

Jeff DeMaagd (2015) | about 8 years ago | (#15817288)

If it develops into a standard, it would appear that maybe it will have a niche. It sounds like a nice idea that may be worth a shot for some uses. I can't help but wonder if the higher cost of iSCSI and FiberChannel is there for a necessary reason. The nice thing though is that even desktop systems are being made available with multiple network adapters, so one can be dedicated to this sort of storage.

Re:Will it catch on? (1)

dfghjk (711126) | about 8 years ago | (#15817554)

considering that it's non-routable, in what way is this "a great approach to networked storage"? Ethernet simply takes the place of a SCSI cable here and the device protocols differ. It lacks some of the availability characteristics of SATA/SAS and fails to solve any sharability issues because it's still a block interface. "From a comp sci point of view" it's a total dud.

The problem with ethernet is that it's hard to make go fast. We have 1G now but 10G is difficult because of all the processing involved and the offload engines that come with that. If I could read the article perhaps I might understand if this is better in that respect, but the future of networked storage does not lie in block device protocols. To make this interesting we need 10G links and file semantics.

Re:Will it catch on? (1)

magetoo (875982) | about 8 years ago | (#15817721)

The problem with ethernet is that it's hard to make go fast. We have 1G now but 10G is difficult because of all the processing involved and the offload engines that come with that.
The offload engines do TCP, don't they? And since this thing goes directly to raw Ethernet ... or would it still be a problem?

Re:Will it catch on? (1)

dfghjk (711126) | about 8 years ago | (#15817809)

I was just saying that I hadn't read the article since it's unavailable. I would assume that the problems with performance under 10GigE don't apply because TCP isn't used. I would also assume that the offload engines can't be taken advantage of with this protocol.

I could see benefit in using this over iSCSI for 10G NICs without TOEs, but I wonder if there will be any of those. The real advantage of running anything over ethernet is to get the benefit of huge volumes, so I would suspect all 10G NICs will have TOEs.

Another "Killer" (1)

slimjim8094 (941042) | about 8 years ago | (#15817194)

Why does it seem like everything open-source is a proprietary "killer"?

Having said that, this looks pretty neat. It will probably be more widely accepted, being open source (it is, right?), so it can be ported easily. Features will grow quickly, and other OSS advantages and so forth.

However, why is this better than NFS or Samba/Windows shares? Is it faster? It seems like AoE is offloading more of the low-level stuff to the client. Is this a good thing? It doesn't seem like one...

Re:Another "Killer" (4, Informative)

wasabii (693236) | about 8 years ago | (#15817319)

AoE is a networked block device technology. NFS and Samba are network file systems. One is about making block-level access to a device available over the network; the other is about making file operations available.

In the case of AoE, a single remote block device can be shared between multiple systems. Each client could issue its own writes and reads. In combination with a distributed file system, each node could mount the same FS.

It's the same as NBD, iSCSI, Shared SCSI, and Fiber Channel.
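The distinction is easy to sketch in Python, with an ordinary file standing in for the exported device (a real AoE initiator would expose a device node instead):

```python
import os
import tempfile

SECTOR = 512

def write_block(dev_path, lba, data):
    """Block-level access: just an offset and raw bytes. This is the
    layer AoE, iSCSI, NBD and Fibre Channel operate at; files and
    directories only exist once a filesystem interprets the blocks."""
    assert len(data) == SECTOR
    fd = os.open(dev_path, os.O_WRONLY)
    try:
        os.lseek(fd, lba * SECTOR, os.SEEK_SET)
        os.write(fd, data)
    finally:
        os.close(fd)

# A scratch image file plays the role of the exported block device.
with tempfile.NamedTemporaryFile(delete=False) as img:
    img.write(b"\x00" * 8 * SECTOR)
    path = img.name

write_block(path, 2, b"A" * SECTOR)
with open(path, "rb") as f:
    f.seek(2 * SECTOR)
    readback = f.read(SECTOR)
os.unlink(path)
```

Nothing in that interface knows about files, which is exactly why two clients writing to the same exported device need a cluster-aware filesystem on top.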

Re:Another "Killer" (1)

die444die (766464) | about 8 years ago | (#15817330)

Having said that, this looks pretty neat. It will probably be more widely accepted, being open source (it is, right?), so it can be ported easily.
Yeah, just like OGG took off because it was open source. Hah.

Re:Another "Killer" (2, Insightful)

slimjim8094 (941042) | about 8 years ago | (#15817363)

So was MP3 (at least implementations) and it was around longer and more widely supported by programs/devices.

Cheaper? (4, Interesting)

DSW-128 (959567) | about 8 years ago | (#15817207)

I guess I don't really see how it's cheaper than iSCSI. Sure, there's less overhead from the lack of TCP/IP, so you may not need as massive a network to drive it equally. But I've been under the understanding that iSCSI doesn't require SCSI drives, so you could build an iSCSI target out of the same machine/drives as an AoE host, correct? For some applications, I think the lack of TCP/IP might be a benefit - less opportunity to hack. (Then again, I'd expect anybody deploying something like this or iSCSI would drop the few extra $$$ to build a parallel network that transports just storage.)

Re:Cheaper? (2, Informative)

hpa (7948) | about 8 years ago | (#15817320)

The main advantage of AoE is that it's simple enough that you could build it in hardwired silicon if you wanted to, or use microcontrollers way smaller than what you'd need to run a full-blown TCP stack (this is what Coraid does, I believe.)


The main disadvantage with AoE is that it's hideously sensitive to network latency, due to the limited payload size.

Re:Cheaper? (2, Informative)

NSIM (953498) | about 8 years ago | (#15817443)

You are quite correct, there is no requirement for SCSI drives in an iSCSI implementation. iSCSI refers to the protocol, not the drive interface, i.e. it's the SCSI command protocol implemented over TCP/IP. So yes, you can build an iSCSI system out of commodity parts and many people are doing so. If you want to get an idea of the options out there for doing this, take a look at: http://www.byteandswitch.com/document.asp?doc_id=96342&WT.svl=spipemag2_1 [byteandswitch.com]

Re:Cheaper? (1)

tylernt (581794) | about 8 years ago | (#15817716)

But I've been under the understanding that iSCSI doesn't require SCSI drives
Correct, I built a Windows 2003 Cluster (just for testing, not a production system!) using a Linux iSCSI target on an IDE drive and the stock iSCSI initiators on the 2003 boxes. Performance wasn't great but it worked fine.

With copper Gigabit Ethernet and jumbo frames (standard Ethernet payload is 1500 bytes, but disk blocks are usually 4K, so you need jumbo frames to eliminate fragmentation), I'd think you would save a lot of money over Fibre Channel and only take a small hit on performance.
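The fragmentation arithmetic is easy to check (the ~36-byte per-frame overhead figure below is an assumption covering Ethernet plus storage-protocol headers; the exact number varies by protocol):

```python
import math

def frames_per_block(block=4096, mtu=1500, overhead=36):
    """Frames needed to carry one disk block. The 36-byte per-frame
    overhead is an assumed figure for Ethernet plus storage-protocol
    headers, used here only to make the fragmentation point."""
    payload = mtu - overhead
    return math.ceil(block / payload)

assert frames_per_block(mtu=1500) == 3   # standard frames: a 4K block fragments
assert frames_per_block(mtu=9000) == 1   # jumbo frames: one frame per block
```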

Re:Cheaper? (1)

rf0 (159958) | about 8 years ago | (#15817813)

I've got an iSCSI setup using http://www.open-e.com/ [open-e.com] (basically a custom Debian distro on a compact flash) hooked up to a 3Ware 9550SX with Western Digital RAID disks (all SATA). So the short answer is you can just use normal disks. If you really want it on the cheap you can do a single system with a single disk, although why you would want to I don't know.

More for business? (1)

gasmonso (929871) | about 8 years ago | (#15817214)

Not sure if I follow this. Hard drives are well under $1/GB. If you buy several 400 GB drives and just connect them in an old PC that's on the network, aren't you accomplishing the same thing? I have a terraserver at home and it cost http://religiousfreaks.com/ [religiousfreaks.com]

Re:More for business? (1)

HaloZero (610207) | about 8 years ago | (#15817291)

If you had to commit ritual sacrifice of several religious zealots in order to pay for your Terraserver, then you may have spent too much on it.

Re:More for business? (1)

bhima (46039) | about 8 years ago | (#15817522)

I don't know... in some parts around they're really common... so they must be cheap.

Re:More for business? (2, Informative)

jimicus (737525) | about 8 years ago | (#15817366)

Maybe cheapie little IDE hard disks are under $1/GB. If you want hot-swap, availability of half-decent RAID cards and disks which actually get to see some testing before they leave the factory, then you'll have to spend quite a bit more.

Re:More for business? (0)

Anonymous Coward | about 8 years ago | (#15817590)

availability of half-decent RAID cards
That isn't a serious feature. Nobody with half a brain still uses RAID cards. Software RAID is the way to go. You telling me that some little 8-bit microcontroller can XOR bytes faster than an already-underused Opteron CPU? Or for that matter, faster than the cheapest Celeron ever made?
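For what it's worth, the XOR in question really is trivial; a sketch of RAID-5 style parity and recovery:

```python
def xor_parity(stripes):
    """RAID-5 style parity: byte-wise XOR across the data stripes.
    This is the entire computation the parent is talking about --
    cheap enough that a general-purpose CPU barely notices it."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)

d0, d1 = b"\x0f\xf0\xaa", b"\x01\x10\x55"
p = xor_parity([d0, d1])
# Losing d1 is recoverable: XOR of the survivors reconstructs it.
recovered = xor_parity([d0, p])
assert recovered == d1
```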

Re:More for business? (2, Interesting)

riley (36484) | about 8 years ago | (#15817428)

Storage Area Network solutions are not under $1/GB. Running a network filesystem (NFS, SMB, Coda, etc.) and running a local filesystem over networked storage are two different things, fulfilling two different needs.

iSCSI and AoE don't necessarily directly benefit the small/home server market, but for the things that SANs are traditionally used for (data replication across geographically separated sites without any changes to the application software) there could end up being a big win in cost.

Re:More for business? (2, Interesting)

rf0 (159958) | about 8 years ago | (#15817839)

iSCSI is slightly different, as rather than presenting a file system, it presents a hardware device. So if you show it a 1TB device over the network (e.g. /dev/sdb) then the client machine can partition that disk up as if it was local. That's the advantage over just a shared network filesystem.

Reliability (1, Interesting)

Neil Watson (60859) | about 8 years ago | (#15817215)

People often forget there is a considerable difference in the reliability of ATA drives versus SCSI. If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.

Re:Reliability (2, Insightful)

dfghjk (711126) | about 8 years ago | (#15817296)

how is that relevant to the discussion of protocols?

reliability of SCSI versus ATA is largely imagined and the rest is intentional. drive manufacturers want you to believe their enterprise drives are more reliable and right now those drives are largely SCSI.

Re:Reliability (3, Informative)

SpecTheIntro (951219) | about 8 years ago | (#15817316)

People often forget there is a considerable difference in the reliability of ATA drives versus SCSI. If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.

This is not necessarily true. [storagereview.com] It all depends on how your network storage is being used. SCSI drives are built and firmware'd for the sole purpose of running a server, and they consistently beat any ATA drive (be it IDE or Serial) when it comes to server performance and reliability. ATA drives just aren't built to handle the sort of usage a server requires--note that this isn't a reflection of quality, but of purpose. But a file server (which is the only thing the SAN would be used for) requires much less robust firmware than a server housing MySQL, PHP, maybe a CRM suite, an e-mail server, etc.--and so ATA drives shouldn't immediately be dismissed as less reliable. The maturity of the technology plays a more important role than the interface.

Re:Reliability (2, Informative)

Ahtha (798891) | about 8 years ago | (#15817509)

I agree there are reliability problems with ATA. We expect ATA disk failures within the first year for all of our ATA RAID systems and have yet to be disappointed. ATA drives just don't seem to be able to handle the pounding they get in a RAID configuration. We still use them, however, mirroring the ATA RAID with another server/disk installation as a backup. Of course, that doubles the cost of the ATA solution, but, it's still cheaper than a SCSI solution.

Re:Reliability (0)

Anonymous Coward | about 8 years ago | (#15817527)

And what is the relative system reliability of $1000 worth of SCSI drives vs $1000 worth of ATA drives when connected into a redundant system? (If you point out that an individual failure is more likely with multiple parts, then you totally fail it, because the question regarded overall system reliability, not the irrelevant likelihood of a non-catastrophic subsystem failure)

Re:Reliability (1)

dfghjk (711126) | about 8 years ago | (#15817609)

I think you mean availability. Redundancy doesn't improve reliability, in fact it lessens it. What redundancy does is offer availability, the ability for a system to remain available after a failure.

The answer to your question is that, assuming there are more ATA drives of assumed lower reliability, the ATA system will be less reliable. You, sir, totally fail it.

Re:Reliability (0)

Anonymous Coward | about 8 years ago | (#15817761)

No, you fail it. Embarrassingly, because I specifically called out the error you made even before you made it.

If a subsystem component fails in such a way that the overall system continues to function as designed, no system failure has occurred.

The expected number of annual system failure events will be lower with the redundant system. It is more reliable.

The expected number of annual non-critical subsystem failures will be higher with the redundant system. It is also completely irrelevant, beyond its impact on calculating TCO, after the system has been designed.

Re:Reliability (0, Troll)

Slashcrap (869349) | about 8 years ago | (#15817602)

People often forget there is a considerable difference in the reliability of ATA drives versus SCSI. If you are going to use some sort of ATA based SAN be prepared for disk failures much sooner than if they were SCSI.

I would just like to say something to the people who modded this "Interesting". You are killing Slashdot with your stupidity.

Now crawl back under your rocks, you fucking imbeciles. That is all.

PS. To the original poster - if you don't understand something, don't comment on it. Is that really too much to fucking ask?

PPS. You still don't fucking get it do you? Well, this will blow your mind - you can run ATAoE with SCSI discs. And vice versa.

Re:Reliability (1)

Ryan Amos (16972) | about 8 years ago | (#15817644)

It has less to do with the interface and everything to do with the drive mechanisms. SCSI drives are more expensive not because they use SCSI, but because the customers who use SCSI would rather pay a little more and have a drive that is more reliable.

Even the enterprise and datacenter are starting to use SATA for the vastly superior price per GB over high speed SCSI or FC drives in tiered storage systems. Store the bulk of your data on cheap SATA drives in a RAID5, then when you use it, move it to a 15k RPM SCSI drive.

Drive failure is really not much of an issue; you'll likely be using RAID5 or 5/0 with a hot spare with any SATA configuration and the odds of 3 drives failing at once are astronomical. Not to mention that a 500 GB SATA drive costs less than an 80GB 15k RPM FC drive, so if it fails, just buy another.
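With toy numbers (a 5% annual failure rate per drive and a 24-hour rebuild window, both assumptions made up for illustration), the "astronomical" claim holds up:

```python
# Toy estimate of triple-failure odds in a RAID5 + hot spare array.
# The 5% annual failure rate and 24-hour rebuild window are assumed
# figures for illustration, not measured data.
HOURS_PER_YEAR = 24 * 365
afr = 0.05                          # assumed annual failure rate per drive
p_hour = afr / HOURS_PER_YEAR       # crude per-drive-hour failure probability
rebuild_hours = 24

# Rough chance that two more specific drives also die inside one
# rebuild window after a first failure in a given hour.
p_triple = p_hour * (p_hour * rebuild_hours) ** 2
assert p_triple < 1e-12             # astronomical indeed
```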

PoE, AoE, ... , EoE! (1, Funny)

adavies42 (746183) | about 8 years ago | (#15817224)

Everything over Ethernet!

Re:PoE, AoE, ... , EoE! (0)

Volante3192 (953645) | about 8 years ago | (#15817244)

I was thinking Ethernet over Ethernet. TCP/IP packets encapsulated within themselves...

Re:PoE, AoE, ... , EoE! (0)

Anonymous Coward | about 8 years ago | (#15817303)

so will it be more like virtual ethernet or rather ethernet virtualization?

Re:PoE, AoE, ... , EoE! (1)

pyite (140350) | about 8 years ago | (#15817365)

I was thinking Ethernet over Ethernet. TCP/IP packets encapsulated within themselves...

Pretty impressive... considering Ethernet has no knowledge nor concept of TCP/IP.

Re:PoE, AoE, ... , EoE! (0)

Anonymous Coward | about 8 years ago | (#15817376)

There's always PPPoEoE!

Re:PoE, AoE, ... , EoE! (0)

Anonymous Coward | about 8 years ago | (#15817450)

Ethernet over Ethernet? You're thinking of VLANs.. been there, done that.

EoE! (0)

Anonymous Coward | about 8 years ago | (#15817493)

I prefer the hardwired Ethernet on Ethernet action myself.
The inter-specification is my favorite, I like with a male Cat 6 cable in a female Cat 5 plug....

Re:PoE, AoE, ... , EoE! (1)

tduff (904905) | about 8 years ago | (#15817633)

Sounds like Plan 9 [bell-labs.com] .

How does it lower costs? (1)

Unknown Relic (544714) | about 8 years ago | (#15817238)

TFA isn't responding, so maybe I'm missing something but how does this new protocol actually result in cheaper costs per GB? It's already possible to get an iSCSI SAN which uses SATA drives, and one of the major cost differences is the type of drive. What else is new here?

Re:How does it lower costs? (2, Informative)

Unknown Relic (544714) | about 8 years ago | (#15817309)

Oops, only the linux journal article is down, the cnet article has answered my question: it isn't any cheaper than iSCSI + SATA solutions. $4,000 without any drives, compared to a starting price of $5,000 for a StoreVault (new from NetApp) with 1TB of storage. Other options such as Adaptec's Snap Server start just as cheap.

Re:How does it lower costs? (0)

Anonymous Coward | about 8 years ago | (#15817507)

You can buy a Norco 1220 storage chassis for $800, and build/buy a cheap server for $500. That is $1300 for your enclosure in a total of 4U, supporting 12 disks. If you build a half-depth server, you might be able to squeeze them into a combined 3U. Install Linux on the chassis and run "vblade" from the AoE tools package, or alternatively run NBD instead.

Will it be the most "enterprise" system? No, but it *will* be relatively cheap. I just built mine.. $500 for disks, $800 for the chassis, and used an existing server. I now have 1.28 tb of usable storage and 7 free slots for additional disks for $1300 out of pocket -- although it would've been $1800 if I had needed to buy a new server.
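The parent's arithmetic lands almost exactly on the story's price point:

```python
# Checking the parent's numbers against the "under $1 per gigabyte" claim.
cost_usd = 800 + 500            # chassis + disks; existing server reused
usable_gb = 1280                # 1.28 TB usable, per the parent
cost_per_gb = cost_usd / usable_gb
# Just a hair over the $1/GB mark, before counting the reused server.
assert 1.0 < cost_per_gb < 1.1
```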

Re:How does it lower costs? (1)

wasabii (693236) | about 8 years ago | (#15817340)

Just slightly less overhead than iSCSI. It's the same shit though.

Re:How does it lower costs? (1)

swillden (191260) | about 8 years ago | (#15817557)

TFA isn't responding, so maybe I'm missing something but how does this new protocol actually result in cheaper costs per GB?

The idea is that iSCSI uses TCP, which requires a lot of additional processing, which bogs down both the machine using the storage and the machine that contains the storage. The solution usually recommended is to buy expensive network cards that offload the TCP overhead from the main CPU. With AoE, you don't have the TCP overhead and therefore don't need the more expensive TCP-offloading network cards, resulting in a lower overall cost.

Given the cost of main CPU cycles, though, it seems to me that most systems will have the cycles to spare on TCP overhead. On the other hand, unless you really need your remote storage protocol to be routable, it's not clear that iSCSI has _any_ advantages over AoE, at least from a purely technical perspective.
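The per-frame header arithmetic alone makes the point, even before counting TCP's connection state, acknowledgments and retransmission machinery (the 48-byte iSCSI figure is the basic header segment; the 10-byte AoE header is my assumption from the spec):

```python
# Per-frame header cost, iSCSI vs AoE, using minimum header sizes.
# The 48-byte iSCSI basic header segment is per the iSCSI spec; the
# 10-byte AoE header size is an assumption taken from the AoE spec.
ETH, IP, TCP, ISCSI_BHS = 14, 20, 20, 48
AOE_HDR = 10

iscsi_overhead = ETH + IP + TCP + ISCSI_BHS   # 102 bytes per frame
aoe_overhead = ETH + AOE_HDR                  # 24 bytes per frame
assert iscsi_overhead > 4 * aoe_overhead
```

The bigger cost in practice is TCP's stateful processing rather than the header bytes themselves, which is what the offload-engine debate is really about.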

Re:How does it lower costs? (1)

dfghjk (711126) | about 8 years ago | (#15817637)

"Given the cost of main CPU cycles, though, it seems to me that most systems will have the cycles to spare on TCP overhead."

not at 10G they won't.

the question isn't whether iSCSI or AoE is better but rather why you would use either of them.

Will there be any 10G NIC's that don't offer offload engines? If not, what is the advantage architecturally of AoE?

What about direct DMA and zero-buffer?

Re:How does it lower costs? (1)

swillden (191260) | about 8 years ago | (#15817825)

the question isn't whether iSCSI or AoE is better but rather why you would use either of them.

I think the costs and benefits are pretty clear. I'd actually love to see some drives with AoE/iSCSI built into them for home use. Need more storage for your video collection? Just buy an AoE/iSCSI drive and plug it into your switch. Although it would be done differently, unlimited expandability is the main advantage for enterprise environments as well. I don't know how this notion compares with SAN solutions, really, except that I know people complain SANs are very expensive and hard to manage.

Will there be any 10G NIC's that don't offer offload engines?

Yes. As prices on the 10G NICs come down, the norm will be to do TCP header parsing and checksumming on the main CPU -- since even desktop systems are all going to have at least two cores in the near future, there will be plenty of cycles. Some of the first 100BaseT NICs had offload engines, and lots of the first Gig-E NICs did, too, but it always works out that offloading only makes sense for specialized applications.

What about direct DMA and zero-buffer?

What about it? Most (all?) current NICs support DMA, and there's no reason you can't implement zero-copy from the network card just as easily as from a drive controller. Perhaps I'm missing your point.

Yes! (3, Interesting)

mihalis (28146) | about 8 years ago | (#15817239)

I like the look of this technology. The great thing it has going for it is that most of the non-hard-disk infrastructure (switches and cabling) leverages the tremendous investment in ethernet. That is great.

The thing that needs work, in my view, is that the bit that links the disks and the rest isn't cheap enough. In fact what would be awesome here is if, say, Seagate provided disks with native ATAoE connectors built-in. They might have to buy Coraid for that to happen.

In case anyone thinks I'm out of my mind here, don't forget that disks can already be had with ATA interface, SCSI interface, FCAL interface, SATA, SAS - that's five and there are probably more. Yes they might be a bit more expensive, but if they come in under the combined price of "regular ATA disk" + Coraid ATAoE disk adapter then you'd come out ahead. Someone like Seagate would, I think, have the industry-wide clout and respect to succeed in making this an open standard. Something that will be a challenge for Coraid for a long time (I have nothing against them, btw, they are friendly and their mailing list didn't spam me when I signed up).

When I was on the OpenSolaris pilot project I tried to get people interested in using this with Solaris. I think it might be great for ZFS, for example. At that point the real storage wizards were more interested in iSCSI, but I respectfully disagree, OpenSolaris + ZFS + cheap storage = awesome file server. Emphasis on the cheap. As Sun people will admit, their previous attempts at RAID were more like RAVED (Redundant Array of Very Expensive Disk). Coraid does have a Solaris driver, so this is definitely feasible.

Re:Yes! (1)

MoxFulder (159829) | about 8 years ago | (#15817529)

How much actual logic is needed to allow a hard drive to communicate in ATAoE? I haven't read the spec, but from the article it seems like not very much... basically the normal ATA packet needs some kind of ATAoE header prepended, and then it gets pumped directly into an Ethernet MAC.

These days, an embedded Ethernet controller adds, say, $10 to the total cost of a device. And hard disks already have onboard intelligent controllers, so getting them to speak the ATAoE protocol shouldn't be much more than a firmware update.

So, I agree with you. It seems totally feasible to manufacture drives which speak ATAoE natively, with a little RJ-45 jack in back. Stack 'em up, patch them into a switch, and you'd be good to go...

Re:Yes! (1)

magetoo (875982) | about 8 years ago | (#15817861)

Hmm. I don't know. Plugging one drive directly into the network switch is going to be simple and neat, but when you're plugging in several, you might start to wonder if every one of them really needs to have a separate cable to a switch that is essentially just a "disk multiplexer". Let the switch speak SATA instead, and the drives be cheap.

And you need power to those drives too. So even with only one or two drives, you still need additional hardware. It seems to me like it's better to just have a specialized disk switch, even before going into wanting to have the disks on their own separate network.

iSCSI killer? (4, Interesting)

apharov (598871) | about 8 years ago | (#15817286)

In the context of using this in low-cost environments with Linux I can hardly see how this could kill iSCSI. Last week I implemented an iSCSI setup for about 500 euros (target serves out 500GB disk space for non-critical backup) using standard components, FC5, iSCSI Enterprise Target [sourceforge.net] and Microsoft iSCSI Initiator.

Works great and is a lot (>10x) faster than the about similarly priced NAS device that was used for the same task before.

Re:iSCSI killer? (1)

sloth jr (88200) | about 8 years ago | (#15817548)

I agree. We're really just talking about the transport layer, since the targets can be whatever kind of device the host supports and that the target unit makes available. So, AoE seems a little - redundant, I'd guess. The SCSI standard is well-defined, been around forever, so I'm not sure why a re-implementation using an ATA command set would make much sense.

sloth jr

iSCSI can talk to ATA drives (2, Insightful)

Anonymous Coward | about 8 years ago | (#15817298)

iSCSI is a protocol. ATA disks are a physical medium. They work together, and you get the benefits of SCSI commands with the price of ATA disks. Just because iSCSI is the protocol does NOT mean that you need to use SCSI disks. You might even be talking to a RAID of ATA disks and not know it.

So, why would you need AoE? It's already cheap, and been for sale for some time.

Re:iSCSI can talk to ATA drives (1)

dfghjk (711126) | about 8 years ago | (#15817669)

to go fast?

what are the benefits of SCSI commands for storage? Oh yeah, command queuing. ATA has that.

Bootable? (2, Interesting)

Anonymous Coward | about 8 years ago | (#15817326)

Is it possible to boot WindowsXP via AoE or iSCSI? I want a diskless WindowsXP box.

Re:Bootable? (1)

EndlessNameless (673105) | about 8 years ago | (#15817616)

The BIOS would have to support booting from AoE. OS support from the vendor is irrelevant, as the AoE driver makes the disk accessible in the same manner as an internal ATA disk.

Since most x86 BIOSes only support PXE as their network boot protocol, I doubt it will work out of the box. Something would have to provide block-level access to the HD in order for the OS to bootstrap, and PXE doesn't do that.

Coraid (or someone else) would have to make a bootable floppy, CD, or flash drive image that could add this functionality.

Alternatively, a CF card with an IDE adapter could work as an internal solution. With a sufficiently large CF card, you could even install the OS onto the card and use the AoE drive for user profiles, applications, and storage (don't know about performance when swapping though... even with GigE, you might just want to do this on a system that has 2+ GB RAM). This is still technically diskless, and it is feasible with existing hardware and software. If you really want a diskless system now, this is my recommendation.


Re:Bootable? (0)

Anonymous Coward | about 8 years ago | (#15817704)

Yep, you'd buy an iSCSI HBA which has the capability of hooking INT 13h, the BIOS disk service. The BIOS of the actual PC/server is then irrelevant.

Keep in mind that the iSCSI target should also support boot (I think).

Re:Bootable? (1)

Petersson (636253) | about 8 years ago | (#15817674)

Well, boot Linux from a flash disk and start VMware.

Re:Bootable? (2, Informative)

KingMotley (944240) | about 8 years ago | (#15817786)

Yes, you can. Just look for an iSCSI PCIe card. It's basically an Ethernet card that presents itself to the system as both a network interface and a disk controller (most present SCSI controllers, although there is no reason they couldn't present an ATA controller instead, but you'd lose a lot of features).

Not an iSCSI killer, here are the reasons why not (4, Insightful)

cblack (4342) | about 8 years ago | (#15817346)

1) Complexity for RAID and volume management is not centralized and is pushed to individual hosts. One of the main benefits of SAN technology is that you can just carve out storage from a single interface and assign it to a server and the server simply sees it as a block device. With AoE each drive is addressed separately by the server, which means it is up to the server (and server admin) to figure out how to handle distributing over multiple drives, handle drive failures, and expanding volumes. This is huge.
2) It is not a standard and is only really supported by one vendor. This may change in the future but it is significant right now. It is registered with the IEEE but that hardly makes it a peer-reviewed standard with input/improvements from many experts.
3) No boot from SAN. Until someone makes some sort of mini bootstrap system on a CD, or a hardware card implementation of AoE that can be addressed as a block device, admins will be unable to host the root filesystem and/or C: drive on an AoE SAN.
4) No multipath (that I can see). Perhaps I misunderstand this, but it seems like there is no way to do multipath IO with this system. That is, all the drives are single-connected to a network. If that network switch goes down, all drives on that network are inaccessible.
So AoE looks like a neat technology for pushing drives out of the box and potentially sharing them among hosts, but there is no intelligence there. It is just dumb block addressable storage with no added availability or management, and therefore is far from being an iSCSI or FC killer.

Re:Not an iSCSI killer, here are the reasons why n (3, Interesting)

Cyberax (705495) | about 8 years ago | (#15817453)

You can use Ethernet-based multipath IO; a lot of switches can be stacked to provide redundancy (and load-balancing).

AoE is a COOL thing exactly because it's a 'dumb' technology. You can buy a switch, a bunch of disk drives and AoE adapters, a small Linux PC - and your storage system is ready. There is a lot of existing RAID manipulation and monitoring tools for Linux, so RAID configuration is not a problem.

You also can boot from SAN, it's not a problem. Just add required modules and configs to initrd and place it on a USB drive.
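A minimal sketch of that boot approach, assuming a Red Hat-style mkinitrd and the standard aoe module; paths, flags, and device numbers are illustrative and will vary by distro:

```shell
# Build an initrd that includes the AoE initiator module, then put the
# kernel and initrd on a USB stick and point root= at the AoE device.
mkinitrd --with=aoe /boot/initrd-aoe.img "$(uname -r)"
# In the bootloader config on the USB stick, something like:
#   kernel /vmlinuz root=/dev/etherd/e0.0   (shelf 0, slot 0)
#   initrd /initrd-aoe.img
```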

Re:Not an iSCSI killer, here are the reasons why n (1)

pe1chl (90186) | about 8 years ago | (#15817612)

It may be cool, but it is WAY too expensive. 4000 dollars for a 15-disk box without disks, come on!

I am looking for an affordable storage box for my home network, but for this kind of money I expect SMB/NFS functionality, not a dumb ATA interface over ethernet.

Re:Not an iSCSI killer, here are the reasons why n (1)

Gaima (174551) | about 8 years ago | (#15817687)

While I'm not especially interested in network storage, and I know very little about SANs and AoE, I still thought I'd give my input.

1) The "server", or drive array, handles the RAID, and all space carving (LVM, EVMS). AoE tools then export block devices.

2) Yup, no argument there.

3) VMs can boot from AoE, unless you use RedHat in which case it's not stable.

4) Multipath Ethernet (or bonding) can be done trivially at the kernel level on all connected devices, either to double throughput or just to increase reliability.
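For point 4, a rough sketch of Linux bonding for a redundant path; interface names and the address are assumptions, and balance-rr instead of active-backup would trade failover for throughput:

```shell
# active-backup: one NIC carries traffic, the other takes over on failure
modprobe bonding mode=active-backup miimon=100
ip addr add 192.168.10.2/24 dev bond0
ip link set bond0 up
ifenslave bond0 eth0 eth1   # enslave both physical NICs to bond0
```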

This is perhaps a poor comparison, but AoE is somewhat akin to Linux as a SAN is to MS Windows.
AoE is a very important part, but a kernel is just a kernel.
A SAN is a whole system, complete with GUIs and reports.

Will it kill iSCSI, who knows, I don't, but I don't care.

Re:Not an iSCSI killer, here are the reasons why n (1)

dfghjk (711126) | about 8 years ago | (#15817713)

While I agree with you, the issue of multipath is moot. Any device with only one port has that port as a single point of failure. You solve that problem with two ports and redundant switches. It is no different for SANs, iSCSI or FC.

A SAN's management capability is also its downfall: expensive, complicated and vendor-specific.

Re:Not an iSCSI killer, here are the reasons why n (1)

cblack (4342) | about 8 years ago | (#15817818)

Agreed on your multipath statements, except that in this case, due to the topology, losing a switch means losing all drives connected via that switch. In most SANs (sometimes even iSCSI) you have multiple routes from the raid card, through different networks, to the servers.

As far as SAN's management capability being its downfall, I disagree. I find it isn't terribly complicated (much simpler/faster than the Linux LVM tools I've used) and it can even be done in a standard way by creating your own iSCSI target and using those very same LVM tools. Someone even commented on this story that they had done this. The point is that all the raid and volume administration is done in one place, and individual servers do not have to worry about that complexity.

Re:Not an iSCSI killer, here are the reasons why n (1)

cblack (4342) | about 8 years ago | (#15817752)

I read some of the responses. First off, I still don't see how you can have multipath if the actual trays only have one port (which looks to be the case). Their EtherDrive units where a RAID card and drives sit in one unit and then the already-raided volumes are exported over AoE is a bit different, but at that point you could just make your own iSCSI target and be more compatible with existing mature iSCSI implementations (such as the software initiator in windows).
All this talk of how raid /can/ be done on a linux server is missing my main point, which is: raid and volume management should not /have/ to be handled on the server in a SAN environment, it is one of the main selling points of SANs. A single place to manage your storage configuration, all servers just see a chunk of storage and don't have to worry about the details.
I am not saying AoE is not useful in some settings or a neat technology, my main point is that it is not an iSCSI killer because it does not fill all the functions iSCSI does.

Re:Not an iSCSI killer, here are the reasons why n (1)

Odinson (4523) | about 8 years ago | (#15817891)

It's good that you mention multipath support. Have you seen any docs out there on how to do a simple multipath setup with both Linux targets and initiators? I tested iSCSI about 8 months ago and found zero docs on it.

Thanks.

Not so much cheaper (2, Insightful)

err666 (91660) | about 8 years ago | (#15817371)

Ok, so the Coraid people are selling their ATA over Ethernet 15-slot version for $3,995.00. That's apparently around EUR 3133. I can get something proven and iSCSI-based from Promise here in Germany for EUR 4.499 (a Promise M500i). Ok, that is almost 50 percent more expensive, but the iSCSI solution is supposed to work under all operating systems (Linux, *BSD, Windows, etc.) more or less out of the box, while with AoE you have to buy drivers for Windows, and support for other operating systems is generally worse.

Now, suppose you will really use this baby and you want to have *lots* of storage.

So you buy 15 SATA drives, like say Seagate ST3750640NS for EUR 444 each. Now the difference between AoE and iSCSI becomes less:

AoE solution: EUR 9793
iSCSI solution: EUR 11159

Now the iSCSI solution is only 14 % more expensive.

So for me the clear choice would be the "safe" path of something proven and widely supported like iSCSI instead of AoE. The infrastructure you need will be the same anyway (Gigabit Ethernet NICs, a Gigabit Ethernet switch).
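The arithmetic above checks out; as a quick sanity check with the prices quoted in the post (all in EUR):

```shell
aoe_chassis=3133    # $3,995 Coraid shelf, converted to EUR
iscsi_chassis=4499  # Promise M500i
disk=444; n=15      # 15x Seagate ST3750640NS

aoe_total=$(( aoe_chassis + n * disk ))
iscsi_total=$(( iscsi_chassis + n * disk ))
# integer percentage premium of the iSCSI build, rounded to nearest
premium=$(( ( (iscsi_total - aoe_total) * 100 + aoe_total / 2 ) / aoe_total ))
echo "$aoe_total $iscsi_total $premium"   # → 9793 11159 14
```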

Maybe I dont get it... (1)

segfaultcoredump (226031) | about 8 years ago | (#15817434)

From reading the news article, it seems that they are selling $3,995 ATA-to-Ethernet converters (disks sold separately). Each box will hold 15 drives and offers a simple RAID controller inside. It still has the same performance issues as iSCSI (a bit lower overhead, but not by much).

I don't see what the point is, other than the fact that they are offering yet another transport protocol. Given that one can install iSCSI target software on Linux/Solaris/Windows... what's the point? Anybody who read the article on Sun's 4500 (48 500GB drives with a dual-proc Opteron) knows that you can cheaply assemble a box with a ton of drives, slap the iSCSI target software on it and call it networked storage. Redundancy sold separately.

I've seen FC-to-SATA arrays that run around $10K for 6TB. And that's for a system with active/active controllers, mirrored cache and 4Gb FC interfaces. And these systems had the ability to expand to 120 drives on the same controllers (final price: around $1 per GB). I think that FC has a bad rep for being expensive due to companies like EMC and HP. If you know what to look for, you can actually get the stuff for a relatively low cost. Given the performance boost compared to iSCSI, I'm going to stick with FC for the time being. (I will still pay for the EMC kit when it makes sense from a support/reliability standpoint, but I also mix in things from companies like NexSAN when cost is more important.)

Re:Maybe I dont get it... (1)

Skreems (598317) | about 8 years ago | (#15817471)

Adding to that... I just don't see the point at all. I priced out a home-level file server and came out to $0.53 per gig, and that's including a backup drive to swap in should one of the RAID 5 array's drives fail. The rest of the hardware was SATA2, HyperTransport system bus, dual core... I wouldn't expect it to have any problems at all maxing out 3 or 4 gigabit NICs. So how is $1/GB all that great?

ATAoE is a crock, it's no better than iSCSI (3, Informative)

NekoXP (67564) | about 8 years ago | (#15817470)

So. Coraid has not, in a whole year, explained why iSCSI is somehow more expensive (disks + Linux kernel + network.. all the same) than their ATAoE implementation.

They'll give excuses about the cost of iSCSI hardware offload, but you don't need that. ATAoE is all software anyway; it's just a protocol over Ethernet rather than layered on top of TCP/IP.

What is wrong with using TCP/IP - which is already standard and reliable? Nothing. We know TCP/IP provides certain things for us.. resilience (through retransmits), and routing, are a good couple, and what about QoS?

ATAoE needs everything on the same network, close together; they've reimplemented the resilience themselves; and you can't use the common TCP checksum, segmentation and other offloads built into major Ethernet chipsets, because AoE sits a layer too low for them.

No point in it. Just trying to gain a niche. They could have implemented products around iSCSI, gotten the same performance with the same features, for the same price. Bunkum!

Re:ATAoE is a crock, it's no better than iSCSI (1)

dfghjk (711126) | about 8 years ago | (#15817755)

One of the problems with TCP is that it's very hard to make it go fast over 10GigE. All those issues become moot for AoE since it doesn't use TCP at all. Plain GigE isn't really interesting for attaching large numbers of disks.

Trouble is that I'd assume all 10GigE NICs will come with offload engines anyway, so there's no savings.

There is no functional problem with making the product non-routable. Servers need physical security and physical proximity. What you seem to think is a liability is not one.

On the next iSCSI: Linux - Killer! (1)

JoshDM (741866) | about 8 years ago | (#15817513)

Next week, Horatio and the team are called in to investigate when a relatively unknown l33t hax0r is murdered at a hip Miami internet cafe. It turns out he was an investigative aide working for Comcast and was monitoring bandwidth usage. As Horatio interviews the cafe's owner, another patron sneaks out of the cafe and crashes his RAID while installing Debian. However, the case gets more complicated when they discover another murdered victim inside the server.

Cheaper? (1)

KillerEggRoll (582521) | about 8 years ago | (#15817625)

"That's just fine with UCAR's Frederick. He installed a separate network on servers that connect to the Coraid backup device."
How is AoE cheaper? If you care about your storage infrastructure, then be prepared to spend some money and effort (dedicated SAN networks are always good!). If you want to penny-pinch, then use software-based initiators and targets. The target software will generally let you use any type of disk storage regardless of its native interface (ATA, SATA, SCSI, FC, they're all good!). If performance is an issue, forgo the expensive TOE NICs and use normal gigabit NICs. Set up the NIC driver so that interrupt coalescing is disabled and use >=9000 MTU Ethernet frames. Be prepared to pay the price in extra CPU cycles though...
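On Linux, the tuning suggested above looks roughly like this; the interface name is an assumption, and both the switch and the target must also be configured for jumbo frames:

```shell
ip link set eth1 mtu 9000               # jumbo frames on the storage NIC
ethtool -C eth1 rx-usecs 0 tx-usecs 0   # disable interrupt coalescing
```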

Bare boards? (1)

bhima (46039) | about 8 years ago | (#15817707)

I'd be far more interested in AoE and iSCSI if I could buy a few bare bridge boards to retrofit some RAID cages I have now.

I can't tell if this is clever or stupid... (2, Interesting)

YesIAmAScript (886271) | about 8 years ago | (#15817811)

A wise man once told me there is a fine line between them.

ATA is a crappy protocol, even when local. It's only good for squeezing that last $0.03 out of the controller cost. Once you are using Ethernet cables ($1) and links and PHYs on each end ($4 each), it makes a lot more sense to put some brains back in. Use SCSI. Heck, even the optical drive in your computer uses ATAPI, which is SCSI commands carried in packetized ATA transfers.

Also, I'm a bit nervous about the packet CRC validation being done in the Ethernet controller/layer itself. The problem is that if an Ethernet switch between you and the storage device stores packets and forwards them (as all smart switches do), it may also choose to regenerate the CRC on the way. If it corrupts the packet internally and generates a new, valid CRC for the new, corrupt packet, you have undetected corruption. I'd be a bit nervous about that for my hard drive.

I do think using GigE is a smart way to attach hard drives to servers. I look at the back of an Apple Xserve and see two GigE ports and a Fibre Channel card. Why can't one GigE port be used to attach to the network and one to the Xserve RAID? Why do I need a multi-hundred-dollar card to attach to the Xserve RAID when that GigE port is fast enough? It'd sure save a lot of cost, and hopefully reduce the price to the end user.

Anyway. I'm pro GigE attachment, not sure I'm for this AoE.

Re:I can't tell if this is clever or stupid... (1)

dfghjk (711126) | about 8 years ago | (#15817874)

ATA is a perfectly fine protocol for block storage and is much leaner than SCSI. The SCSI protocol was used for ATAPI because it already existed and was needed to support a wide variety of devices besides disks. It makes no sense to put your imaginary "brains" back in.

I'm pretty confident that you can prevent unintended data corruption. TCP/IP manages it so there's your proof of concept :)

GigE is not a good choice for disk attachment since it is easily outrun by a small number of disks. 10GigE is where it's at.

Apple is not the end-all-be-all of server manufacturers.

Ignoring many issues (0)

Anonymous Coward | about 8 years ago | (#15817837)

For small time users, I think this will catch on.

But you can't ignore a lot of the issues that high end storage systems address that this doesn't.
- Reliability
  - Availability
  - Smooth failure/degradation (when weird shit happens, how well does the device handle it while maintaining uptime?)
  - Multi-path I/O
  - Cache writing. ATA has a notorious problem in that drives cache and write things at their own will, at a level the operating system is not aware of, which makes for some potential disasters when you use the stuff in a transactional environment.
- Performance
  - Response time (a lot of people ignore this one)
  - Throughput
- TCO
  - How long do you spend fucking with a broken components?
  - How much money do you spend on replacing broken components?

For home users and environments where the admin really has nothing but time, yeah, pretty good setup. But for enterprise environments where the company you're working for cannot afford for its storage setup to have high downtime, no way. Enterprise storage systems are still leagues ahead of home brew software solutions because they throw lots of money at R&D, not just time and community testing.
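On the write-cache point above: the drive's volatile write cache can at least be disabled from Linux, at a real throughput cost. A sketch, with the device name an assumption:

```shell
hdparm -W0 /dev/sdb   # turn the drive's write cache off
hdparm -W  /dev/sdb   # with no value, query the current setting
```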

Not Routable? (1)

stereoroid (234317) | about 8 years ago | (#15817857)

If I'm reading the Wikipedia AoE article [wikipedia.org] right, AoE is a layer-2 protocol that cannot cross routers. That would immediately rule out the office I work in, where the floors and the data centre are on separate TCP/IP subnets. Small offices only, then?

But, as noted above, if they are claiming that they avoid the cost of ToE NICs for iSCSI, that's a spurious claim, since they are an optional performance enhancer, not a requirement for iSCSI. I've seen surprisingly decent performance without them, with the HP EVA iSCSI bolt-on from QLogic and ordinary Broadcom server NICs.

What about vblade? (1)

Mostly a lurker (634878) | about 8 years ago | (#15817867)

I cannot imagine buying the Coraid devices: as others have mentioned, the savings over iSCSI are too small and you risk single-vendor lock-in. However, I am intrigued by the possibilities provided by vblade. As I understand it, this module lets you turn a dirt-cheap Linux machine into an AoE controller for regular ATA/SATA disks. This would not replace FC-based SANs for latency-critical applications, but could apparently provide a very nice, low-cost backup device.

Does anyone here have experience with vblade? Is my understanding correct?
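For reference, a minimal vblade export looks roughly like this; the device, interface, and shelf/slot numbers here are assumptions:

```shell
# On the storage box: export /dev/sdb as shelf 0, slot 1 on eth0
vbladed 0 1 eth0 /dev/sdb

# On any client on the same Ethernet segment:
modprobe aoe        # load the kernel AoE initiator
ls /dev/etherd/     # the export shows up as e0.1
```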
