
Intel Announces Open Fibre Channel Over Ethernet

kdawson posted more than 6 years ago | from the getting-our-roughage dept.


sofar writes "Intel has just announced and released source code for their Open-FCoE project, which creates a transport allowing native Fibre Channel frames to travel over ordinary ethernet cables to any Linux system. This extremely interesting development will mean that data centers can lower costs and maintenance by reducing the amount of Fibre Channel equipment and cabling while still enjoying its benefits and performance. The new standard is backed by Cisco, Sun, IBM, EMC, Emulex, and a variety of others working in the storage field. The timing of this announcement comes as no surprise given the uptake of 10-Gb Ethernet in the data center."
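For those wondering what "native Fibre Channel frames over ordinary Ethernet" means on the wire: FCoE puts an FC frame inside an Ethernet frame tagged with the FCoE EtherType (0x8906) and lets layer-2 switches forward it like any other traffic. The sketch below is only illustrative -- the real encapsulation also carries a version field, reserved bytes and SOF/EOF delimiters, and the MAC addresses and helper name here are invented for the example.

import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame (simplified).

    The real FCoE header adds version/reserved fields and SOF/EOF
    delimiters around the FC frame; this only shows the layering idea.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

# A dummy 36-byte "FC frame" sent to a hypothetical FCoE target MAC
frame = encapsulate_fc_frame(bytes.fromhex("0efc00000001"),
                             bytes.fromhex("020000000001"),
                             b"\x00" * 36)
print(len(frame), "bytes on the wire")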


107 comments

This is nice and all (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21737614)

But is there really a discussion to be had? Technology improves slightly, news at 11.

Re:This is nice and all (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21737642)

You must be new here.

Fiber channel (5, Funny)

smitty_one_each (243267) | more than 6 years ago | (#21737630)

Fiber channel
In ye olde patch panel
Beats fiber thin
On your chinny-chin-chin
Burma Shave

Wrong! (0)

Anonymous Coward | more than 6 years ago | (#21742016)

It's Myanmar Shave now. But I agree, it just doesn't sound right any more.

Need target (1)

Wodin (33658) | more than 6 years ago | (#21737632)

This sounds quite cool, but I don't have any FC storage arrays or the "Fibre Channel Forwarder" they mention, so I would have to wait until they have the target written before being able to try it out.

Re:Need target (0)

Anonymous Coward | more than 6 years ago | (#21744594)

FTFA: "To reiterate, the initiator is working with the SCSI 2.6.24-rc tree (which is contained in the -upstream repository) and the SW Target works on a vanilla 2.6.23 tree."

Bumper cars. (0)

Anonymous Coward | more than 6 years ago | (#21737640)

Oh this should be interesting. Fibre over a collision-and-hold-off architecture.

Re:Bumper cars. (5, Funny)

morgan_greywolf (835522) | more than 6 years ago | (#21737750)

Oh this should be interesting. Fibre over a collision-and-hold-off architecture.
They have this newfangled technology. It's called switched Ethernet! It's amazing! With switched Ethernet, you get no collisions!

eth0 Link encap:Ethernet HWaddr 00:00:0D:03:01:04
                    inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0
                    inet6 addr: fe80::000:00f0:0043:0084/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                    RX packets:1781638 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:1651683 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:1000
                    RX bytes:803882935 (766.6 MiB) TX bytes:333706343 (318.2 MiB)
                    Interrupt:18 Base address:0xd800


(only the address details are fudged)

Re:Bumper cars. (2, Insightful)

cait56 (677299) | more than 6 years ago | (#21739030)

FCoE really does rely on "newfangled technology". More than switched Ethernet is required: it has to be an enhanced Ethernet that prevents virtually all congestion-related drops.

Work on such features is indeed in progress in both IEEE 802.1 and the IETF. The comparison of FCoE vs. iSCSI in those environments will be a lot more even than the comparisons presented by FCoE champions currently, which compare storage traffic that requires neither routing nor security, and test FCoE over forthcoming Ethernet vs. iSCSI over current Ethernet.

Those comparisons involve a lot more than wire transport protocols. For example, open-fcoe is a good start, but open-iscsi is a much more mature project.

Re:Bumper cars. (2, Funny)

Gothmolly (148874) | more than 6 years ago | (#21746776)

Dude, thats MY IP address !!!!

Re:Bumper cars. (1)

lpq (583377) | more than 6 years ago | (#21747844)

Someone labeled this 'funny' -- seems more "insightful" with "funny" as a 'side-effect'...

Re:Bumper cars. (1)

chuckbag (471448) | more than 6 years ago | (#21738812)

Yeah, no kidding. No one understands what _Best_Effort_Protocol_ means. (sigh)

Here is the truth:
If you carved the words "ethernet" on a stick and then smeared shit over it, people would stand in line to buy it.

Re:Bumper cars. (0)

Anonymous Coward | more than 6 years ago | (#21740204)

The E in FCoE is a misnomer; they require a modified, "lossless" Ethernet. By rights it ought to be named FC-over-modified-E. They think pause frames and FCoE-aware switches will do this.

I'm more interested in AoE (0, Interesting)

Anonymous Coward | more than 6 years ago | (#21737644)

That's not Age of Empires, but ATA over Ethernet, a lightweight protocol which would be great for network booting Windows. Does anyone know of a free AoE initiator for Windows XP? The etherboot project already has AoE capability in its gPXE stack: http://www.etherboot.org/ [etherboot.org]
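For the curious, "lightweight" is not an exaggeration: an AoE request is just an Ethernet frame (EtherType 0x88A2) with a ten-byte header in front of the ATA command. A rough sketch of that header, going from memory of the short AoE spec -- the field sizes and flag values should be double-checked against it before relying on this:

import struct

AOE_ETHERTYPE = 0x88A2  # EtherType used by ATA over Ethernet

def aoe_header(major: int, minor: int, command: int, tag: int) -> bytes:
    """Build the common AoE header that follows the Ethernet header.

    Fields: version/flags, error, shelf address ("major", 16 bits),
    slot address ("minor", 8 bits), command, and a 32-bit tag used to
    match responses to their requests.
    """
    ver_flags = 0x10  # version 1 in the high nibble, no flags set
    return struct.pack("!BBHBBI", ver_flags, 0, major, minor, command, tag)

# Shelf 0, slot 1, ATA command code 0, request tag 42
print(aoe_header(0, 1, 0, 42).hex())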

Re:I'm more interested in AoE (1)

extremescholar (714216) | more than 6 years ago | (#21737888)

I agree that AoE is the best way to go. You can get SAN performance without all the headache. I don't know of a Windows AoE initiator, but then again, except for my job and for gaming, I don't use Windows.

Re:I'm more interested in AoE (5, Funny)

Hal_Porter (817932) | more than 6 years ago | (#21738116)

You should give it a snappier name like Serial ATA Networking, or SATAN. Lots of interesting logo possibilities in that, and it'll be funny watching 'technology evangelist' types stutter, sweat and mumble when they give PowerPoint presentations to born again potential customers.

Re:I'm more interested in AoE (2, Funny)

jmpnz (533304) | more than 6 years ago | (#21739348)

Then we'd need an option to add High Availability Internetworked Logical-units to Serial ATA Networks.

Re:I'm more interested in AoE (1)

jd (1658) | more than 6 years ago | (#21740946)

But that only works correctly for Holistic Ethernet Link-Layer data centers providing Basic Realtime Infrastructure Management Support Technology Over Nextgen Ethernet.

Re:I'm more interested in AoE (1)

sumdumass (711423) | more than 6 years ago | (#21748018)

I'm surprised that you couldn't work Fire in there somehow. Good job though.

Re:I'm more interested in AoE (1)

jd (1658) | more than 6 years ago | (#21740884)

That acronym is already used by a security scanner. (Which had a patch which renamed it SANTA for evangelical network admins.)

Re:I'm more interested in AoE (2, Funny)

psydeshow (154300) | more than 6 years ago | (#21740994)

FreeBSD is sometimes a tough sell to religious groups because of the devil mascot.

"You want to put... a demon? On our server?"
"Daemon, it's a daemon."
"..."

Re:I'm more interested in AoE (1)

sumdumass (711423) | more than 6 years ago | (#21748002)

SATAN is doomed from the start. Too many people will think of the older hack tool/password cracker.

There is something about existing names that are already looked down on that would probably cause them to overlook that possibility for a name.

Re:I'm more interested in AoE (0)

Anonymous Coward | more than 6 years ago | (#21739360)

How is ATA over ethernet offtopic in a discussion about a way of migrating SAN technology to ethernet?

Re:I'm more interested in AoE (2, Insightful)

Intron (870560) | more than 6 years ago | (#21739684)

How is ATA over ethernet offtopic in a discussion about a way of migrating SAN technology to ethernet?

ATA, SATA and SAS all have severe connectivity limits. They don't have a way of addressing a large number of devices, running long distances, or supporting multiple initiators. While they might be fine for your home, they are worthless for the SAN/LAN environment where Fibre Channel and FCoE are targeted.

Re:I'm more interested in AoE (0)

Anonymous Coward | more than 6 years ago | (#21740002)

ATA is a register set; there are no targets in ATA. OTOH, AoE can address 65535*254 targets. Targets themselves may be a collection of disks in a RAID.

Re:I'm more interested in AoE (0)

Anonymous Coward | more than 6 years ago | (#21745124)

AoE can address 65535*254 targets.

No it can't. Manually setting DIP switches to unique values on 65,000 storage units is not a workable solution. What happens when two units have the same value? **Undefined**

Re:I'm more interested in AoE (0)

Anonymous Coward | more than 6 years ago | (#21746384)

SAS is pretty damn good. OK, it's a DAS technology, but with 16k device support, multiple initiators, and dual-port devices for failover (most I've seen have two ports active/active), it's a huge leap on from your old SCSI-320 cable.

Sure, if you want to go long distances it won't do, but then you buy a box which allows connectivity via FC or iSCSI.

Speed? (4, Informative)

Chrisq (894406) | more than 6 years ago | (#21737648)

As far as I can see this is a way of bridging fibre channels over Ethernet. This does not necessarily mean that you will get fibre-like speed (throughput or latency). I am sure that this will have some use, but it does not mean that high performance data-centres will just be able to use Ethernet instead of fibre.

Re:Speed? (2, Informative)

farkus888 (1103903) | more than 6 years ago | (#21737736)

I am not too sure about the latency, but I don't know of any storage solution that can saturate 10 Gb sustained, except maybe something like a multi-gigabyte array of RAM as an HD system. Simply reducing the number of drives on a daisy chain should keep you running happily as far as throughput goes.

Re:Speed? (2, Informative)

afidel (530433) | more than 6 years ago | (#21737810)

Huh? Our piddly 150 spindle SAN could keep a 10Gb link saturated no problem if we had a workload that needed it. In fact that's less than 7MB/s per drive, about one tenth of what these drives are capable of for bulk reads or about one fifth for bulk writes. Even for totally random I/O 150 spindles could probably keep that sized pipe filled.
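The back-of-the-envelope behind that per-drive figure, assuming roughly 1 GB/s of usable payload on a 10Gb link once framing and protocol overhead are paid for:

# Rough per-spindle share of a 10Gb link (the 1 GB/s usable figure is an assumption)
usable_bytes_per_sec = 1e9
spindles = 150

per_drive_mb_s = usable_bytes_per_sec / spindles / 1e6
print(f"~{per_drive_mb_s:.1f} MB/s per drive")  # ~6.7 MB/s, far below a modern disk's streaming rate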

Re:Speed? (1)

farkus888 (1103903) | more than 6 years ago | (#21737986)

I didn't think anyone intended this for the main pipes of a SAN. I thought it was for bringing those individual spindles back to the FC switch, then using fiber for the big links. Though to be honest my understanding of FC SANs is poor; I am honestly posting more to get someone to explain it to me because I said something wrong than to try to enlighten others.

Re:Speed? (1)

afidel (530433) | more than 6 years ago | (#21738166)

Nah, I see this as a lower-cost way to distribute from the FC switch back to the storage users, e.g. the servers. Most storage is also presented by some kind of storage array; very, very little is JBOD presented directly by a switch. This is mostly due to the lack of management of JBOD, as well as the fact that the performance improvement of placing a bunch of intelligent cache in front of the storage pool is huge.

Re:Speed? (1)

Intron (870560) | more than 6 years ago | (#21739320)

No big company wants to maintain two separate optical fiber networks, so you either get Ethernet or Fibre Channel for your long runs. Since its inception, you have been able to run Ethernet over Fibre Channel, but almost nobody uses expensive technology (FC) when they can use cheap technology (Ethernet). Alternatively, you can run iSCSI, which is serial SCSI over Ethernet. FCoE lets you bridge two remote Fibre Channel SANs or connect to remote Fibre Channel storage using Ethernet without having to convert to iSCSI in between.

Re:Speed? (2, Interesting)

shaggy43 (21472) | more than 6 years ago | (#21737842)

I have an account that I support that has completely saturated 4 4G ISLs in between 2 Brocade 48k's, and had to re-balance their fibre. Granted, an individual HBA doesn't hit a sustained 2G/sec, but 16G/sec saturated to a pair of HDS Thunders is impressive.

Mostly, I think this technology will compete against iSCSI, not dedicated fibre, with all the drawbacks -- plus an added drawback of currently being single-platform.

Re:Speed? (1)

onemorechip (816444) | more than 6 years ago | (#21742376)

Single-platform? OpenFCoE may be a Linux software initiative, but I think the T11 FCoE standard [wikipedia.org] on which it is based is being developed for all modern major server platforms.

Re:Speed? (4, Informative)

afidel (530433) | more than 6 years ago | (#21737780)

According to this NetApp paper [netapp.com], even NFS over 10GbE is lower latency than 4Gb FC. I imagine if the processing overhead isn't too high, or offload cards become available, then this would be significantly faster than 4Gb FC. As far as bandwidth goes, 10>4 even with the overhead of Ethernet framing, especially if you can stand the latency of packing two or more FC frames into an Ethernet jumbo frame.
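On the framing point: a full-sized FC frame is about 2148 bytes (24-byte header plus up to 2112 bytes of payload plus delimiters), so it doesn't fit in a standard 1500-byte Ethernet frame at all -- which is why FCoE needs at least "baby jumbo" frames. The arithmetic below is a rough sketch with a hand-waved per-frame encapsulation overhead; note that the actual FCoE spec maps exactly one FC frame per Ethernet frame rather than packing several.

FC_FRAME_MAX = 2148   # approximate ceiling: header + 2112-byte payload + CRC/delimiters
FCOE_OVERHEAD = 18    # rough allowance for FCoE encapsulation per FC frame (assumption)

for mtu in (1500, 9000):
    fits = mtu // (FC_FRAME_MAX + FCOE_OVERHEAD)
    print(f"MTU {mtu}: room for {fits} full-sized FC frame(s)")
# MTU 1500: 0, MTU 9000: 4 -- hence the mini-jumbo requirement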

Re:Speed? (3, Informative)

Intron (870560) | more than 6 years ago | (#21739406)

Umm. The paper says that the test load was sequential reads entirely cached in memory, so not exactly an unbiased test.

Re:Speed? (1)

fatovich (770355) | more than 6 years ago | (#21747044)

Why, of course -- if you wanted to test the speed of the fabric rather than the speed of the RAID array, wouldn't you prefer to read from cache rather than disk?

Re:Speed? (1)

Phishcast (673016) | more than 6 years ago | (#21739644)

Just thought I'd point out that NFS (NAS) is NetApp's bread and butter. They've been saying NFS is as good as block storage over Fibre Channel forever, and not everyone agrees. Their claim may or may not be true, but this coming from NetApp should be scrutinized in the same way as a statement from Microsoft saying how much lower their TCO is compared to Linux. Storage vendors are well skilled at spin.

Re:Speed? (1)

Limited Vision (234684) | more than 6 years ago | (#21740700)

10 years ago, NetApp put out white papers saying that they could make Oracle run over NFS. Could you? Sure. Would you, if you wanted to keep your job? No.

Re:Speed? (1)

saleenS281 (859657) | more than 6 years ago | (#21739652)

8Gb FC will be out long before 10Gb ethernet becomes reasonably priced.

Re:Speed? (1)

Znork (31774) | more than 6 years ago | (#21740386)

"8Gb FC will be out long before 10Gb ethernet becomes reasonably priced."

You mean, 8Gb FC will be out long before 100 Gb ethernet becomes reasonably priced.

10 Gb ethernet is already reasonably priced (compared to FC).

Re:Speed? (1)

Limited Vision (234684) | more than 6 years ago | (#21740734)

10 GbE 10 Gb FCoE.

FCoE is about making Ethernet more like Fibre Channel.

Re:Speed? (1)

Limited Vision (234684) | more than 6 years ago | (#21740764)

That should read:

"10 GbE does not equal 10 Gb FCoE."

Re:Speed? (1)

Wesley Felter (138342) | more than 6 years ago | (#21744144)

"10 GbE does not equal 10 Gb FCoE."

It does when you are doing FCoE in software, which is what this thread is about. Sure, the vendors would like to sell you specialized FCoE cards which will end up costing the same as an Ethernet NIC and an FC HBA put together, but you don't have to buy them.

Re:Speed? (0)

Anonymous Coward | more than 6 years ago | (#21747048)

But you do need a 10G switch that supports pause. Preferably per-priority pause, if you want to run both TCP/IP and FCoE over the same cable.

Re:Speed? (1)

afidel (530433) | more than 6 years ago | (#21741832)

10GbE is reasonable today. Quadrics has a 96-port switch for only $300 per port, and adapters are only $1K (e.g. NC510C from HP or the PXLA8591CX4 from Intel). Sure, you can get 2Gb FC for around this same price point, but 4Gb is significantly more. Brocade wants $500 per port with no SFPs and only 32 of the 64 ports enabled for the SilkWorm 4900; fully configured you're at greater than $1,500 per port. QLogic is similar for the SANbox 9000.

FCoE is NOT a FC replacement. (1)

ToasterMonkey (467067) | more than 6 years ago | (#21744472)

What do you mean "10GbE is lower latency than 4Gb FC"? That's a bit apples-to-oranges, isn't it?
You do realize that 10Gb FC is also available, and NetApp has a conflict of interest? FCoE isn't going to do jack for NetApp's NAS equipment.

I imagine if the processing overhead isn't too high or offload cards become available then this would be significantly faster than 4Gb FC
It won't have FC's other performance characteristics, and that's a lot of expensive ifs before even getting close.

if you can stand the latency of packing two or more FC frames into an ethernet jumbo frames.
If you could stand the latency, then why on Earth would you be using FC to begin with?

FCoE isn't going to replace FC where FC is needed. It will only make connecting (Ethernet) things to an FC SAN easier. This is actually about bringing Ethernet INTO a Fibre Channel fabric.
It also requires new FC (FCoE-capable) switches, and will eventually mean that new FCoE-aware Ethernet switches are made. Go to www.t11.org [t11.org] and look up the specs yourself. You're looking at a possible future of (FC)Storage -> (FC/FCoE)Fabric -> (FC/FCoE)Clients, not Ether, Ether, Ether.

That, in case you were wondering, is why FCoE has such broad vendor support even from companies that rely on FC.

Re:Speed? (1)

Chris Snook (872473) | more than 6 years ago | (#21737970)

Why not? Switched fabric topology has no inherent latency benefit over star topology, and the majority of servers in a data center aren't doing anything that needs any more sophisticated throughput aggregation than 802.3ad (LACP) bonding will give you. As long as you have pause frame support (a prerequisite for this FCoE implementation) you can create a lossless Ethernet network, which eliminates the need for much of the protocol overhead of something like iSCSI, as long as you're staying on the LAN.

FCoE seems to fill a niche somewhere between conventional Fibre Channel and iSCSI. Intel is betting that it's a fairly large niche, and will drive a lot of 10 Gb ethernet sales, and I think they're right.
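For readers unfamiliar with the term, an 802.3x pause frame is just a tiny MAC-control frame sent to a reserved multicast address asking the link partner to stop transmitting for a number of 512-bit-time quanta -- that's the "lossless Ethernet" trick at its simplest. An illustrative sketch only (in practice the NIC and switch generate these themselves; nobody crafts them by hand):

import struct

PAUSE_DST   = bytes.fromhex("0180C2000001")  # reserved MAC-control multicast address
MAC_CONTROL = 0x8808                         # MAC control EtherType
PAUSE_OP    = 0x0001                         # PAUSE opcode

def pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame asking the peer to stop sending.

    quanta is the pause time in units of 512 bit times; 0 means "resume".
    Zero padding brings the frame up to the 60-byte minimum (the NIC
    appends the CRC).
    """
    frame = PAUSE_DST + src_mac + struct.pack("!HHH", MAC_CONTROL, PAUSE_OP, quanta)
    return frame + b"\x00" * (60 - len(frame))

print(len(pause_frame(bytes(6), 0xFFFF)), "byte pause frame")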

Re:Speed? (0)

canuck57 (662392) | more than 6 years ago | (#21738142)

As far as I can see this is a way of bridging fibre channels over Ethernet. This does not necessarily mean that you will get fibre-like speed (throughput or latency). I am sure that this will have some use, but it does not mean that high performance data-centres will just be able to use Ethernet instead of fibre.

To me, Fibre Channel SAN solutions are oversold. It raises the cost per GB/TB much higher than if you just put all the drives a system needs in it right off. Direct attached storage (no switches of any kind) is still faster than going through 1-5 switches to get to it. That is, with Fibre Channel on a large scale you already have these latencies, and they will also exist on Ethernet. But Ethernet is cheaper and nothing special to learn.

I have only seen one use for Fibre Channel solutions that made sense. You have a very large DB (64 big disks or more), and you want to stay online and do the split-mirroring thing to get a consistent image for backup/failover. But even then, with DB replication you could avoid this too.

And if a system only needs lots of storage, it should be able to justify its own storage unit and directly attach it. Thus no switch layering and better performance. Easier and less expensive to manage, and less to go wrong. Ever seen a fabric switch's microcode go nuts? Not pretty. It could happen with an Ethernet switch too, but my bet is the Ethernet switch is better tested and more people know how to support it.

But if a server needs 2TB of file storage and can be backed up live, why put it on latency-laden Fibre Channel? Why not just put four 1TB disks in it, RAID 0+1, and call it a day?

I get a kick out of shops putting systems that only need a TB of storage on a mega SAN because "it is the way we do it". To me it makes more sense to http to a storage box, give it a permanent IP, allocate 20TB of disk, and let it run over 10000BT or faster.

We grossly underestimate the complexities and the frailties of having an entire data center's storage on one giant SAN. If you're looking to the future, look at the Google file system they wrote.

Google has an Ethernet-based, network-redundant file system I wish they would toss to Linux as open source. Then if your corp had 8000 desktops running Linux, you could conservatively put 100GB on each for an extra 800TB of inexpensive disk. I just need to find such an insightful company that will hire me, as that would be a cool project.

Re:Speed? (2, Interesting)

jsailor (255868) | more than 6 years ago | (#21738494)

For that type of project, look to the hedge fund community. I know of 2 hedge funds that have built their own storage systems that way: Ethernet, Linux, direct attached disk, and a lot of custom code. My world doesn't allow me to get into the details, so I can't elaborate. My only point is that there are folks doing this, and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.

Re:Speed? (2, Interesting)

canuck57 (662392) | more than 6 years ago | (#21738620)

My only point is that there are folks doing this, and it tends to be the guys with large storage needs, moderate budgets, and a great deal of freedom from corporate standards and vendor influence.

Stay with them, these are good environments. BTW, I am not anti-standards, but at the end of the day they need to make sense. That is, not a standard for pure political posturing.

Re:Speed? (1)

nilbud (1155087) | more than 6 years ago | (#21738798)

If you cut a little notch in the side of a 5.25" floppy you can flip them over and double the storage capacity.

Re:Speed? (1)

dave562 (969951) | more than 6 years ago | (#21742450)

I might have missed it in the logic you presented, but where do you account for hardware failure? If your box with the direct attached storage goes down then all access to that data ceases. If you're running a SAN with the app servers clustered, when you lose one of the boxes the cluster fails over and your users still have access to the data.

Easier and less expensive to manage and less to go wrong.

When "less" becomes a single point of failure you have problems. In this day and age you have to assume that something is going to go wrong and then setup systems that fail gracefully.

Re:Speed? (1)

complete loony (663508) | more than 6 years ago | (#21738386)

Whatever happened to ATAoE? Wasn't that supposed to be the cheap equivalent to iSCSI / Fibre Channel?

More to the point, how difficult and expensive would it be to build a chip to interface between FCoE and a SATA drive?

I'm still hoping for a cheap consumer solution for attaching drives directly to the network.

Re:Speed? (1)

jabuzz (182671) | more than 6 years ago | (#21738516)

Want to represent your tape drive using AoE? Sorry, you are out of luck. FCoE offers all the benefits of AoE (i.e. using cheap Ethernet, with no TCP/IP overhead) but with the flexibility to do stuff other than SATA drives.

Re:Speed? (1)

jcnnghm (538570) | more than 6 years ago | (#21739328)

We have a SR1521 [coraid.com] , and it seems to do its job pretty well. It provides lots (over 7TB) of cheap storage space to the network. It probably isn't as fast as some other solutions, but our application doesn't need it to be.

Re:Speed? (1)

Joe_NoOne (48818) | more than 6 years ago | (#21739732)

The problem with those protocols (ATAoE, etc.) is that with Ethernet it's only a broadcast domain -- no segmentation or isolation (i.e. any host could connect to any storage). FC Protocol [wikipedia.org] is similar to TCP/IP but much more efficient and better suited to storage.

Re:Speed? (0)

Anonymous Coward | more than 6 years ago | (#21741886)

> "lower costs ... while still enjoying its benefits and performance."

Step right up, get something for nothing! Seriously though, performance of storage technologies is typically dependent on the types of loads. The trick is just optimizing the storage and transport to the file mix and quality-of-service needs.

For example, if you want to transmit a million little files, sending them over the network to disk is fastest.

But if you want to send bigger files, say over 1GB, then nothing beats streaming to tape (over SCSI or SAN). High-speed tape devices, such as LTO3, can actually write data faster than disk can read. If you want to boggle your mind about throughput rates, look at the latest (LTO4) I/O specs http://www.lto.org/technology/udata.php [lto.org]
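For a sense of scale on those tape rates (approximate native, uncompressed figures; the vendor-quoted compressed numbers are roughly double, and the exact specs are on the lto.org page linked above):

# Approximate native (uncompressed) streaming rates, MB/s
native_rate = {"LTO-3": 80, "LTO-4": 120}
file_mb = 1024  # a 1 GB file

for drive, rate in native_rate.items():
    print(f"{drive}: ~{file_mb / rate:.0f} s to stream a 1 GB file, if the source can keep the drive fed")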

So can someone say "TCP/IP is faster than SAN"? Yes, in some cases. Can you also say "SAN is faster than TCP/IP"? Yes, for big workloads, absolutely.

Then there is quality-of-service. The SAN folks have had years to implement stuff like "automatic path failover" in configurations like:
  - Dual FC HBAs on system for disk access paths, and dual HBAs for tape.
  - Dual SAN switches (or more).
  - Each tape/disk device is dual-attached to SAN.

Then, when any component in the SAN path fails, the alternate path automatically comes online; managed at the device-driver layer so the failover is transparent to the OS. In other words, "where-ever your data resides, you can always access it".

Now I'm not saying that FCoE will never have these capabilities, or that these capabilities are necessary in all environments, but if you really care about accessing your data, you might want a solution that the SCSI/SAN vendors have spent 10 years debugging and enhancing.

Full disclosure: I support backup/restore software which is hardware-neutral, so I'm not in favor of (or against) any particular connectivity, transport, or storage solution; but I've sure seen a lot of things that "are supposed" to work take some years to fine-tune.

10GE is a heck of a lot cheaper (5, Informative)

afidel (530433) | more than 6 years ago | (#21737690)

As long as a server is within the distance limit of copper, 10GE is about 3-4x cheaper than even 2Gb FC. We've also had a heck of a lot more stability out of our 6500 series switches than we have out of our 9140's, and the 9500's are extremely expensive if you have a need for under 3 cards' worth of ports.

Re:10GE is a heck of a lot cheaper (0)

Anonymous Coward | more than 6 years ago | (#21738592)

If you wanted stability you went with the wrong brand. Cisco is new to the FC market. It will take them some time to get their product stable. Heck they are still working on getting some features in IOS stable.

I also call FUD on the price difference between 2 or 4 Gb FC and 10 GigE. I need some time to do some digging, but I bet 10GigE is more expensive than 2 Gb FC.

Re:10GE is a heck of a lot cheaper (0)

Anonymous Coward | more than 6 years ago | (#21745276)

Hi. I know who you are. Guess who I am?

Re:10GE is a heck of a lot cheaper (1)

statemachine (840641) | more than 6 years ago | (#21746666)

We've also had a heck of a lot more stability out of our 6500 series switches than we have out of our 9140's, and the 9500's are extremely expensive if you have a need for under 3 cards' worth of ports.

1) Why don't you just direct connect since you only have 3 HBAs?

2) At least compare it to a 9120 or 9124 (which has 8-port licenses). Anyone knows that a 9140 (40 ports) and a 9506 (a director with 4 FC card slots) is way overkill for what you describe.

I'd say that at the very least, you're misinformed as to what's out there.

High End customers will not go to this. (3, Interesting)

BrianHursey (738430) | more than 6 years ago | (#21737744)

As we have seen with iSCSI, the bandwidth capability over Ethernet just is not there. As someone who works with EMC gear, I think this will probably be great for the low-end company that needs a mid-tier or low-tier environment. However, large corporations with large databases and high numbers of systems still need to stay with fibre fabrics. This will probably only be on the mid-tier platforms like CLARiiON.

Re:High End customers will not go to this. (4, Interesting)

totally bogus dude (1040246) | more than 6 years ago | (#21737904)

I expect you're right, but it's interesting to note they're referring to this as Fibre Channel over Ethernet, and not over IP. The reduction in overhead there (not just packet size, but avoiding the whole IP stack) might be enough to really help; and if you're running separate 10 Gigabit Ethernet for the storage subsystem (i.e. not piggybacking on an existing IP network) it might be really nice. Or at least, comparable in performance and a heck of a lot cheaper.

On the other hand, really decent switches that can cope with heavy usage of 10-GigE without delaying packets at all aren't going to be massively cheap, and you'd need very high quality NICs in all the servers as well. Even then, fibre's still probably going to be faster than copper... but that's just something I made up. Maybe someone who knows more about the intricacies of transmitting data over each can enlighten us all?

There was recently an article about "storing" data within fibre as sound rather than converting it for storage in electrical components, since the latter is kind of slow; how does this compare to transmission via current over copper?

Re:High End customers will not go to this. (4, Informative)

afidel (530433) | more than 6 years ago | (#21738046)

Latency and bandwidth are comparable for copper and fiber Ethernet solutions today; the drawback to copper is you need to be within 15m of the switch. This isn't so bad in a small datacenter, but in a larger facility you would either need switches all over the place (preferably in twos for redundant paths) or you'd need to go fiber, which eliminates a good percentage of the cost savings. Fibre Channel used to have copper as a low-cost option, but it's not there in the 4Gb world, and even in the 2Gb space it was so exotic that there were almost no cost savings due to lack of economies of scale.

Re:High End customers will not go to this. (1)

BrianHursey (738430) | more than 6 years ago | (#21738340)

The 15m limit would be a drawback for most data centers and DR sites, where they use mid- and long-distance solutions for DR and business continuity configurations, like SRDF and SAN Copy. Again... this would be great for mid- and low-tier configurations, creating the capability of implementing low-cost SAN configurations. I know SAN capabilities with fibre. My only experience with Ethernet configurations is low-end iSCSI configurations; most current iSCSI configurations do not use fibre drives but SATA drives, and I have not been impressed.

Re:High End customers will not go to this. (1)

guruevi (827432) | more than 6 years ago | (#21738368)

FibreChannel has a lot of copper in a lot of installations; all you need to do is get an SFP module that terminates copper instead of fiber optics. Especially for direct connects between servers and storage (Apple XRAID and Dell solutions, for example) or direct connects between switches and storage in the same rack. The interconnects for large SANs (between switches and backbones) are usually fiber, though. Fiber is very expensive, and the SFPs themselves are not cheap; neither are the switches, so any cost savings are good.

A copper transceiver is ~$90 for 3m (10ft) including SFP modules, while fiber solutions easily cost the same for a single SFP module (you need two, plus cable).

Re:High End customers will not go to this. (1)

Joe_NoOne (48818) | more than 6 years ago | (#21739814)

Fibre Channel is a protocol, not a cable (that's why it's not spelled Fiber). In fact, high-end systems DO have copper-based FC connections. They are great for shorter runs -- the EMC CLARiiON uses copper cables between its disk shelves to interconnect them. The CX3 series is running 4Gb end-to-end with no issues over the copper interconnects.

Re:High End customers will not go to this. (1)

eth1 (94901) | more than 6 years ago | (#21740348)

My company has several large data centers. While the network portion is generally separated from the server portion, so that two servers next to each other in a rack might talk to each other via a switch 25m away, the SAN racks and the servers that use them are usually fairly close to each other. There's no reason why an off-net storage switch couldn't be located in the SAN rack and connected directly for most installations.

Granted, you do lose some placement flexibility, which might be a deal-breaker in some situations, but if it's significantly faster or cheaper it might be popular.

Re:High End customers will not go to this. (1)

BrianHursey (738430) | more than 6 years ago | (#21741544)

I just talked with some colleagues; I was incorrect. The media is not the limit; the limit is the actual protocol. So using copper vs. fiber makes no difference -- in some cases copper can be faster. However, long-distance solutions are another story.

Re:High End customers will not go to this. (0)

Anonymous Coward | more than 6 years ago | (#21739512)

You are absolutely correct. FCoE is Fibre Channel over Ethernet, not Fibre Channel over TCP/IP. The Ethernet protocol is in the process of being updated to add additional features for FCoE, such as a pause function. The committee is working toward making Ethernet a lossless protocol, and may already be there. The IP stack is avoided altogether.

The whole idea is to have a single data path to a server. Both IP and FC on the same wire. The switch at the other end will have to be able to differentiate between the two protocols and switch them properly. But it will be expensive to start with.

Re:High End customers will not go to this. (1)

teknopurge (199509) | more than 6 years ago | (#21738200)

(4) bonded NICs/server
(1) Procurve gigE switch w/Jumbo frames turned on
(many) SAS drives

and we can, in production, have 4Gb throughput on our iSCSI SAN.

Tell me again where this "throughput" is hiding?

Regards,

Re:High End customers will not go to this. (3, Insightful)

Chris Snook (872473) | more than 6 years ago | (#21738222)

Bullshit.

The bandwidth is there. I can get 960 Mb/s sustained application-layer throughput out of a gigabit ethernet connection. When you have pause frame support and managed layer 3 switches, you can strip away the protocol overhead of iSCSI, and keep the reliability and flexibility in a typical data center.

The goal of this project is not to replace fibre channel fabrics, but rather to extend them. For every large database server at your High End customer, there are dozens of smaller boxes that would greatly benefit from centralized disk storage, but for which the cost of conventional FC would negate the benefit. As you've noted, iSCSI isn't always a suitable option.

You're probably right that people won't use this a whole lot to connect to super-high-end disk arrays, but once you hook up an FCoE bridge to your network, you have the flexibility to do whatever you want with it. In some cases, the cost benefit of 10Gb ethernet vs. 2x 4Gb FC alone will be enough motivation to use it even for very high-end work.
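The 960 Mb/s figure is about what the framing arithmetic allows on gigabit Ethernet. A rough efficiency calculation, assuming UDP-sized headers (TCP is similar) and ignoring higher-layer protocol overhead and retransmits:

# Per-frame cost on the wire: preamble (8) + Ethernet header (14) + CRC (4) + inter-frame gap (12)
WIRE_OVERHEAD = 8 + 14 + 4 + 12
IP_UDP_HEADERS = 20 + 8

for mtu in (1500, 9000):
    payload = mtu - IP_UDP_HEADERS
    efficiency = payload / (mtu + WIRE_OVERHEAD)
    print(f"MTU {mtu}: ~{efficiency * 1000:.0f} Mb/s of payload on a 1000 Mb/s link")
# MTU 1500 -> ~957 Mb/s, MTU 9000 -> ~993 Mb/s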

Re:High End customers will not go to this. (1)

Huh? (105485) | more than 6 years ago | (#21738912)

I've never heard of anyone getting 960Mb/s with iSCSI out of a single gig ethernet link.

Re:High End customers will not go to this. (1)

Chris Snook (872473) | more than 6 years ago | (#21740364)

Oops, I was vague. My results were with UDP NFS, which is much simpler to tune. As you noted in your reply, it's possible to tune iSCSI to similar performance levels, but doing so without sacrificing latency is rather difficult. My point was that simpler protocols (like FCoE) make it much easier to get the most out of the hardware.

For what it's worth, the NFS server in my testing was using Fibre Channel storage.

Re:High End customers will not go to this. (1)

Znork (31774) | more than 6 years ago | (#21740584)

I max out at 960Mb/s with iSCSI over gigabit on $15 Realtek cards with a $150 D-Link switch, with out-of-the-box iSCSI Enterprise Target software on Linux, to a client running Open-iSCSI (eh, or whatever it is that's shipped in Red Hat by default). Over substandard cabling, on top of that. (Fer sure, by then the iSCSI server has cached the data in-mem, but anyway.)

So I'd really have to wonder what anyone failing to get that is running. I hope they're not paying for it.

Sure, non-cached performance against the IDE and SATA disks backing it isn't much better than 400-600Mb/s, but since I started using iSCSI for my home SAN I have come to the conclusion that the oft-repeated mantra about overhead and latency related to IP and Ethernet is FC SAN salesman-driven utter shite.

Re:High End customers will not go to this. (2, Interesting)

myz24 (256948) | more than 6 years ago | (#21742018)

I think you could follow up with some info about your setup. I mean, there is no way you're getting those speeds without tuning some network parameters or without some serious CPU and RAID setup. It's not that I don't believe you; I have a buddy who has done the same, but with NFS, and he's using an OpenSolaris system with TCP offloading cards and a heck of a RAID array.

Re:High End customers will not go to this. (2, Interesting)

Znork (31774) | more than 6 years ago | (#21743022)

"those speeds without tuning some network parameters or with some serious CPU and RAID setup."

Basic setup is approximately this: CPUs for both servers and clients range from AMD XP 3500+ to AMD X2 4800+. Motherboards are Asus (Nvidia 550 and AMD690) boards, with 2-4GB memory plus an extra SATA card on the iSCSI servers, and extra rtl8168/9 gigabit cards (the forcedeth driver has some issues). Disks on the iSCSI servers are striped with LVM, but not to more than 3 spindles (I don't care that much about maxing speed, I just want close-to-local-disk performance). Tuning's been mainly on the iSCSI side with InitialR2T and ImmediateData. I've played around with network buffers, but basically come to the conclusion that it's more efficient in my case to throw RAM at the problem.

The peak rates (90-97MB per sec) have been obtained on completely unloaded systems, and the standard 40-60MB/s read is with disk mirrored against both iSCSI systems (and then striped on the iSCSI systems).

"I have a buddy that has done the same but with NFS"

Ah. Ehm. NFS. Yes.

Well, to tell the truth, I started out this interesting journey towards a home SAN using diskless systems booted over PXE and mounting NFS roots (mainly to silence my MythTV systems, and simplify backups). Let's just say that after testing and switching the first system over to iSCSI I could barely believe my eyes.

NFS is much, much, much harder to get decent performance out of. No matter how much tuning I've done, I've rarely managed to get more than 20-40MB/s peaks, and for many small file accesses the performance is horrible. I'd thought that was what I could expect out of a gigabit LAN; after all, NFS saturated a 100Mbps network. I was quite surprised when my first iSCSI tests got 60-70MB/sec.

If your friend's setup is such that block devices would work instead of NFS (at least for some parts), I'd really suggest he try running an iSCSI target. I can't vouch for the Solaris version, but I know there is one and that it sounds similar to the Linux one. Linux ietd (iSCSI Enterprise Target) has been very stable and highly performing for me (more than a year running it now with no lost data :) ). It lacks some features like some forms of SCSI-3 reservation, but I can live with that.

who paid for this ad? (0)

Anonymous Coward | more than 6 years ago | (#21738130)

given the uptake of 10-Gb Ethernet in the data center
Huh? What planet is this happening on? You might find a few astroturfing vendors on this forum touting this stuff, but no one I know has any plans for any serious 10Gb Ethernet investment any time soon -- never mind using it as a storage transport. iSCSI will continue to fill a niche role, but it will never be the primary storage protocol in any serious data center. The technology to watch is SAS.

There's a clear bias in this article that's quite contrary to the direction the market is headed. Who's paying for this ad?

Re:who paid for this ad? (1)

Creepy (93888) | more than 6 years ago | (#21738902)

I see this as a benefit to smaller companies that need high speed storage, but maybe can't switch their entire storage network to fibre channel overnight due to cost. Many routers run Linux, so router manufacturers can probably add this functionality to existing Ethernet routers without hardware changes, making the cost of migration much smaller in the short term.

Re:who paid for this ad? (1)

Intron (870560) | more than 6 years ago | (#21739546)

Is somebody running SAS more than 8 meters?

Srsly, FC or iSCSI? (1)

smcdow (114828) | more than 6 years ago | (#21738384)

I'm not a datacenter kind of guy, so help me out. If you've got 10 G Ethernet, then why would you want to run FC rather than iSCSI?

Can someone elaborate?

Re:Srsly, FC or iSCSI? (1)

cerelib (903469) | more than 6 years ago | (#21738522)

Here is the simple version.

iSCSI is for implementing a "direct attached storage device" using an IP network (Internet/internet/intranet) as the backbone.

FCoE does not involve IP and is simply a lower cost, possibly better (time will tell), way of replacing optical fabric in data centers.

Re:Srsly, FC or iSCSI? (1)

Chris Snook (872473) | more than 6 years ago | (#21738600)

iSCSI adds a lot of protocol overhead, and tuning it to work well with a given application and network load becomes quite difficult above gigabit speed. When you're using a fairly reliable transport, such as FC or Ethernet with pause frames, you can dispense with that, and get near-optimal throughput and latency with very little tuning.

Re:Srsly, FC or iSCSI? (1)

Vellmont (569020) | more than 6 years ago | (#21738712)


I'm not a datacenter kind of guy, so help me out. If you've got 10 G Ethernet, then why would you want to run FC rather than iSCSI?

I'm not a datacenter guy either, but I am a programmer.

My guess is simply just avoiding the IP stack. I'd guess an IP stack would add some latency, definitely adds some overhead, and most implementations are unlikely to be well optimized for extremely high bandwidth links (10 Gbit/sec).

FCoE avoids the IP stack entirely. If done properly, it can avoid all of the above problems. It would limit you to a single LAN of course, so you won't be crossing any subnets. But then most routers don't really have the capability of delivering extremely high throughput anyway.
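That's the gist: FCoE traffic is identified purely by its EtherType and switched at layer 2, so there's no IP addressing, no routing and no TCP state machine in the data path. A minimal Linux-only sketch of what "sitting below the IP stack" looks like (requires root; 0x8906 is the FCoE EtherType, the interface name is an assumption, and a real initiator obviously does far more than print frame lengths):

import socket

FCOE_ETHERTYPE = 0x8906  # Ethernet frames with this EtherType carry FCoE

# AF_PACKET sockets sit below IP entirely: we receive raw layer-2 frames.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(FCOE_ETHERTYPE))
sock.bind(("eth0", 0))  # interface name is an assumption; adjust for your system

frame, _ = sock.recvfrom(65535)
src_mac = frame[6:12]
print(f"FCoE frame from {src_mac.hex(':')}, {len(frame)} bytes")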

Re:Srsly, FC or iSCSI? (1)

MT628496 (959515) | more than 6 years ago | (#21739472)

Okay now I'm confused. If you're avoiding the IP stack entirely, where does crossing subnets come into play?

Re:Srsly, FC or iSCSI? (1)

Slashcrap (869349) | more than 6 years ago | (#21741412)

Okay now I'm confused. If you're avoiding the IP stack entirely, where does crossing subnets come into play?
I guess they'll just have to cross that bridge when they come to it.

Re:Srsly, FC or iSCSI? (1)

MT628496 (959515) | more than 6 years ago | (#21742518)

Huh? Last time I checked, a bridge is not for crossing subnets. In fact, a bridge doesn't even operate at layer 3 at all. Or do you mean some other type of bridge?

Re:Srsly, FC or iSCSI? (2, Insightful)

Joe_NoOne (48818) | more than 6 years ago | (#21739970)

Some important limitations of iSCSI :

1) TCP/IP doesn't guarantee in-order delivery of packets (think of stuttering with streaming media, etc...)

2) Frame sizes are smaller and have more overhead than Fibre Channel packets.

3) Most NICs rely on the system to encapsulate & process packets -- a smart NIC [TCP Offload Engine card] costs almost as much as a Fibre Channel card.

Re:Srsly, FC or iSCSI? (0, Flamebait)

myz24 (256948) | more than 6 years ago | (#21741888)

You're kidding right?

1. TCP/IP is all about getting packets in the right order. That is its purpose. It will give the application the data in the correct order; that's why TCP was created. UDP does not guarantee order, TCP does. The reason you see stuttering in video is because the bitrate of the video is higher than the bitrate you may be getting over the wire. Also, if this is all happening on a local network, how and why would a packet not arrive in the right order and in nearly the same amount of time each time? If you were running an iSCSI setup you would (should) be running it on its own private network.

2. Jumbo frames...

3. Again, you're kidding right? I can find offloading cards for around $100, I can't find any good fibre channel cards for that price.

Re:Srsly, FC or iSCSI? (0)

Anonymous Coward | more than 6 years ago | (#21742448)

1. The packets are numbered so they can be put back together in order, but TCP only knows source, destination, and next hop addresses and has no control over how it is delivered. If the first packet goes through router A and the routing protocol directs the next packet to router B you could still have the second packet arrive first.

2. Not all network hardware can handle jumbo packets (>1500 bytes) and some break up jumbo packets into smaller ones. FCP does 2048 bytes by default.

3. According to Computer Weekly [computerweekly.com] cheapest is $650 if you want one comparable to a fibre card.

Re:Srsly, FC or iSCSI? (2, Informative)

onemorechip (816444) | more than 6 years ago | (#21742562)

It doesn't *deliver* packets in order (at least, not unless the underlying network does). It provides the capability to reconstruct the original order. GP was talking about *delivery* of packets.

Re:Srsly, FC or iSCSI? (0, Flamebait)

myz24 (256948) | more than 6 years ago | (#21743252)

I think slashdot moderators hate me, why am I modded a 1 and the comment I'm replying to a 2??

Anyway, I'm not sure I follow what you're saying. The TCP stack on the sending machine most certainly sends the packets in order. Whether or not IP delivers them in order is a different matter, and that's for TCP to fix. The thing is, TCP fixes the ordering issue BEFORE it sends the data up to the application, so as far as the application is concerned, everything IS IN ORDER. The application will block (or time out) until TCP is satisfied and sends it some data.

All of this ignores the idea that you probably aren't doing iSCSI through multiple routers and bridges. Nobody in their right mind is attempting to do iSCSI over the Internet and expecting fibre channel like performance. You've got some serious issues if you have packets that are out of order on your local network.

Re:Srsly, FC or iSCSI? (1)

onemorechip (816444) | more than 6 years ago | (#21745216)

I think slashdot moderators hate me, why am I modded a 1 and the comment I'm replying to a 2??

I think that's called a karma bonus modifier; my post wasn't moderated either way.

We're both right. It just depends on what is meant by "delivery". Delivery is out of order on the wire, and that's what I (and I suspect the original poster) meant. Of course the protocol stack guarantees in-order delivery to the higher layers. That means that the slowest packet is going to dictate the latency of an entire transfer, which I think is what he was getting at.

That said, I don't think ordering is a big deal for FCoE. For FC, in-order delivery (on the wire, that is) isn't necessarily required. Class-2 and Class-3 FC do not guarantee it. So at least these two classes should map to FCoE easily enough, and I'm sure a TCP-like reordering capability in the FCoE encapsulation could enable mapping of Classes 1, 4, and 5.

You've got some serious issues if you have packets that are out of order on your local network.

I don't think that would necessarily indicate *serious* issues. A local switched network for a mid-sized company or college campus could certainly be complex enough that a little temporary congestion could affect arrival order.

Disclaimer: I'm neither an FC nor Ethernet expert, but I've had just enough exposure to both to be dangerous.

Re:Srsly, FC or iSCSI? (1)

Znork (31774) | more than 6 years ago | (#21741178)

Well, basically, this is how it works:

Yer olde FC product salesman has a much better commission, as FC products have far, far higher margins than Ethernet products. Therefore the FC salesman buys you better lunches and invites you to seminars with more free booze, while displaying his company-produced graphs of how cutting-edge lab FC hardware vastly outperforms iSCSI served by a PC from last century.

In your booze-addled state you find this reasonable, and refrain from using Google or performing actual tests on comparable (priced or performance) hardware for yourself.

Ok, to be slightly more fair, iSCSI overhead may have been an issue at some point in time. Today I'd say it's negligible for most scenarios (much like hardware-vs-software RAID). CPU performance and optimized network stacks simply make it a non-issue (it's not like FC does magic fairy dust things; you've got to use those multicores, and your apps aren't multithreaded anyway, so...).

And if you have an FC salesman yakking about it, throw ATA over Ethernet in his face and suggest that you're going to buy your future storage from Coraid.

Personally I've gotten close to max theoretical throughput (95-98MB/sec) with iSCSI over gigabit on cheap-ass COTS hardware so I'm less than impressed with our very expensive corporate SAN.

Too late: I'm already using AoE (1)

Tracy Reed (3563) | more than 6 years ago | (#21741330)

AoE is awesome, it is cheap, it is simple. 8 page RFC. The only SAN protocol you can really understand completely in one sitting.

http://en.wikipedia.org/wiki/ATA_over_Ethernet [wikipedia.org]

And combine it with Xen or other virtualization technology and you have a really slick setup:

http://xenaoe.org/ [xenaoe.org]

Too late (1)

b1ufox (987621) | more than 6 years ago | (#21748332)

It was announced almost 20 days back on LKML, so the summary is incorrect in saying Intel has just announced it.

Looks like either the /. editors are lousy buffoons who do not care to click on the links to check the article summary, or it is someone from Intel trying to make sure that OpenFCoE gets some press.

Doh... bad, very bad journalism on the part of Slashdot. Please do not be OSNews; at least check your articles, for Christ's sake.
