
Fibre Channel Over Ethernet: From Fee To Free

Soulskill posted more than 3 years ago | from the investment-dollars-well-spent dept.

Networking 87

alphadogg writes "With demand for Fibre Channel over Ethernet (FCoE) more sluggish than vendors had hoped, 10 Gigabit Ethernet switch and adapter makers are making it available for free. FCoE is a standard driven largely by Cisco to converge customers' data center LAN and storage fabrics onto 10G Ethernet. Industry heavyweights Intel and Brocade are among those now giving away FCoE capabilities. Several factors are prompting vendors to slash FCoE prices or stop charging for it altogether: market indifference; technological immaturity; competing alternatives, such as virtualized Fibre Channel and Ethernet I/O; the recession; and vendors looking to drive switch volumes. 'When FCoE first came out there used to be a fairly large price premium,' says Alan Weckel, director of Dell'Oro Group. 'Cisco had to give it away for free to drive switch volumes. Users were not adopting as rapidly as thought or that Cisco had hoped for.'"


87 comments

CSCO forward guidance is low (2, Interesting)

Anonymous Coward | more than 3 years ago | (#35204880)

Their stock was killed last week ... down below $20. I know, I own some. They are just trying to generate business, kind of like those 'tards who are pushing 3D TV.

Re:CSCO forward guidance is low (0)

Anonymous Coward | more than 3 years ago | (#35205898)

After the very aggressive marketing of their blade servers, which are pretty much nothing special, just blade servers, I strongly dislike the company at this point. From the attempts to get them purchased by pitching senior management on the uniqueness of features that are inherent to any blade system, to the endless attempts to leverage the angle they have with the networking team to get a foot in the door.

If your technology has merit, it will be adopted. If you try to shove it down the throats of the people who would be running it, they will likely not look at it very closely.

To me, FCoE is just an attempt to move a reliable network (FC) onto an unreliable medium (Ethernet).

Re:CSCO forward guidance is low (1)

FST777 (913657) | more than 3 years ago | (#35212610)

How is FC more reliable than Ethernet? Both are (in this case) supposed to run on fibre, and reliability mostly depends on how the network layer is implemented. In this case, everything from layer three up is identical. In the case of FCoE, the main advantage besides convergence is speed.

I'd still go for iSCSI, by the way. Even cheaper, and routable if need be. For truly mission-critical stuff: use DAS.

Re:CSCO forward guidance is low (1)

hesaigo999ca (786966) | more than 3 years ago | (#35208694)

I think we are starting to see a trend... anything that people do not need to upgrade, they do not. Who needs an octo-core computer for surfing the web, or Windows 7 when XP does the job just fine, or a 10Gb router when a regular 10/100 will do? In the end, it boils down to the computer sector having been overpriced for many years, and now we are seeing the true value of things. Gone are the days when laptops used to sell for about $2000, when Best Buy has trouble moving the $250 netbooks which are twice as powerful as last-gen laptops. Gone are the days of having to pay $400 for an extra 200GB external drive when I can get a terabyte for $59 at Best Buy.
It shows that yes, prices are down, but had they been down to begin with we would not see such a flop, because people would not have needed a second mortgage to get a family computer/station going.

Re:CSCO forward guidance is low (1)

jon3k (691256) | more than 3 years ago | (#35212858)

Whoa, whoa, let's calm down. No one is trying to sell you a 10Gb home router. Literally no one, on earth; there isn't even a product that exists for home users that's 10Gb. I was with you up until that point. Shit, gigabit is still relegated to "performance"-oriented home products for the most part.

Does this.. (-1)

Anonymous Coward | more than 3 years ago | (#35204926)

Mean the billions in subsidies American tax-payers give to these telecoms are actually giving us something back for once? Or are they still raping our buttholes?

Re:Does this.. (0)

Anonymous Coward | more than 3 years ago | (#35205018)

There are no telecom subsidies. They simply are granted limited monopolies.

Too late (4, Insightful)

Anonymous Coward | more than 3 years ago | (#35204988)

As network fabric bandwidth continued to increase and latency to decrease, FCoE appeared to be a last-ditch effort to plug the steady trickle of customers from the highly expensive FC over to the much cheaper-to-deploy iSCSI. I'm sure the thinking was that by making it routable and keeping the same semantics as existing FC installs, it could accomplish that task. However, I'm also thinking that in most situations, where there's little to distinguish between iSCSI and FCoE other than the now almost commonplace on-NIC hardware iSCSI acceleration, it's a case of too late.

Re:Too late (4, Informative)

Junta (36770) | more than 3 years ago | (#35205110)

Actually, FC isn't routable. FC over *ethernet* has no ip and no provisions to span gateways. The *theory* is that FCoE has fewer layers allowing for higher performance, but it's rare for that difference to be realized in cheap ethernet fabrics (the whole point of FCo*E*) and even rarer to matter relative to storage device performance limitations. iSCSI is much easier to manage with fewer limitations and gets some nice things from being over TCP whether FCoE people will admit it or not.

When FCoE first came to market, vendors had dollar signs in their eyes, with thoughts of extorting customers with FC pricing strategies using 'just' ethernet. You saw people trying to do per-port FC enablement licensing BS and other stuff unheard of in ethernet land.

If FCoE is going to exist long term, it will be as a 'freebie' alternative to iSCSI or as a convenience to build large SANs without a lot of FC switches and HBAs but using existing FC enclosures.

Re:Too late (2)

afidel (530433) | more than 3 years ago | (#35205302)

FC is so routable; there have been FC directors with routing capabilities pretty much since there have been FC directors. That said, routed FC is kind of a hack, but since lossless iSCSI requires DCB (which also isn't really routable) to get the same reliability, you end up with the same limitations at much higher complexity and CPU cost.

Re:Too late (3, Interesting)

guruevi (827432) | more than 3 years ago | (#35205470)

I think he meant that FC is not routable the way the standard IP protocol is, unless you buy expensive and proprietary solutions (as you call it, a hack).

iSCSI has the advantage of being able to sustain packet loss, while FCoE can't stand it. FCoE is thus only possible over a small fabric (e.g. one stack of dedicated switches) while iSCSI can be mixed with other traffic without any issues. Of course, some people using iSCSI can't sustain any packet loss either (because of latency - e.g. streaming video and live video editing) and don't understand that Ethernet is not built for that kind of load - network engineers don't care about packet loss and hope the transport layer will fix it; storage engineers can't have any packet loss and their transport layer relies on the fabric not dropping frames.

That being said, FCoE is similar to ATAoE: never widespread because of its iSCSI cousin, and those that ended up using it might as well have just used a true FC (or InfiniBand) fabric.

Re:Too late (2)

butlerm (3112) | more than 3 years ago | (#35205978)

network engineers don't care about packet losses and hope the transport layer will fix them

That is a bit of a generalization don't you think? Excessive packet loss is death to any real network, which is why there has been a lot of effort to do various forms of explicit congestion notification instead.

On the Internet, that may be hard, but on a local network it is much easier. Some switches even have ECN marking these days, which is a far superior alternative to avoiding loss by piling on buffers and ending up with buffer bloat.
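
As a point of reference, here is a minimal sketch (Linux-specific and host-side only; switch-side ECN marking is configured on the switch itself) showing how to check whether a host's TCP stack will negotiate ECN. The sysctl path and value meanings are as documented for the Linux kernel:

<ecode>
# Minimal sketch, assuming a Linux host: read the kernel's TCP ECN setting.
#   0 = ECN disabled, 1 = request ECN on outgoing connections,
#   2 = use ECN only when the remote end requests it (common default).
ECN_SYSCTL = "/proc/sys/net/ipv4/tcp_ecn"

def tcp_ecn_mode() -> str:
    with open(ECN_SYSCTL) as f:
        value = f.read().strip()
    return {"0": "disabled", "1": "always request", "2": "on request only"}.get(value, value)

if __name__ == "__main__":
    print("TCP ECN mode:", tcp_ecn_mode())
</ecode>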

Re:Too late (1)

GooberToo (74388) | more than 3 years ago | (#35208552)

This is really the truth.

Most TCP stacks have fairly complex algorithms to avoid, manage, and recover from congestion. Furthermore, with technologies such as sliding windows, TCP allows for rather good scaling once you leave the single switch. This in turn means iSCSI can better scale and intermix with other traffic without catastrophic issue. Even better, loss of frames results in increased latency rather than loss of data.

So while iSCSI is in fact a fatter, heavier stack, that's also why there are dedicated boards to offload work from the host CPU. Even so, with plentiful and powerful CPUs available these days, even plain software (socket-based) stack implementations are frequently not an issue, as I/O is all too often still the bottleneck.

FCoE is an expensive solution looking for a problem that iSCSI had already solved, largely and cheaply.
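
A rough, illustrative sketch of the sliding-window point above: to keep a link busy, the TCP window has to cover the bandwidth-delay product, and the window-scaling option is what lets it grow past 64 KiB on fast or multi-hop paths. The link speeds and RTTs below are assumptions for illustration, not measurements:

<ecode>
# Back-of-the-envelope sketch: the TCP window must cover the bandwidth-delay
# product (BDP) to keep a link full. Speeds and RTTs are illustrative.

def bdp_bytes(link_bps: float, rtt_seconds: float) -> float:
    """Bytes in flight needed to fill the pipe (bandwidth-delay product)."""
    return link_bps * rtt_seconds / 8.0

if __name__ == "__main__":
    for label, bps, rtt in [
        ("1 GbE, 0.2 ms RTT (single switch)", 1e9, 0.0002),
        ("10 GbE, 0.5 ms RTT (a few hops)", 10e9, 0.0005),
    ]:
        kib = bdp_bytes(bps, rtt) / 1024
        print(f"{label}: window must be >= {kib:.0f} KiB to keep the link busy")
</ecode>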

Re:Too late (1)

jon3k (691256) | more than 3 years ago | (#35212908)

CPU cost? With iSCSI offload I'd always assumed (incorrectly?) that the difference in CPU load between FC and iSCSI was somewhere between non-existent and negligible. What about when using a dedicated iSCSI "HBA"?

Re:Too late (1)

afidel (530433) | more than 3 years ago | (#35213026)

iSCSI HBAs at 10Gb cost about the same as FCoE CNAs, so where's the advantage?

Re:Too late (1)

jon3k (691256) | more than 3 years ago | (#35213374)

Massively decreased complexity and huge reduction in switching infrastructure costs?

Re:Too late (1)

afidel (530433) | more than 3 years ago | (#35214236)

Decreased complexity? I find iSCSI to be WAY more complex, especially if I have to go into yet another interface to configure the HBA.

Re:Too late (1)

butlerm (3112) | more than 3 years ago | (#35206008)

FC over *ethernet* has no ip and no provisions to span gateways

That is why they invented TRILL [wikipedia.org]. Link state routing at layer 2.

Re:Too late (1)

Junta (36770) | more than 3 years ago | (#35210220)

I would have considered that just enhanced switching (it solves a lot of topology problems with large layer 2 ethernet networks, but still would have issues with broadcast in widespread deployment). I know a lot of the terminology says 'routing', but it just seems more logically close to sane enhancements to switching than routing. I recognize at some point the distinction between 'switching' and 'routing' gets blurry.

Re:Too late (1)

Anonymous Coward | more than 3 years ago | (#35205146)

FCoE is a L2 protocol, it's not routable. You may be thinking of FCIP. FCoE would've made more sense as an access layer technology early on if some of the supporting standards (TRILL, etc) had been ready at the same time FCoE devices started hitting the market.

Re:Too late (2)

CAIMLAS (41445) | more than 3 years ago | (#35205826)

FC is expensive, compared to what?

Unless you're using the (mostly shit) iSCSI software initiators, you'll be using iSCSI hardware initiators on 10gbit Ethernet - which is hardly cheap.

Do the math: to get > 1Gbit/s on your fabric-side links, you need to spend a copious amount if you're going 10gigE. Not only are the switches ungodly expensive compared to 1Gbit, but they're expensive compared to pretty much everything else - Infiniband or Fibrechannel. When it comes down to it, the biggest thing 10gig Ethernet has going for it is compatibility and people understanding Ethernet (it certainly isn't price or availability - it's easier to find high throughput FC and the like).

FC is great - for storage networks. Infiniband is likewise great, though it was way ahead of the curve (and I have no idea what's keeping it from competing now, aside from vendors not wanting to push it). They're comparable on price and similar on functionality. Ethernet keeps getting its push from 35 years of history.

Why anyone would want to encapsulate FC on top of Ethernet (thereby reducing any advantage FC might have), I have no idea. It sounds like one of those "let's fix our poor engineering with creative application of technology; it can bite the next poor fucker in the ass".

Re:Too late (2)

grub (11606) | more than 3 years ago | (#35206390)


Do the math: to get > 1Gbit/s on your fabric-side links, you need to spend a copious amount if you're going 10gigE. Not only are the switches ungodly expensive compared to 1Gbit

At work the iSCSI chassis we've been buying have 4x 1Gb ports which bind to make a 4 Gb pipe (the initiator has to be capable). Not 10GigE but much cheaper than a 10 Gb-ready switch.

'course, if you need 10 Gbit, you need 10 Gbit but this is a nice trade-off if super-performance isn't critical.

Re:Too late (1)

totally bogus dude (1040246) | more than 3 years ago | (#35207318)

Does that actually give you 4Gbit throughput to any host across a single data stream, or is it like most link aggregation schemes, in that it just spreads concurrent sessions across multiple physical links, but each session is limited by the bandwidth of a single physical link?
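
For readers wondering about the limitation described above, a hedged sketch of the usual answer: most aggregation schemes hash header fields to pick one physical link per flow, so a single session is capped at one link's speed. The hash below is purely illustrative, not any particular switch's or bonding driver's policy:

<ecode>
# Illustrative sketch only: link aggregation typically picks one physical
# link per *flow* by hashing header fields, so one TCP session never exceeds
# a single link's speed; only many concurrent sessions spread across links.
from zlib import crc32

NUM_LINKS = 4  # e.g. four bonded 1 GbE ports

def choose_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return crc32(key) % NUM_LINKS

if __name__ == "__main__":
    # One long-lived iSCSI session (port 3260) always lands on the same link...
    print(choose_link("10.0.0.5", "10.0.0.50", 51234, 3260))
    # ...so aggregate throughput only grows with additional sessions/flows.
    print({choose_link("10.0.0.5", "10.0.0.50", p, 3260) for p in range(51234, 51242)})
</ecode>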

Re:Too late (1)

grub (11606) | more than 3 years ago | (#35208582)

That's the aggregate from the iSCSI chassis to the servers. They get >=3Gb/sec easy. Jumbo frames help there. From the servers to the network it's the standard spread-the-load scheme across 4 gig lines. All clients are 100Mb or 1Gb, nothing faster, so it's quite zippy for them. Not 10Gb speeds but priced well.
(standing on bus, think I got it all... ;))

Re:Too late (1)

swb (14022) | more than 3 years ago | (#35211900)

Sounds like the Equallogic model. I've always wondered what kind of actual bandwidth you get and how it gets spread out; I can never get a straight answer from our Equallogic rep, and the management software doesn't really give you any idea beyond simple byte counts per physical interface.

In most of the setups I've been around (recent model PS4000 and PS6000s) using the ESX 4.1 software iSCSI initiator, we see real disk throughput top out around 60 MByte/sec in any given VM with basically no other ongoing disk I/O.

When we've used QLogic hardware initiators throughput seems to be about 10-15% better and with lower host CPU utilization, but the performance is close enough and host CPU is largely irrelevant so most people aren't willing to pay.

I like the Equallogics, they seem reliable and are easy to work with, but I'm a little skeptical on the "magic" behind their NIC setup and the ability to deliver more than 1 Gbit/sec to any host.

Re:Too late (1)

FST777 (913657) | more than 3 years ago | (#35212856)

The main idea is convergence: when you already have a 10GbE network, why build a second infrastructure for FC? Usually it is cheaper (and easier to get the budget) to just expand your current setup to facilitate either FCoE or iSCSI.

Re:Too late (1)

jon3k (691256) | more than 3 years ago | (#35213012)

Look into Arista Networks. Founded by Andy B, who co-founded Sun and founded the Granite 1Gb switch company that he sold to Cisco before running their gig-E switch line. You're looking at copper 10Gb ports for about $500 a piece, optical for about $500/port, and about $150, if I remember right, for 10Gb SR transceivers. Layer 2-4, runs a Linux kernel (Fedora specifically, if I recall) along with a GNU userland, and will run every port at wirespeed. Plus you can get 48 10Gb ports in 1U. No one else in the industry can touch that density, especially their 11U switch with 384 10Gb wirespeed ports. Pretty healthy feature support as well, even including things like MLAG, their TRILL implementation.

Re:Too late (1)

pyite (140350) | more than 3 years ago | (#35213376)

Plus you can get 48 10Gb ports in 1U. No one else in the industry can touch that density

Not entirely true. Arista has little secret sauce. They're using merchant silicon. Stay tuned for the plethora of switches about to be released based on the Broadcom Trident ASIC [broadcom.com] (a 64-port 10 GigE switch-on-chip, manifesting itself as either 64x 10 GigE or 48x 10 GigE + 4x 40 GigE). Some (like Force10's [force10networks.com]) are already announced. The differentiator in 10 GigE ToR switching is quickly becoming software.

Re:Too late (1)

jon3k (691256) | more than 3 years ago | (#35213430)

No secret, just ask them, they'll tell you the exact vendors of their ASICs (hint: it hasn't always been Broadcom). Their whole business model is built on being able to use the best silicon available and layering the software on top of that.

Re:Too late (1)

jon3k (691256) | more than 3 years ago | (#35213438)

Oh, and it is entirely true - because Arista ships products _today_. So as of right now, what I said is absolutely correct. No one can touch that density. What happens tomorrow we'll see, but as of right now, Arista is king of the hill as far as sheer 10GbE density, bar none.

Re:Too late (1)

Dadoo (899435) | more than 3 years ago | (#35206636)

FCoE appeared to be a last ditch effort to plug the steady trickle of customers from the highly expensive FC over to the much cheaper to deploy iSCSI

Seriously? We've experimented with iSCSI here and, in our experience, the performance leaves a lot to be desired. I have to assume that's because of the IP overhead, and I'd expect something like HyperSCSI or AoE to perform significantly better. (Why people would rather use iSCSI than something like those is beyond me.)

We're heading in the SAS direction, instead. It's not as flexible as FC or iSCSI, but it's much cheaper than FC and faster than iSCSI.

Re:Too late (1)

jon3k (691256) | more than 3 years ago | (#35213052)

What made you think it was the IP overhead? What was the configuration like? We run iSCSI in a lab and we get >920Mb/s over copper gig-e.

What do you mean you're using SAS? Are you using DAS and then something like NFS? That doesn't make a lot of sense, SAS isn't really an alternative to a SAN.

Re:Too late (0)

Anonymous Coward | more than 3 years ago | (#35207468)

OP of parent here. Yes - iSCSI is the routable one, for all those wondering what I *meant*. I didn't - it's a glaring error. I've no idea why I said that as I'm well aware of it. All I can think of is sorry, it was late! (UK) Too late, apparently :)

IB Baby (0)

Anonymous Coward | more than 3 years ago | (#35211948)

Who needs 10Gb Ethernet and FCoE when we can get 40Gb InfiniBand and IPoIB for less? FC is nothing more than a cash cow for certain vendors; 10Gb Ethernet is sold as the next big thing but is already slow and stale.

You're still using 10Gb Ethernet in your datacenter? Upgrade.

And still, no one buys it. (4, Interesting)

7213 (122294) | more than 3 years ago | (#35205006)

FCoE...

A solution in search of a problem. 10GbE is really very nice. FC (FCoE included) has a history of poor vendor interop.

So by using FCoE you get the worst of both worlds: 10GbE with vendor lock-in at the storage level....

So... NFS anyone (or I guess iSCSI)?

Only time I've ever used FCoE was as a WAN tunnel link for async replication... not seeing any other value for this anytime soon.

Re:And still, no one buys it. (0)

Anonymous Coward | more than 3 years ago | (#35205100)

I'm guessing you're referring to FCIP. FCoE is a local datacenter technology, it would be an unusual choice for WAN replication given the existence of FCIP.

nobody buys 10GbE either... (2)

SuperBanana (662181) | more than 3 years ago | (#35206180)

10GbE ethernet is really very nice.

Too bad you can't really buy it, and it's insanely expensive, with per-port costs in the hundreds of dollars range. Lots of choices for adapters (which are also insanely expensive)....but I went looking for a 10GbE switch for our small-ish server room for some of our higher bandwidth systems that easily saturate gigabit ethernet...and came up very short in terms of selection. The vast majority of the market consists of switches with 1-2 10GbE uplink ports. That's slightly useful for some situations (for, say, a backup server with a lot of bandwidth, or linking to a main backbone), but not so useful if you want to link up a whole bunch of systems.

Re:nobody buys 10GbE either... (1)

afidel (530433) | more than 3 years ago | (#35206680)

Huh? I've had 10GbE in the core as long as I've been at my current employer (coming up on 5 years) and I'm seriously looking at it for all ports when my older 6509 core goes EOL. Just because cheap closet switches don't have 10Gb yet doesn't mean people aren't using it. Oh and it's coming down in price all the time, Linksys XSM7224S is a 24 port 10GbE switch with 4 uplinks for $8,700.

Re:nobody buys 10GbE either... (1)

LordLimecat (1103839) | more than 3 years ago | (#35206746)

That's $350 per port; I think he was right about "insanely expensive".

Re:nobody buys 10GbE either... (1)

afidel (530433) | more than 3 years ago | (#35206844)

Hehehe, FC is ~$1,000 per port for both the switch and the HBA and QLogic alone shipped 1M ports last year.

Re:nobody buys 10GbE either... (1)

funwithBSD (245349) | more than 3 years ago | (#35206912)

When we are talking a t5240 or an M5000, so what? It's a fraction of the total cost, especially for an Oracle RAC cluster.

Even @$1000 per port, gimme 2, I will use the bandwidth. Hell, gimme 4 on the M5000, eventually they will use it.

The problem for Cisco is that the companies that need that bandwidth are too few to drive their revenue model.

Re:nobody buys 10GbE either... (1)

inKubus (199753) | more than 3 years ago | (#35207296)

Yeah, but it's equivalent to 240 ports of GbE, or to an $870 24-port 1Gb switch. That's about the going rate for a decent L3 Gb switch, even HP's. And that's a lot of cables saved, a lot of server NICs, etc. 10Gb is really the only way if you have dense stuff like blades or SAN nodes that can push a lot of bits. Obviously. It's a little more expensive, but only a little. $8k is not really that much when you look at what 50 copies of Windows costs or *shudder* 50 new iMacs. If you're a medium-sized shop at a normal, non-communications company, that's not a lot for a core switch.

For the record, optical will eventually be the future, but the damned connectors suck ass at the moment and optical kits are AMAZINGLY PROFITABLE FOR NETWORKING COMPANIES, so I like my copper until they decide to sensibly drop the optical prices. Probably it'll take a mass-market moment to bring optical out once and for all to the masses, just like in the 90's when they started putting copper NICs on motherboards and the prices plummeted.

Re:nobody buys 10GbE either... (1)

pacman on prozac (448607) | more than 3 years ago | (#35207542)

That $350 doesn't include transceivers which you need at both ends.

I would think about $4k/port is more realistic for an average install (which won't be using Linksys).

Re:nobody buys 10GbE either... (1)

jon3k (691256) | more than 3 years ago | (#35213212)

For 10GbE?

$367 [provantage.com] for a 10GbE port, $454 [provantage.com] optics/port, $691 Intel 10GbE NIC [cdwg.com] (dual port too)

Total: $1,512/port

So unless you can build out 1Gb for less than $150/port (and have enough space for 10x the ports!) then 10GbE starts looking pretty attractive. But it depends on the size of the install; if we start considering a core/distribution/access architecture and including all the upstream ports, etc., it could get incredibly expensive. You could also include cost to install, configure, manage, etc. But if we're just talking basic per-port pricing, under $2k is very easy.
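
A quick sketch re-running that arithmetic (the prices are the poster's, circa 2011, and purely illustrative; note the quoted Intel NIC is dual-port, so the second figure splits its cost across both ports):

<ecode>
# Re-running the poster's arithmetic; prices are theirs, treat as illustrative.
switch_port = 367   # 10GbE switch port
optics      = 454   # optics per port
nic         = 691   # Intel dual-port 10GbE NIC

print("Per port, NIC counted whole:", switch_port + optics + nic)        # 1512
print("Per port, NIC split 2 ways :", switch_port + optics + nic // 2)   # 1166
</ecode>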

Re:nobody buys 10GbE either... (1)

afidel (530433) | more than 3 years ago | (#35213308)

SFP+ direct attach for top of rack; cables cost from $30-120 depending on the length you need.

Re:nobody buys 10GbE either... (1)

jon3k (691256) | more than 3 years ago | (#35213094)

$350 per port is "insanely expensive" to you? Well let's see, it's 10x the bandwidth of gigabit Ethernet at 10x the density. So you pay LESS THAN $35 per port for gigabit?

Re:nobody buys 10GbE either... (1)

jon3k (691256) | more than 3 years ago | (#35213078)

You think 10Gb ports at "hundreds of dollars" is expensive? I pay that much for 1Gb ports from Cisco.

Re:And still, no one buys it. (1)

BitZtream (692029) | more than 3 years ago | (#35206798)

I really can't understand why anyone would want to do this FCoE thing.

It's a crappy protocol to start with; the only useful part of it was the fact that fiber was faster than copper at the time for any reasonable price, and now that's no longer true. Fast-enough copper is WAY cheaper than fiber and with none of the trouble.

FCoE to me makes about as much sense as PPPoE; when you start talking like that you've clearly very little actual understanding of what you are doing. You can come up with some reasons as to why you might do it, but it just shows you don't actually know the right way to do it and haven't bothered to look. It reeks of inexperience and incompetence.

Re:And still, no one buys it. (1)

lanner (107308) | more than 3 years ago | (#35207406)

It's iSCSI that is a great cheap SAN protocol. FC still has some uses that iSCSI can't beat, but for most stuff, iSCSI is awesome.

I just don't see a lot of reason to bridge the two, unless you are transitioning in a very very large environment.

Look at Cisco's switches that try to bring FC and 10GbE together -- they suck! No layer 3 support, and the latency is horrible compared with competitors.

People will complain about the overhead that the network stack adds to it, or latency or junk, but really, it's not that big of a deal. Get a network card with offloading.

iSCSI is good enough for 80% of orgs out there. That's what's wrong with FC.

Re:And still, no one buys it. (1)

jon3k (691256) | more than 3 years ago | (#35213238)

Cisco rep emailed me today to tell me that the Layer 3 daughter cards for the 5548P will be available starting in March. The list ("not to exceed") price is ~$17k. Figure the usual 30%+ off that.

Re:And still, no one buys it. (0)

Anonymous Coward | more than 3 years ago | (#35207566)

NFS seconded, rest of options are retarded

Re:And still, no one buys it. (1)

Vlado (817879) | more than 3 years ago | (#35219296)

Which solution are you referring to when you say FCoE over a WAN?

I'm not aware of any products like that on the market.

You may be referring to FC over IP, which has actually been in quite a lot of use for years now in situations where native FC either isn't technically possible or would be way too expensive.

FCoE has practically nothing to do with FC over IP.

Duh (2)

afidel (530433) | more than 3 years ago | (#35205036)

When Brocade introduced their FCoE switch, I could pick up two 40-port 8Gbps FC switches and a pair of 48-port GigE switches with 10Gb uplinks for what they were charging for 24 ports of FCoE with 4x FC connections. So instead of going with the switch that probably cost them no more to manufacture, I bought a pair of 5100s and a pair of stacking HP GbE switches, and so had complete redundancy for about the same cost as one FCoE switch.

Re:Duh (0)

Anonymous Coward | more than 3 years ago | (#35205102)

10gig FCoE and 10gig iSCSI are still both slower than 8Gb FC. All the claims about iSCSI and FCoE being cheaper are largely hogwash. The prices on FC keep falling just as fast as Ethernet's, and it's got less overhead and fewer limitations.

Re:Duh (3, Insightful)

afidel (530433) | more than 3 years ago | (#35205238)

Actually FC has an inherent disadvantage: the FC vendors ship about 5% of the ports per year that Ethernet does, so all their R&D has to be spread over 1/20th the ports. The vendors all realized this about 5 years ago, and so they started on FCoE to spread their R&D budget over the much larger Ethernet ecosystem. I believe 16Gb FC will be the last standalone standard and that after that they will piggyback on Ethernet for 40Gbps and 100Gbps speeds. Speaking to industry insiders at Storage Networking World, it's obvious that the days are numbered for standalone FC.

As to your claim that 10Gb FCoE is slower than 8Gb FC: for throughput that's rubbish, as the framing overhead for FCoE is nowhere near 20% and they are both lossless protocols; for latency it may or may not be true depending on implementation.

Sorry, just not true... (1)

Desmoden (221564) | more than 3 years ago | (#35206656)

I don't know what the latest iSCSI over 10Gb ethernet scores are but...

8Gb FCP/FC runs at ~8Gb/s
10Gb FCoE runs at ~9.7Gb/s

Re:Sorry, just not true... (0)

Anonymous Coward | more than 3 years ago | (#35206830)

Actually, 8Gb FCP/FC runs at ~6.8Gb/s after encoding overhead (about 870MB/s), while 10Gb FCoE runs at ~9.8Gb/s (about 1254MB/s).
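
A rough sketch of where figures like those come from: 8G Fibre Channel signals at 8.5 GBaud with 8b/10b encoding, while 10GbE signals at 10.3125 GBaud with 64b/66b encoding; FCoE then pays a small additional per-frame framing cost, approximated here rather than taken from any measurement:

<ecode>
# Where numbers like these come from (published line rates and encodings;
# the FCoE framing comment below is an approximation, not a measurement).
def payload_rate_gbps(line_rate_gbaud: float, enc_payload: int, enc_total: int) -> float:
    return line_rate_gbaud * enc_payload / enc_total

fc8   = payload_rate_gbps(8.5, 8, 10)       # 8GFC: 8.5 GBaud, 8b/10b  -> 6.8 Gb/s
ten_g = payload_rate_gbps(10.3125, 64, 66)  # 10GbE: 10.3125 GBaud, 64b/66b -> 10.0 Gb/s

print(f"8GFC data rate : {fc8:.1f} Gb/s (~{fc8 * 1000 / 8:.0f} MB/s)")
# FCoE then pays Ethernet + FCoE encapsulation per frame; with ~2 KB FC frames
# that skims only a few percent off, hence the ~9.7-9.8 Gb/s figures quoted.
print(f"10GbE data rate: {ten_g:.1f} Gb/s before FCoE framing overhead")
</ecode>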

Re:Sorry, just not true... (1)

Desmoden (221564) | more than 3 years ago | (#35207350)

You've gotten 9.8Gb/s with FCoE =) hehe, rock on.

Sorry, my bad on the 8G, you are correct. I usually don't get as high as 870MB/s, more like 810-820, but my Gb/s conversion was WAY off. Thanks for the correction!!!

Re:Sorry, just not true... (1)

Khyber (864651) | more than 3 years ago | (#35207472)

Meh, we're already breaking those speeds with local data buses, what's the point of this when we'll be getting it wirelessly soon enough?

Oh, vendor lock-in. Duh.

Ignore me.

Re:Duh (2)

drsmithy (35869) | more than 3 years ago | (#35205416)

So instead of going with the switch that probably cost them no more to manufacture I bought a pair of 5100's and bought a pair of stacking HP GbE switches and so had complete redundancy for about the same cost as one FCoE switch.

You also had 1/10th the bandwidth and twice as much cabling to each server, higher power draw, more rack space required and more devices to manage.

Re:Duh (1)

afidel (530433) | more than 3 years ago | (#35205456)

I do have more cabling but power draw's actually lower due to early FCoE adapters being complete power hogs. Rack space isn't a consideration for either my primary datacenter or my colo space since power density means I can't fill racks with servers anyways. Oh and at least for my applications storage bandwidth is much more important than IP bandwidth so I actually have equivalent bandwidth.

Re:Duh (1)

jon3k (691256) | more than 3 years ago | (#35213254)

I'm glad I'm not the only one having the problem. Seems like I can barely fill a third of a rack before I'm bumping up against the amount of power they will deliver to a single rack. It's getting ridiculous.

Too late for us IMHO (1)

grub (11606) | more than 3 years ago | (#35205112)


At work a few years ago we were looking at FCoE but it was huge coin. We opted for iSCSI and haven't looked back. Our gear doesn't have to be super-zippy so we started with 16 drive iSCSI->SATA2 chassis in RAID 6 w/ hotspare. We can bind 4 GigE channels for decent throughput. Not 10 Gb speed but great for our purposes. YMMV.

Forgive my ignorance... (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35205130)

How exactly does one charge separately for fiber channel over ethernet when selling ethernet switches?

Does the switch have firmware that actually dedicates processor time to blocking FCoE traffic unless you pay the man (and is a license fee for "UDP over ethernet" or "HTTP over ethernet" the next brainwave from Cisco?), or is the "over ethernet" a marketing exaggeration, and there are actually certain non-ethernet features that the switch must support in order to handle FC "over ethernet"?

Re:Forgive my ignorance... (2)

Junta (36770) | more than 3 years ago | (#35205190)

One is that most 'FCoE' equipment had both FC and ethernet ports, so there was a hardware difference.

Another is that FCoE generally means the ethernet switch has some FC-layer management features (e.g. looking at WWNs, zoning, etc). I think this is the *key* paid-for addition on most FCoE equipment without FC ports: basically a way of dealing with the switches exactly the way storage admins are accustomed to dealing with SAN switches.

Finally, there are some layer 2 features considered essentially mandatory for 'decent' FCoE (more advanced pause frames, for one).

I still haven't found anyone getting excited about FCoE, even if it doesn't cost more. The storage admins I've met by and large hate the way they have been required to deal with their equipment.

Re:Forgive my ignorance... (3, Informative)

quickOnTheUptake (1450889) | more than 3 years ago | (#35205192)

Apparently the latter. "Since classical Ethernet has no flow control, unlike Fibre Channel, FCoE requires enhancements to the Ethernet standard to support a flow control mechanism (this prevents frame loss). . . . Fibre Channel required three primary extensions to deliver the capabilities of Fibre Channel over Ethernet networks: -Encapsulation of native Fibre Channel frames into Ethernet Frames. -Extensions to the Ethernet protocol itself to enable an Ethernet fabric in which frames are not routinely lost during periods of congestion. -Mapping between Fibre Channel N_port IDs (aka FCIDs) and Ethernet MAC addresses." --Wikipedia

Re:Forgive my ignorance... (1)

quickOnTheUptake (1450889) | more than 3 years ago | (#35205200)

Sorry, let's try that again:
Apparently the latter.

Since classical Ethernet has no flow control, unlike Fibre Channel, FCoE requires enhancements to the Ethernet standard to support a flow control mechanism (this prevents frame loss). . . .
Fibre Channel required three primary extensions to deliver the capabilities of Fibre Channel over Ethernet networks:

  • Encapsulation of native Fibre Channel frames into Ethernet Frames.
  • Extensions to the Ethernet protocol itself to enable an Ethernet fabric in which frames are not routinely lost during periods of congestion.
  • Mapping between Fibre Channel N_port IDs (aka FCIDs) and Ethernet MAC addresses.

--Wikipedia
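
A very rough sketch of the first extension in that list (the encapsulation step). The FCoE EtherType 0x8906 is the standard one; the header/trailer layout and field widths here are simplified approximations and the SOF/EOF code points are placeholders, so treat it as a picture rather than an FC-BB-5 reference:

<ecode>
# Simplified picture of FCoE encapsulation; EtherType 0x8906 is standard,
# everything else below (field widths, SOF/EOF bytes) is approximate.
import struct

FCOE_ETHERTYPE = 0x8906

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_hdr  = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + b"\x00"   # version/reserved bits + SOF delimiter (placeholder)
    trailer  = b"\x00" + bytes(3)    # EOF delimiter (placeholder) + reserved
    return eth_hdr + fcoe_hdr + fc_frame + trailer  # NIC/switch appends the Ethernet FCS

if __name__ == "__main__":
    frame = fcoe_frame(b"\xaa" * 6, b"\xbb" * 6, fc_frame=bytes(36))  # dummy addresses/frame
    print(len(frame), "bytes before the FCS")
</ecode>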

It's all about what the switch is capable of (5, Informative)

sirwired (27582) | more than 3 years ago | (#35205204)

Your plain-vanilla 10GbE switch does not have the flow-control bits required to make Ethernet lossless; without essentially lossless traffic, SCSI/FC perf goes in the dumpster. (0.03% packet loss == approx. 50% performance cut.)

In addition, there must be at least one switch in the VLAN that can provide FC services, such as zoning, address assignment, name services, etc.

Re:It's all about what the switch is capable of (0)

Anonymous Coward | more than 3 years ago | (#35205762)

They should rename this "FCoME" for Fiber Channel over Modified Ethernet... After knowing this, I can see why iSCSI is gaining more popularity.

Re:It's all about what the switch is capable of (0)

Anonymous Coward | more than 3 years ago | (#35206222)

Plain-vanilla 10GbE switches do provide PAUSE-based hardware flow-control, making Ethernet lossless for one hop. For FCoE, switch vendors have defined per-priority flow-control to be able to mix lossless and lossy traffic, and jack up the price of those switches 5-10x. Even then, lossless is not guaranteed between switches.

It's all about coming up with ways to sell more expensive switches, it is not about solving technical problems.

Re:It's all about what the switch is capable of (1)

jon3k (691256) | more than 3 years ago | (#35213292)

(0.03% packet loss == approx. 50% performance cut.)

Do you have a link to back that up? That would be very interesting if true.

Re:Forgive my ignorance... (1)

jon3k (691256) | more than 3 years ago | (#35213272)

Yes, it's license-based, with Cisco at least. You have to buy the storage license to be able to configure ports as storage ports.

Little early... (1)

NetJunkie (56134) | more than 3 years ago | (#35205662)

It's a little early to call the death of FCoE. We still can't really do a true FCoE environment. The firmware to enable multi-hop FCoE on switches is just now starting to ship. Up to now all we've done is single-hop where the storage is directly connected to the same switch as the end devices...which is not scalable. I have a lot of customers doing 10Gb NFS (I do a lot of VMware) but for true high performance the choice is still Fibre Channel and those same customers are the ones looking heavily at FCoE.

I will admit though, performance is impressive.... (3, Interesting)

Desmoden (221564) | more than 3 years ago | (#35206286)

With no tuning (other than Jumbo frames for FCoE) I was able to get 9.7Gb/s using FCoE over 10Gb ethernet.

While 16Gb FCP/FC is around the corner, you will be able to run FCoE over 40Gb and 100Gb ethernet in 2-3 yrs. (at MUCH $$)

Keep in mind however, iSCSI has been around for over 10yrs now. These things take time to grow, mature, attach.

So lets wait a few more years before declaring anything dead or alive =)

And keep in mind, FCoE is not meant to replace FCP/FC; it's meant to fix what is keeping iSCSI from doing better.
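
A rough illustration of why jumbo frames matter for figures like that 9.7Gb/s: every frame pays a fixed Ethernet overhead (preamble/SFD, header, FCS, inter-frame gap) plus FCoE encapsulation bytes (approximated below), so larger payloads lose a smaller fraction of the wire rate:

<ecode>
# Rough illustration: per-frame Ethernet tax is fixed (preamble+SFD 8 B,
# header 14 B, FCS 4 B, inter-frame gap 12 B = 38 B); FCoE adds a bit more
# (approximated here), so bigger payloads mean a smaller percentage lost.

ETH_OVERHEAD  = 8 + 14 + 4 + 12   # per-frame bytes that carry no payload
FCOE_OVERHEAD = 18                # FCoE header/trailer, approximate

def efficiency(payload: int) -> float:
    return payload / (payload + ETH_OVERHEAD + FCOE_OVERHEAD)

for payload in (1500, 2112, 9000):   # standard MTU, ~one max FC data field, jumbo MTU
    print(f"{payload:>5} B payload -> {10.0 * efficiency(payload):.2f} Gb/s of 10 GbE")
</ecode>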

Re:I will admit though, performance is impressive. (0)

Anonymous Coward | more than 3 years ago | (#35217322)

Your reasonable attitude ain't gonna play around here. Didn't you read all the haterade comments from people who can't imagine a need for what FCoE offers b/c they happen to find that iSCSI works just fine for them?

Here kid, this is cool stuff... (0)

Anonymous Coward | more than 3 years ago | (#35206686)

The first one's free.

iSCSI? (0)

Anonymous Coward | more than 3 years ago | (#35206980)

Every time I see any posting or article about FCOE I always see someone post the iSCSI argument.

NEWS FLASH iSCSI sucks! It has all the terrible characteristics of TCP/IP and seems to have the reliability of dial up during a lightning storm.

To this day Fibrechannel is still the way to go in a mission critical enterprise environment. Losing your storage infrastructure due to a spanning tree packet storm is just not acceptable.

too $$$ (1)

jsepeta (412566) | more than 3 years ago | (#35207150)

Not only is FCoE pricey, even gigabit Ethernet products are too expensive. They've been out for years - the prices should have dropped by now.

Cisco's attempt to forklift older Catalysts (0)

Anonymous Coward | more than 3 years ago | (#35207376)

FCoE is nothing more than Cisco's attempt to churn the installed base of Catalyst switches.

As others have pointed out, FC requires a lossless fabric, so FCoE requires lossless Ethernet (at least for the FCoE traffic). The installed base doesn't support that -- you have to upgrade to something like a Nexus.

FC vs. iSCSI (0)

Anonymous Coward | more than 3 years ago | (#35208470)

Most of the FCoE vendors are pushing the standard for different reasons, i.e. not necessarily as a basis for storage area networking but rather as a means of reducing cable-management complexity in the datacenter. Storage virtualization is the current big push, with varying types of storage on the backend and storage virtualization middleware that abstracts that storage into something neutral from a management perspective.

In reality FCoE can't hold a candle to iSCSI in WAN/MAN environments as it's not routable traffic per se, and requires lossless Ethernet capability on the switch side.

FCoE is for highly dense local datacenter environments that want to leverage existing fibre channel SAN-based storage topologies, whereas iSCSI is typically used in lower cost WAN-type environments that need long distance access to storage resources. Two different worlds really.

GBICs Still Too Expensive (2)

TheGreatDonkey (779189) | more than 3 years ago | (#35208476)

Beyond this, the physical costs versus 8gig are just not justified yet. With the overhead of FCoE, you can roughly say 10gig FCoE is the same speed as more traditional 8gig FC. If you believe that to be roughly true, then price is the next factor to consider, as what are you really getting?

8gig Fibre Channel GBIC for a SAN fabric averages around $150-$200.
10gig network (CNA) GBIC for a more traditional network averages around $1100.

I am building out a new virtual farm now, and much as we tried to go the converged route with 10gig network, the price point simply isn't there yet (technology is still maturing this year as well). You can work around this with copper for very short runs, but the expense comes in per-rack network gear.

This should start to settle in the fall as the standards fall together better.

Re:GBICs Still Too Expensive (1)

franciscohs (1003004) | more than 3 years ago | (#35209530)

Or you could use Twinax to connect your servers if you're using a top of rack (or similar) topology.

Re:GBICs Still Too Expensive (1)

kamata38 (1988546) | more than 3 years ago | (#35211712)

Having worked in IT only for SMBs, I didn't know much about FCoE, so in that sense TFA and these comments are quite informative for me. That said, I don't anticipate deploying FCoE (or any type of FC) anytime soon since we don't have a need for the FC-specific benefits. I am in the process of building a new server room for an office relocation, and have already designed the LAN access/distribution layers using Catalysts with 10GbE trunks to the core. I have to agree that GBICs are way too expensive--especially when implementing 10GbE functionality on the 2960 series switches. I don't have the numbers, but IIRC the total cost for the required modules and SFPs was actually close to that of the base switches themselves! Then, Cisco charges an arm and a leg just to stack their switches... Granted, you pay for what you get, and Cisco is [arguably] the premium for network hardware... but I can't help feeling a lot of the high cost is just not warranted. Personally, instead of FCoE, I wish Cisco would drive 10GbE (and other Ethernet-enhancing features, proprietary or not) costs down and make it more affordable for smaller shops and enthusiasts.

Re:GBICs Still Too Expensive (1)

jon3k (691256) | more than 3 years ago | (#35213356)

You're not considering FC overhead. Why are you comparing a fiber channel GBIC to a 10Gb CNA? That's like comparing the price of a car to the price of a muffler. You need FC HBAs just like you need 10Gb CNAs. And they cost as much, or usually more, than a CNA.

How about FCoTR (0)

Anonymous Coward | more than 3 years ago | (#35211482)

With all this talk in the industry around converged networking, we are seeing Fibre Channel storage landing on the Ethernet-based networks. How about taking it a step further and supporting Fibre Channel over Token Ring!?
