
10GbE: What the Heck Took So Long?

Soulskill posted about 10 months ago | from the i-blame-the-schools dept.

Networking 295

storagedude writes "10 Gigabit Ethernet may finally be catching on, some six years later than many predicted. So why did it take so long? Henry Newman offers a few reasons: 10GbE and PCIe 2 were a very promising combination when they appeared in 2007, but the Great Recession hit soon after and IT departments were dumping hardware rather than buying more. The final missing piece is finally arriving: 10GbE support on motherboards. 'What 10 GbE needs to become a commodity is exactly what 1 GbE got and what Fibre Channel failed to get: support on every motherboard,' writes Newman. 'The current landscape looks promising. 10 GbE is starting to appear on motherboards from every major server vendor, and I suspect that in just a few years, we'll start to see it on home PC boards, with the price dropping from the double digits to single digits, and then even down to cents.'"

295 comments

Meanwhile (2, Informative)

Anonymous Coward | about 10 months ago | (#43941053)

Everyone's still running off of ancient Cat3 wiring laid down when telephones were still analog.

Re:Meanwhile (1)

lgw (121541) | about 10 months ago | (#43941149)

Sounds like my home network may jump from 1Gb to 10Gb sooner than I expected, but it's still behind 3Mb DSL as my only non-Comcast option. Yay?

Re:Meanwhile (0)

Anonymous Coward | about 10 months ago | (#43941245)

I jumped to 1Gb when I had the chance, but here's my thought: it's not NEEDED for home environments. I can stream just about anything to or from computer/xbox/roku/etc with no trouble. Bumping my home network still wouldn't mean crap when my outbound pipe is constrained by the standard shitty 3rd-world cable modem speeds the US has, or worse yet, crappy AT&T DSL.

Re:Meanwhile (-1)

Anonymous Coward | about 10 months ago | (#43941273)

It's also not needed for most work environments.

Re:Meanwhile (3, Insightful)

fuzzyfuzzyfungus (1223518) | about 10 months ago | (#43941405)

It's also not needed for most work environments.

It is extremely convenient when doing large building and/or campus networking, though...

Sure, it makes very little sense to do 10Gb to the drop (barring fairly unusual workstation use cases); but if all those 1GbE clients actually start leaning on the network (and with everybody's documents on the fileserver, OS and application deployment over the network, etc., you don't even need a terribly impressive internet connection for this to happen), having a 1Gb link to a 48-port (sometimes more, if stacked) switch becomes a bit of an issue.

Same principle applies, over shorter distances, with datacenter cabling.

Re:Meanwhile (1)

Anonymous Coward | about 10 months ago | (#43941799)

(Forgot to mention: unless your existing fiber sucks, you can often turn a 1Gb link into a 10Gb one just by swapping the hardware on the ends. This can start to look very cost-effective, especially if you were planning a switch upgrade anyway, compared to getting the fiber guys out to fish another bundle of the stuff between buildings.)

Re:Meanwhile (5, Insightful)

lightknight (213164) | about 10 months ago | (#43941517)

Oh for crying out loud. Where do you people get off with this kind of thinking? How are you even allowed in technology fields with a mind like that?

It's not needed...technology is about advancing because it's WANTED. It's not run by committee, and it's not run by determination of some group need, because if it were, we'd still be living in caves and worshiping rocks, because fire isn't needed by anyone.

And the reason, reading between the lines, for it taking so long to be adopted, is because everyone has become cheapskates when it comes to technology. The idea of a separate NIC to handle network traffic is a lost cause, as is a dedicated sound card, and now video card. Why? Because you're trying to justify to a group of people who refuse to educate themselves why it would be in their own best interest to pay a little more.

I applaud the people behind 10GbE, and hope they have enough resources / energy to bang out 100GbE. This is progress we can measure, easily, and it should be rewarded.

Re:Meanwhile (1)

Archangel Michael (180766) | about 10 months ago | (#43941625)

Wants cost a lot more money than needs do. I WANT a Ferrari, but what I need is a car to get me to and from work. Which one costs more? When the Ferrari becomes as inexpensive as a Ford, let me know; I'll buy two.

Re:Meanwhile (2)

hairyfeet (841228) | about 10 months ago | (#43941713)

Dude, you wanna blow money like shit through a goose? Knock yourself out, just don't expect the rest of us to subsidize your fetish when we honestly do not need it.

And I'm sorry, but unless FTTH becomes common (which I seriously doubt, as most ISPs aren't laying any new lines, much less laying fiber) there really isn't any need for this, there really isn't. The average PC just won't be able to write data to its HDDs at even 1Gbps. Last I looked, the national average for net speed in the USA is a lousy 30Mbps, so you are adding this super-sized pipe when a good 90% of the folks out there won't even be able to max out 1Gbps; hell, I doubt most would even be able to max out a 100Mbps line. So it's really not needed.

But like 4K and 3D TVs, if you wanna blow money on shit when the infrastructure to make use of it really doesn't exist? Knock yourself out, pal; nobody is stopping you. But it's not a conspiracy why any of this shit hasn't taken off; it's because the infrastructure to make it worth having really doesn't exist for most of the USA.

Re:Meanwhile (1)

Bengie (1121981) | about 10 months ago | (#43941971)

Doubling Internet speeds gives a fairly consistent 0.7% increase in GDP. Now go from 30Mb to 1Gb and a nationwide fiber roll-out will pay for itself in less than a year, assuming the average holds.

wiki: In measurements made between January and June 2011, the United States ranked 26th globally in terms of the speed of its broadband Internet connections, with an average measured speed of 4.93 Mbit/s

And I call BS on people having usable 30Mb connections. How many people actually get their rated speeds most of the time when they "need" it? I know there are boards full of people who complain that they have nothing but unstable Internet, but they have no other options.

Re:Meanwhile (0)

Anonymous Coward | about 10 months ago | (#43942083)

While I am not American, I have a rated speed of 120Mbps and I get about 95-100Mbps. I get that in speed tests as well as in real-world scenarios.

I would say that for consumers, it's not so much the motherboard. Those are usually fast enough, but the routers in between often aren't. So I think the first thing we need is a drop in price for gigabit routers.

Re:Meanwhile (0)

Anonymous Coward | about 10 months ago | (#43941749)

No. Quality engineering is about solving needs practically, using as little time and resources as possible. Not about spending loads of cash on things just because they're cool.

Re:Meanwhile (1)

Anonymous Coward | about 10 months ago | (#43941893)

Do you have a 10 inch water pipe servicing your home? Do you have a 10 kV, 100 A electrical service to your home?

There is a difference between wanting something because you know it will improve your life in some way, and wanting something because it might have some chance of being useful down the line. The former is arguably a "need" depending on your priorities. Regardless, if I can't even fully utilize a technology 99% of the time, and the 1% of the time I could, it would only save me a few seconds or minutes, it is difficult to argue that I even want it, in the sense that I would be willing to pay extra for it. If it were free, sure, I would take it just in case, but for any non-trivial cost, it is not needed or wanted.

Re:Meanwhile (1)

djrobxx (1095215) | about 10 months ago | (#43941735)

The most obvious reason adoption was slow is that it's not that easy to fill even a GbE pipe. Spinning disks typically don't do much over 100MB/sec anyway. Sustained writes in a disk speed test are one thing; real-world tasks like copying a folder full of home photos and videos are another. Not only is 10GbE not needed for home use, the need for big bandwidth is actually lessening as streaming options improve. I was very excited to get my network up to gigabit in the early 2000's, when I used to copy DVD images down to the HTPC in order to play them. Trying to mount the image remotely would result in stuttering and buffering issues.

Today I can stream a 1080p 3D video across a 100mbps MoCA adapter without much thought. Media players and codecs have all been tuned to deal with internet-based video, so local traffic is a snap. Sadly this house doesn't have CAT5 going to the home theater like my last home so I'm stuck with MoCA, but I've found that it's not really limiting.

I'm all for 10GbE becoming standard, just for those times when I want to transfer data from one laptop to another, but it's not something I've been waiting with bated breath for. It might have been more interesting if technologies like PXE network-booting had gone more mainstream.

Re:Meanwhile (1)

kasperd (592156) | about 10 months ago | (#43942059)

The most obvious reason adoption was slow is that it's not that easy to fill even a GbE pipe.

On the typical home network, this may very well be true. I can fill a 1Gbit/s link at home, if I really want to. But for real data transfers, the network link does not tend to be the bottleneck.

On production servers, things look different. I have worked in a place where 1Gbit/s links were a very problematic limitation for some of our servers. We would have loved to have 10Gbit/s on board.

Re:Meanwhile (1, Insightful)

homey of my owney (975234) | about 10 months ago | (#43941387)

What the hell do you do at home that would require a 10GbE network?

Re:Meanwhile (2)

lister king of smeg (2481612) | about 10 months ago | (#43941415)

Stream HD video to multiple nodes from a network file server.

Re:Meanwhile (0)

homey of my owney (975234) | about 10 months ago | (#43941505)

How on earth does this take 10Gb? Are you running a motel?

Re:Meanwhile (2)

kasperd (592156) | about 10 months ago | (#43942073)

How on earth does this take 10Gb?

Who said it does? If you need 1.1Gbit/s of sustained traffic, then a 1Gbit/s link will not be sufficient. The next step upwards is 10Gbit/s.

Re:Meanwhile (1)

Anonymous Coward | about 10 months ago | (#43941843)

I challenge you to do something if you actually believe that. Look up the stream rate of your compressed HD stream. Okay, have you done that? What is it? 10MB/s? 15MB/s? So, how many nodes are you streaming to? Oh, but wait, you said it was from a single server. Well, seeing as you're storing HD video, I'd imagine you're not using SSDs, as that'd be ludicrously expensive, so you're probably using standard rotary hard disks, possibly in an "OMG IT'S THE FASTESTEST!!!!!" RAID 0 array whose theoretical maximum throughput is higher than a gigabit. Except, realistically, everybody is watching different things; if they were all watching the same thing, you'd just use multicast, send the packets only once, and not need much bandwidth. This means your super-high-speed RAID 0 array is actually spending most of its time seeking, not reading and sending your data. So the actual throughput it can achieve is far below even a gigabit, and could probably be serviced even by a 100Mbit network.

So sorry, try again.

Re:Meanwhile (1)

m.dillon (147925) | about 10 months ago | (#43942061)

Depending on the level of compression a full HD (1080p) stream requires between 400KBytes/sec and ~2 MBytes/sec of bandwidth. That is, approximately 4MBits-20MBits.

Needless to say, even 100MBit ethernet has no problem with a couple of those, let alone existing 1-gigabit ethernets.

At 2160p (which is what people call 4K, for 3840x2160), perhaps ~1 MByte/sec to ~5 MBytes/sec depending on the level of compression and the complexity of the video. That is, somewhere north of 50 MBits on the top end. Despite having four times the pixels you don't actually need four times the bandwidth for a high quality stream. Standard Gigabit ethernet is still plenty good enough for a dozen and a half 4K streams.
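To put rough numbers on that, here's a minimal back-of-the-envelope sketch; the per-stream rates are the approximate figures above, not measurements, and protocol overhead is ignored:

<ecode>
# Rough estimate of how many compressed video streams fit on a link.
# Per-stream rates are assumptions based on the approximate figures above.
STREAM_MBIT = {
    "1080p (heavy compression)": 4,
    "1080p (light compression)": 20,
    "2160p/4K": 50,
}
LINK_MBIT = {"100Mbit": 100, "1GbE": 1000, "10GbE": 10000}

for link, capacity in LINK_MBIT.items():
    for name, rate in STREAM_MBIT.items():
        # Ignores protocol overhead; order-of-magnitude only.
        print(f"{link}: ~{capacity // rate} simultaneous {name} streams")
</ecode>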

-Matt

Re:Meanwhile (0)

Anonymous Coward | about 10 months ago | (#43941495)

"640K is more memory than anyone will ever need on a computer"

That may be a false quote, but the counterpoint still stands. The need for the technology evolves with the technology itself as we learn new ways to take advantage of it, even if you can't imagine those new uses now.

Re:Meanwhile (0)

Anonymous Coward | about 10 months ago | (#43941585)

It's not even a correct false quote; the most common version is "640k ought to be enough for anyone," which didn't imply that it would be enough for all time... but when the IBM PC debuted in 1981, most microcomputers had only 16k, 32k, 48k, or 64k. Aside from TSR programs, DOS PCs could only run one program at a time anyway, so 640k was HUGE. You're right, though, the need for more memory bumped up against that limit in pretty short order.

That said, 10gb would be awesome for streaming uncompressed HD video around my house. Compression is only a benefit when it's necessary. If all the hardware were fast enough, and storage big enough, why bother compressing?

satellite tv has to compress att does even fios hi (1)

Joe_Dragon (2206452) | about 10 months ago | (#43941829)

Satellite TV has to compress; AT&T does too, and even FiOS hits the wall in QAM space.

Re:Meanwhile (1)

lgw (121541) | about 10 months ago | (#43941571)

Until sustained R/W speed on disk passed 10MB/s, I had no use for 1 GbE either! But it looks like it won't be too long before the network is back to being the bottleneck on network file copies/backups again, even on simple non-RAID volumes.

Current high-cost item: The 10Gb switch (1)

johanwanderer (1078391) | about 10 months ago | (#43941931)

The current high-cost item is the 10GbE switch. Those things are way too expensive ($10K+ range, and the plus goes way up).

Also, for flexibility, you want SFP+ ports and adapters for each port. None of those are cheap.

Re:Current high-cost item: The 10Gb switch (2)

lgw (121541) | about 10 months ago | (#43942101)

That's been true at each jump in speeds (well, other than the details of the connection). The Ethernet chip-on-motherboard heralds the price fall on the switch - at the scale the 10GbE chips will soon be made, their price will fall (and thus the price of port-specific electronics in the switch will fall too), and then reasonably-priced unmanaged switches from low-end vendors follow soon after.

The real reason (1)

Sinryc (834433) | about 10 months ago | (#43941065)

I think it's a combo of the crappy economy, but then again maybe the need for wide adoption just wasn't there. I would think it's like anything else: if the demand were there, the supply would have ramped up and the costs would have come down.

Re:The real reason (4, Insightful)

redmid17 (1217076) | about 10 months ago | (#43941117)

Biggest reason I can remember from when we were looking at upgrading SAN and LAN equipment in our data center was the price/performance point. We didn't need 10 GbE performance yet and the price was pretty far above what we were using. That was 3 years ago though, so I'd have to poke around some of the newer equipment to see if we have any boxes with it. I just took a gander through the HP and Dell offerings and it's not even an option on anything but the top tier equipment. I think that pretty much explains the situation itself.

Re:The real reason (1)

Anonymous Coward | about 10 months ago | (#43941133)

Indeed, maybe the real reason is enterprise SSDs coming up.

Re:The real reason (0)

Anonymous Coward | about 10 months ago | (#43941293)

I beg to differ; 10GbE is an option on entry-level gear. Look at the Dell MD3600i "SAN" (I use that word lightly): the PowerVault MD series is by far not high end, and the configuration is nerfed so an entry-level person can configure it; heck, getting it to work was very counter-intuitive for me. I had a question about the performance in some tests I ran on one, and the Dell tech I called to ask about it even said that if I wanted performance help I should have bought an EqualLogic...

Entry-level SAN, entry-level support for it, yet it was 10GBase-T...

Re:The real reason (0)

Anonymous Coward | about 10 months ago | (#43941469)

So you "beg to differ" but admit that the boxes can't really push anywhere near that speed. So what is your point again?

Re:The real reason (0)

Anonymous Coward | about 10 months ago | (#43941603)

I never said it couldn't push near that speed; the performance was actually almost on par with EqualLogic, and I had some tuning questions at larger file transfer sizes. They still push the 10Gb through it; it's just that the hardware itself is lower grade and has bugs that need to be worked out (apparently the new firmware fixes my issue on the 3200i, but it's not yet available on the 3600i). I was getting over 1GB/s (roughly 9Gbps) throughput on the reads... which is very similar to what I get on an EqualLogic. To be fair, I was quite impressed with the performance for an entry-level "SAN".

Re:The real reason (1)

Tau_Xi (958303) | about 10 months ago | (#43941373)

We're looking at some "cheap" 10GbE stuff. QNAP TS-1279U-RP http://www.qnap.com/useng/index.php?lang=en-us&sn=862&c=355&sc=703&t=704&n=4802 [qnap.com] Can purchase an add-on PCI-E card fairly easily and get a 100,000+ IOPS SAN/NAS combo for around $10k. Hard to beat the price/performance.

Re: The real reason (0)

Anonymous Coward | about 10 months ago | (#43941661)

Combined with 4TB entry level enterprise drives they make for fantastic bulk storage or backup targets.

There is also a 16 bay edition with ECC ram, making 50+TB of raid6 storage possible on a small business budget.

Re:The real reason (0)

Anonymous Coward | about 10 months ago | (#43941247)

Another big reason was no standard for 10GBase-T... We've been using Twinax 10G cabling ($80 for a 6ft cable) and SFP+ switches. Now that 10GBase-T is here, even though the switches and NICs are still similar in price to their SFP+ counterparts, wiring a rack with 10Gb networking has become quite a bit cheaper; that's one key component that I think a lot of people forget.

We've had 10Gb iSCSI running on that for the past three years, and I must say, it's quite astounding how quickly you get used to it.

Re:The real reason (2)

Guspaz (556486) | about 10 months ago | (#43941427)

The fact that you can use existing commodity Cat 6 cables for up to 55m with 10GigE will help a lot too. Yes, Cat 6a cables are required for the full 100m distance, and yes, Cat 6a cables are themselves cheap ($4 for that 6 feet you mention costing $80 with Twinax), but for lengths under 55m, the cabling that you've already got will continue working at the higher speeds. I think that will be a big factor, especially for consumers, where cables longer than 10 or 15 metres are incredibly rare anyhow.

Right now, I think the NIC/port price is the major roadblock. Netgear made a big deal about dropping below $125 per 10gig port on their switches a while ago, and that's a step forward, but it's still a thousand bucks for an 8-port switch (probably on the larger end of what you'd find in a home network), and the cheapest 10 gig NIC on NewEgg is $345. That's in the "affordable for enthusiasts" range, though. If you're willing to spend seven hundred bucks, you could connect your desktop to your home file server over 10 gigabit, for example, and two or three thousand bucks could get a bunch of devices on a network. Expensive, but not "mortgage your house, this is only for multi-billion dollar enterprise use" levels like it was a few years ago.

Re:The real reason (0)

Anonymous Coward | about 10 months ago | (#43941667)

Exactly my point. I think it'll still be relegated to the virtualization world, connecting to shared storage, for a while yet, until the NICs and switches come down in price. But I could see, by the end of the year, some 1/10Gb switches coming out with 2-4 10Gb ports for LAGs (similar to the old 24-port 100Mbps switches with two 1Gb uplink ports).

About damn time. (0)

Anonymous Coward | about 10 months ago | (#43941071)

But hey - we had to wait for the system bus, right?

Cost (3, Insightful)

Anonymous Coward | about 10 months ago | (#43941099)

10GE Motherboards are still pointless when 10G routers & switches are still way too expensive.

Re:Cost (1)

neokushan (932374) | about 10 months ago | (#43941453)

It's a case of demand. There's no demand for those routers and switches because motherboards don't have 10GbE ports on them. Motherboards don't have 10GbE on them because there's no cheap routers or switches. Something has to give eventually and the motherboard probably makes the most sense to give in first.

Re:Cost (1)

lightknight (213164) | about 10 months ago | (#43941543)

Chicken and Egg, Bob. If I have a bunch of devices held back only by a few switches that can easily be replaced, the switches, when they drop a little in price, are getting replaced.

It's totally different when I need to rip out every single component, down to the wiring in the walls, to upgrade the network.

Re:Cost (1)

nabsltd (1313397) | about 10 months ago | (#43941547)

10GE Motherboards are still pointless when 10G routers & switches are still way too expensive.

Absolutely true. You can get a single-port 10Gb card that uses Cat6 cabling for less than $300, but the cheapest switch with more than eight 10Gb ports is around $8000. You can piece together a switch with 6-8 10Gb ports (using modules) for around $4000.

So, the reality is that you will pay 1x-3x the cost of the 10Gb NIC for a port to plug it into. Although that is less than the relative cost per port for high-end 1Gb managed switches, that's because the cost of a 1Gb NIC is basically pennies.

Re:Cost (0)

Anonymous Coward | about 10 months ago | (#43941927)

You can use cat5 for short runs. :D

My idea of the perfect cable (1, Interesting)

kipsate (314423) | about 10 months ago | (#43941105)

My idea of the perfect cable:

Four strands, two copper, two fiber.
The two fiber strands enable redundancy (ring topology all the way to the end-point);
The two copper strands provide power to devices.

That's it. That's all that's needed.

Re:My idea of the perfect cable (0)

Anonymous Coward | about 10 months ago | (#43941183)

Old enough not to remember Token Ring diagnostics?

Re:My idea of the perfect cable (3, Insightful)

D1G1T (1136467) | about 10 months ago | (#43941197)

What you describe exists. It's not uncommonly used for IP cameras outside the 100m limit of TP Ethernet (on perimeter fences, etc.). The problem with fibre is that it's a bitch to terminate compared to copper, and therefore quite a bit more expensive to install on a large scale. Fibre still only makes sense when you need the long cable runs.

Re:My idea of the perfect cable (1)

Nethead (1563) | about 10 months ago | (#43942091)

Also a lot of big box stores. Lowe's stores are mostly fiber, and they just did a huge upgrade from 10Mb/s to 1Gb/s last year.

Side note distance story:
Had a trouble ticket for a Home Depot where we found that one of the printers up front was wired all the way back to the data center in the opposite corner, about 550 feet. Our temporary fix was to drop the port down to 10Mb/s until we could get a lift in to run a line to the IDF by the printer.

Re:My idea of the perfect cable (2)

jon3k (691256) | about 10 months ago | (#43941217)

For what? What's the application? Way too expensive to run to my IP phone or desktop PC (could just use fiber or copper, why both?). Unnecessary in the datacenter (we don't need PoE). What's the use case?

Re:My idea of the perfect cable (1)

fuzzyfuzzyfungus (1223518) | about 10 months ago | (#43941289)

For what? What's the application? Way too expensive to run to my IP phone or desktop PC (could just use fiber or copper, why both?). Unnecessary in the datacenter (we don't need PoE). What's the use case?

Purists demand that One Cable Rule Them All. This naturally leads to a One True Cable that is wildly overengineered and expensive for the keyboards and mice and IP phones of the world, while still failing to support common, but in some way unusually demanding, edge scenarios.

4k HD (0)

Anonymous Coward | about 10 months ago | (#43941135)

probably needed for 4k HD..

Re: 4k HD (0)

Anonymous Coward | about 10 months ago | (#43941791)

No, not even close.
Both HD and 4K work fine on a 100Mbit network.

Commodity (3, Insightful)

rijrunner (263757) | about 10 months ago | (#43941163)

Of course its growth was going to be lower.

The primary use of 10GbE is virtualization. The number of network cards is a function of the number of chassis, not the number of hosts. Numerically, 10GbE is not just ten 1GbE cards: you can split the 10GbE between a lot of hosts, and you can easily double, triple, or even quadruple that, making one 10GbE card the equivalent of the 1GbE cards on 40 servers, depending on their load and use. Instead of buying 40 servers and associated cards, you're buying one larger chassis with larger pipes. In a large server-farm environment, it makes sense.
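As a rough illustration of that consolidation math (the per-host average load below is an assumed figure, not one from the article):

<ecode>
# Back-of-the-envelope oversubscription check: 40 virtualized hosts that each
# used to have a 1GbE NIC, consolidated behind one 10GbE card.
hosts = 40
avg_host_load_mbit = 200      # assumed average utilization per host (Mbit/s)
uplink_mbit = 10_000

aggregate = hosts * avg_host_load_mbit
print(f"Average aggregate load: {aggregate} Mbit/s "
      f"({aggregate / uplink_mbit:.0%} of the 10GbE card)")
# At these assumed numbers the card runs at ~80% average utilization;
# simultaneous bursts from many hosts will queue, which is the usual
# trade-off accepted when oversubscribing.
</ecode>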

Throw in the fact that a network is only as fast as its narrowest choke point, and there is no reason to put a 10GbE card behind a 7Mb DSL connection.

What 10GbE needs to become a commodity is a) end of any data caps, b) data to put down that pipe, and c) a pipe that can handle it.

Show me fiber to my door, and then it will be a commodity.

Re:Commodity (1)

jon3k (691256) | about 10 months ago | (#43941229)

That logic doesn't really hold. We moved from 100Mb Fast-E to 1GbE and residential broadband speeds had nothing to do with it.

Re:Commodity (1)

rijrunner (263757) | about 10 months ago | (#43941423)

Most people use ISP-issued DSL or cable modems for networking. Commodity use is directly tied to broadband, and those modems shipped based on the tech supported by the ISP. Switching to 1GbE on the switch side tracks with when companies implemented DOCSIS 2.0. When they move to DOCSIS 3.0, then you'll see an upgrade in the networking layer in residential use.

Re:Commodity (0)

Anonymous Coward | about 10 months ago | (#43941443)

True, although the difference there is that 100Mb isn't fast enough to saturate a home consumer hard drive while 1GbE is (or at least is close to it), so 1GbE is actually a noticeable improvement for local networking for home users (most home users only use their network for connecting to the internet, of course, but some do do other things with it). 10GbE is faster than the vast majority of home users can make use of. I can see it being useful for things like file servers or data centers, but it's overkill for anything a home user is doing these days (although it may not be in the future).

I see the joke! (0)

Anonymous Coward | about 10 months ago | (#43941167)

[...] and I suspect that in just a few years, we'll start to see it on home PC boards, with the price dropping from the double digits to single digits, and then even down to cents.

*pfffft* HA ha ha ha ha HA... oh, man, that's a good one. The idea that home PC boards will continue to exist in a few years! And that consumer-level devices will have wired networks to begin with! That's hilarious! Thanks, I needed a good laugh today!

The bottlenecks are elsewhere (3, Insightful)

AdamHaun (43173) | about 10 months ago | (#43941191)

Ten gigabits per second is 1,250 megabytes per second. High-end consumer SSDs are advertising ~500 MB/sec. A single PCIe 2.0 lane is 500 MB/sec. Then there's your upstream internet connection, which won't be more than 12.5 MB/sec (100 megabits/sec), much less a hundred times that. I guess you could feed 10GbE from DDR3 RAM through a multi-lane PCIe connection, assuming your DMA and bus bridging are fast enough...

I'm sure a data center could make use of 10GbE, but I don't think consumer hardware will benefit even a few years from now. Seems like an obvious place to save some money in a motherboard design.
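To make the bottleneck argument concrete, here's a minimal sketch using the rough numbers above; they're approximations, not measured specs:

<ecode>
# Which hop caps an end-to-end transfer? Figures in MB/s, taken from the
# rough numbers above; they are approximations, not measured specs.
path_mb_per_s = {
    "10GbE link": 1250,
    "single PCIe 2.0 lane": 500,
    "consumer SSD": 500,
    "100Mbit upstream internet": 12.5,
}

bottleneck = min(path_mb_per_s, key=path_mb_per_s.get)
print(f"End-to-end rate is capped at ~{path_mb_per_s[bottleneck]} MB/s "
      f"by the {bottleneck}")
</ecode>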

Re:The bottlenecks are elsewhere (1)

jon3k (691256) | about 10 months ago | (#43941241)

We're years (probably a decade+) away from any significant demand (read: more than low single digit percentage) for 10Gb for personal use.

Re: The bottlenecks are elsewhere (0)

Anonymous Coward | about 10 months ago | (#43941911)

Connection between my home server and desktop is limited by network speed. The server writes at 550MB/s and the desktop reads at almost twice that. I'd love a 10GbE interface on each end.

Of course Internet speeds are not quite there yet; most people are limited to 1/100th of a 10GbE interface.

Re:The bottlenecks are elsewhere (1)

Anonymous Coward | about 10 months ago | (#43941439)

You're saying that 10GbE is not worth it because we are being limited by 500MB/s drives. What you failed to consider is that our 500MB/s drives are currently being limited by 1GbE.

Re:The bottlenecks are elsewhere (1)

Anonymous Coward | about 10 months ago | (#43941455)

Datacenters have been using 10GbE for years. Multi-node storage devices with 10GbE interfaces linked to 700+ blade servers (some 1GbE and others 10GbE) running clustered. Doesn't take much to saturate it.

Re:The bottlenecks are elsewhere (5, Insightful)

Guspaz (556486) | about 10 months ago | (#43941475)

You're looking at things backwards. If you've got a 500 MB/s SSD, then you shouldn't look at 10GigE and say "that's twice as fast as I need, it's useless". You should look at the existing GigE and say "my SSD is four times faster, one gigabit is too slow"...

Even a cheap commodity magnetic hard disk can saturate a gigabit network today. The fact that lots of computers use solid state drives only made that problem worse. Transferring files between computers on a typical home network these days, I think the one gigabit per second network limitation is going to be the bottleneck for many people.

Re:The bottlenecks are elsewhere (3, Insightful)

AdamHaun (43173) | about 10 months ago | (#43941805)

You're looking at things backwards. If you've got a 500 MB/s SSD, then you shouldn't look at 10GigE and say "that's twice as fast as I need, it's useless". You should look at the existing GigE and say "my SSD is four times faster, one gigabit is too slow"...

If I want to copy tons of large, sequentially-read files every day, maybe. (Assuming that 500 MB/sec actually hits the wire instead of bottlenecking in the network stack.) But I'm not sure why I would do that. If I have a file server, my big files are already there. If I have a media server, I can already stream because even raw Blu-ray is less than 100 Mbps. If I'm working on huge datasets, it's faster to store them locally. If I really need to transfer tons of data back and forth all the time, I'm probably not a typical home network user. ;-)

Re:The bottlenecks are elsewhere (1)

Anonymous Coward | about 10 months ago | (#43941479)

You rarely if ever get the full transfer rate so when using 10 GbE you're going to get something less than that in actual use due to overhead.

I find this technology very useful even for my home. I want to put a large file server in the closet where I can't hear it (ventilated of course). Then my workstation can be diskless. Couple of fanless video cards, minimal case cooling, giant silent CPU fan, and now I have a nearly silent workstation with terabytes of storage for all my databases, virtual machines, etc.

Re:The bottlenecks are elsewhere (1)

st3v (805783) | about 10 months ago | (#43941509)

I respectfully disagree. Although we may not be able to take advantage of the full 10GbE throughput right away due to limitation in I/O devices, it is still faster to transmit something over the network on a 10GbE link. For example, regarding the 500MB/s SSD you mentioned, the transmit speed on 10GbE will cap to ~4Gb/s. It's still faster than 1GbE.

Re:The bottlenecks are elsewhere (1)

jgrahn (181062) | about 10 months ago | (#43941579)

Ten gigabits per second is 1,250 megabytes per second. High-end consumer SSDs are advertising ~500 MB/sec. A single PCIe 2.0 lane is 500 MB/sec. Then there's your upstream internet connection, which won't be more than 12.5 MB/sec (100 megabits/sec), much less a hundred times that. I guess you could feed 10GbE from DDR3 RAM through a multi-lane PCIe connection, assuming your DMA and bus bridging are fast enough...

More importantly, you can't make an IP stack consume or generate 10Gbit on any hardware I know of, even if the application is e.g. a TCP echo client or server where the payload gets minimal processing. The only use case is forwarding, in dedicated hardware, over 1Gbit links. 10Gbit is router technology, until CPUs are 5--10 times faster than today, i.e. forever.

Re: The bottlenecks are elsewhere (0)

Anonymous Coward | about 10 months ago | (#43941665)

Yes you can - I have done 50 Gbps on InfiniBand and the server had plenty to spare.

Re:The bottlenecks are elsewhere (1)

adri (173121) | about 10 months ago | (#43941949)

Netflix OpenConnect pushes 20GBit+ on a FreeBSD-9 base with nginx and SSDs. Over TCP. To internet connected destinations.

Please re-evaluate your statement.

Also it is a matter of what you need (2)

Sycraft-fu (314770) | about 10 months ago | (#43941597)

For many things you do, you find 1gbit is enough. More doesn't really gain you anything. It is enough to stream even 4k compressed video, enough such that opening and saving most files is as fast as local access, enough that the speed of a webpage loading is not based on that link but something else.

Every time we go up an order of magnitude, the next one will matter less. There will be fewer things that are bandwidth limited and as such less people that will care about the upgrade.

As you say, 10gbit, or even more, is useful in many datacenters. But at home? What the fuck would I do with it? I guess I could... copy files faster from my desktop to server? Well except my server uses a magnetic drive that is slower than gigabit.

And, of course, you get to re-run all your cables. Gig works over Cat-5e, of course, which has been used for a while, and with ASICs on smaller processes it actually usually works over Cat-5. So you can have some pretty old wiring and just knock in a gig switch and cards and call it good. 10gig needs Cat-6a. That is new, expensive, and a pain in the ass to work with.

Bandwidth is not something where we need "MOAR ALL OF THE TIMES!!"; it isn't something we need to just seek to increase at any cost. Rather it is something that we need to have enough of to make it not a bottleneck for whatever it is we are doing. Well, for a lot of network stuff these days, gig is that. It is fast enough that it doesn't slow things down, at least not a significant amount. So that's all you need.

Same shit with BW anywhere else. You find that increasing memory bandwidth past a point with current CPUs is useless. Like with a Core i7-2600, increasing memory speed up to 1600MHz seems to help, but past that it doesn't matter except in synthetic benches. The memory bandwidth isn't an issue. With graphics cards the PCIe 3.0 upgrade did fuck-all since it turns out PCIe 2.0 4x is almost always enough bandwidth, and 8x is more than enough, so PCIe 3.0 16x is doubling something you already have more than enough of.

As things progress we'll probably see more uses for 10gig, and thus it'll get rolled out wider. However it is the kind of thing that'll happen as needed, not that'll happen just because it can. We'll upgrade our building when it needs to be. When our uplinks are getting saturated, we'll take those to 10gig. When there is a reason to get it to the desktop, we will. However we aren't going to run out and drop 6 figures to go 10gig right now just for the sake of doing it.

Re:The bottlenecks are elsewhere (2)

nabsltd (1313397) | about 10 months ago | (#43941605)

I'm sure a data center could make use of 10GbE, but I don't think consumer hardware will benefit even a few years from now.

10GbE would mean you could move your storage off your local machine to your NAS, since those remote disks would be as fast as the average local disk. There are a lot of uses for this, like saving money by only having programs/data on one set of disks, but still having very fast access.

No, not every home user could benefit from this, but not every home user benefits from 1GbE, either.

Re:The bottlenecks are elsewhere (1)

the eric conspiracy (20178) | about 10 months ago | (#43941679)

So storage access is already 5x faster than 1GbE.

Sounds to me like 10GbE is already overdue.

For the cluster I develop for at work we have a 40Gb InfiniBand LAN. For serious IT I'd skip 10GbE now and go to IB.

Re:The bottlenecks are elsewhere (1)

thegarbz (1787294) | about 10 months ago | (#43941863)

Seems like an obvious place to save some money in a motherboard design.

Savings are only available right now. 10Mbit chips are actually more expensive now than 10/100. Older style cards even more so. It's all about economies of scale. Given enough years 10Gbit may become the standard and it may be too expensive to produce slower boards.

Bottlenecked by spinning rust (0)

Anonymous Coward | about 10 months ago | (#43941219)

Until SSDs in large capacities are available at reasonable prices, I suspect the demand for 10GbE will remain low.

Re:Bottlenecked by spinning rust (1)

Guspaz (556486) | about 10 months ago | (#43941485)

Define "large capacities"? Most notebooks sold at a thousand bucks or more use SSDs for primary storage now (certainly all the ultrabooks and tablets do), and even the $700 Dell notebook that got recently has a 32 gig SSD for caching (Intel SRT).

dropping to cents (1)

CAIMLAS (41445) | about 10 months ago | (#43941259)

Don't count on the price of 10GigE dropping to cents. Unlike GigE, 10GigE has very little competition outside costly 'enterprise' technologies - Fibre Channel, InfiniBand, etc.; if you want more than GigE speeds, it's going to cost you. Those were costly technologies then - but back then, they offered significantly more performance (and thus value) than GigE. With 10GigE, there is no financial incentive to drop costs.

Still waiting for 1G (1)

pcjunky (517872) | about 10 months ago | (#43941279)

Most of my customers are still running 100Base-T and see little reason to upgrade, since their networks primarily exist to distribute Internet access. What took so long? Nobody seems to really want it. Slashdot crowd notwithstanding.

Limited use (1)

nine-times (778537) | about 10 months ago | (#43941291)

I would argue that part of the issue is that 10GigE connections have limited use. Not that they're not useful, but at this point, with the amount of data we're moving around, most people aren't going to see a huge benefit over existing solutions. It's a little like why desktop computer sales have slowed in general: what people have now is kind of working "well enough".

Of course, part of the problem is that a lot of what people are doing now is over the Internet, which means that you're bottlenecked by your ISP. It doesn't matter as much if you have a 100Mb or 1Gb or 10Gb adapter if you're doing an Internet transfer bottlenecked to 8Mb.

Connectors (1)

Guybrush_T (980074) | about 10 months ago | (#43941339)

The main reason why 10GbE took time to arrive is simple: connectors are not the good old RJ45 used for 10Mb, 100Mb and 1GbE. The RJ45 connector is small, cheap and backward compatible. The 10GbE connectors are deep, expensive and not RJ45-compatible, hence cannot be used as a 1GbE port.

10GbE is appearing on servers because the price order is compatible with the expensive and deep connector. It won't appear on commodity motherboards until a smaller connector is designed.

Re:Connectors (0)

Anonymous Coward | about 10 months ago | (#43941395)

There are a number of different media and connector types. 10GBase-T uses familiar RJ45 connectors.

Re:Connectors (0)

Anonymous Coward | about 10 months ago | (#43941523)

welcome to last year my friend...

10GBase-T (10Gig over copper) uses standard RJ45 Cat6 cables (for distances under 55m; for the standard 100m you need Cat6A, still RJ45).

The other connector you're referring to is SFP+ cabling, which, while yes more expensive, is hardly backwards-incompatible: I've got an 8-year-old Cisco switch which just happens to have two of those ports sharing ports 23 and 24... (generally used back then for fiber uplinks, since you put a GBIC in the slot and attached the fiber).

Re:Connectors (0)

Anonymous Coward | about 10 months ago | (#43941573)

The 10GbE connectors are deep, expensive and not RJ45-compatible

You're confusing the GBIC transceiver connector with 10GbE ports.

Re:Connectors (0)

Anonymous Coward | about 10 months ago | (#43941607)

10GBASE-T uses RJ45. https://en.wikipedia.org/wiki/10-gigabit_Ethernet#10GBASE-T

Re:Connectors (1)

nabsltd (1313397) | about 10 months ago | (#43941647)

The main reason why 10GbE took time to arrive is simple: connectors are not the good old RJ45 used for 10Mb, 100Mb and 1GbE. The RJ45 connector is small, cheap and backward compatible. The 10GbE connectors are deep, expensive and not RJ45-compatible, hence cannot be used as a 1GbE port.

I use 10Gb over Cat6 in my home (to connect my servers to the SAN). It's really easy to find 10GbE with RJ45 connectors, like this card [newegg.com].

Re:Connectors (0)

Anonymous Coward | about 10 months ago | (#43941985)

I have several 10G cards from several vendors. You're absolutely wrong. Not only is the connector an RJ45 connector, but you can do 100/1G/10G over Cat5 or Cat6. 10G will be limited to a short run, but you can do it error-free!

LACP (0)

Anonymous Coward | about 10 months ago | (#43941361)

it's trivial to enable LACP to bond several 1 gbps links. no new equipment, no new cabling. that would have slowed down my 10 gbps deployment.

Re:LACP (3, Informative)

bbushvt (1839406) | about 10 months ago | (#43941575)

it's trivial to enable LACP to bond several 1 gbps links. no new equipment, no new cabling. that would have slowed down my 10 gbps deployment.

10x1Gb != 1x10Gb. Your LACP bond still limits a single stream to a single link. Even with multiple streams, you would have to have a lot of them in order for them to hash out to all the links.
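A minimal sketch of why that is: the bond picks one member link per flow by hashing the flow's addresses and ports, so a single TCP stream always lands on the same 1Gb link (the hash below is purely illustrative, not any vendor's actual algorithm):

<ecode>
import hashlib

LINKS = 4  # four bonded 1GbE links

def member_link(src_ip, dst_ip, src_port, dst_port):
    # Toy LACP-style hash: one flow always maps to one member link.
    # Illustrative only -- real switches and hosts use their own hash policies.
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % LINKS

# A single big transfer is one flow, so it always uses the same link
# and is limited to 1 Gbit/s no matter how many links are bonded:
print(member_link("10.0.0.1", "10.0.0.2", 40000, 445))

# Many distinct flows spread across links, but only statistically:
print([member_link("10.0.0.1", "10.0.0.2", 40000 + i, 445) for i in range(8)])
</ecode>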

Expensive (2)

Dwedit (232252) | about 10 months ago | (#43941429)

The best reason I can think of not to buy a 10-gigabit Ethernet card is simple: The cheapest ones go for $351 on Newegg. Want an Ethernet switch to go with that? That will be $1036.

So once again, the answer is simple, and it has to do with a dollar sign.

Gigabit equipment got really cheap fairly quickly, but not so much for the 10-gigabit equipment.

Re:Expensive (0)

Anonymous Coward | about 10 months ago | (#43942029)

Well, for the cards, you would have to buy *over* ten 1Gb cards to match the same speed (10G actual throughput is close to the theoretical 10G, since there's no worry about collisions).

PCs do not define IT (1)

Princeofcups (150855) | about 10 months ago | (#43941499)

Please stop talking like your desktop defines IT. 10 Gb ethernet has been around for years for Sun/Oracle servers, IBM servers, Cisco switches, storage arrays, etc. Hell, I could even get 10Gb for my Mac. It hadn't made it into the PC world yet due to office wiring to the desk still being Cat 5. It's hard enough to get 1 Gb connections for the general user.

what applications need 10gb/sec? (0)

Anonymous Coward | about 10 months ago | (#43941507)

What does the average user (or even the most extreme power user) do today that would require 10gb/sec? Even SSDs can't read or write data that fast yet. The only place I can think of where 10GbE is of any use today is in high end server and networking equipment, maybe linking multiple switches together, for connections onto big internet backbones for an entire site, streaming uncompressed video in a TV station or for connecting up a large NAS.

So nice and fast (if you can afford it) (2)

Sarusa (104047) | about 10 months ago | (#43941519)

We have some of these at work, where we do have the need for moving massive volumes of data around. We can get about 99.6% of theoretical throughput in actual use, thanks to the hardware offloading and large frame support. Besides being 10x faster to start with, that's way above any efficiency we get from the 1 GbE ports, though I expect if 10 GbE went commodity you'd lose all the hardware support and you'd be back to the 80-90% range.

Note that to sustain a data feed to one of these you need at least two SATA 6Gbps SSDs in RAID 0. On the receiving end we're not writing to disk, or you'd need ~3-4 RAIDed.

In our case we're feeding 4 10GbE ports on the same machine and using a 10 SSD RAID0 to supply the data with some headroom (we don't care if we lose the data if one fails, these aren't the master copies). We're just using software RAID, but thanks to all the DMA and offloading the CPU usage is quite low.

Now do I need this at home? Well, SSD speeds are far above the ~85 MB/sec 1GbE delivers, but so far the cost hasn't made it worth it. If I'm copying a gigabyte it takes 12 seconds, which I can live with.
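For the efficiency numbers, here's a minimal sketch of the per-frame overhead arithmetic; it's a simplified model that only counts framing plus IP/TCP headers and ignores ACK traffic, TCP options, and host-side limits:

<ecode>
# Approximate TCP-over-Ethernet payload efficiency, standard vs jumbo frames.
# Simplified: counts preamble/SFD (8), interframe gap (12), Ethernet header (14)
# and FCS (4) around each frame, plus IP (20) and TCP (20) headers inside the MTU.
FRAMING = 8 + 12 + 14 + 4
IP_TCP = 20 + 20

for mtu in (1500, 9000):
    payload = mtu - IP_TCP
    wire = mtu + FRAMING
    # ~94.9% for MTU 1500, ~99.1% for MTU 9000 under this simplified model,
    # which is roughly why jumbo frames get you close to theoretical throughput.
    print(f"MTU {mtu}: ~{payload / wire:.1%} of line rate carries payload")
</ecode>

Under that model 1GbE carries roughly 118MB/s of payload even with standard frames, so a sustained ~85MB/s usually points at the host or disks rather than the wire.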

Re:So nice and fast (if you can afford it) (0)

Anonymous Coward | about 10 months ago | (#43941871)

~85MB/s on 1GbE? You are doing it wrong. You should be able to easily get 110MB/s over even FTP, heck, even with default TCP settings in Linux nowadays.

Router / switch costs (1)

vinn (4370) | about 10 months ago | (#43941947)

Cisco and most other vendors have made 10Gb ports too expensive and/or don't have a backplane that can effectively support 10Gb across all the ports. This is pretty ridiculous given how cheap processors have gotten. Even when they do support it, the licensing and maintenance costs can be crazy.

For that reason we're currently deploying several 1Gb connections to our VM servers through various switches (depending on costs per port, reliability needed and location).

I've been hoping that late 2013 is when 10Gb will finally appear for us on our campus trunks at least.

The vast majority are fine with 1GbE (0)

Anonymous Coward | about 10 months ago | (#43941991)

What took it so long? Likely because gigabit Ethernet, and its 100MB/sec transfer speed, is 'good enough' for the vast majority of users. You'd need a pretty substantial RAID array to come close to saturating a 10GbE link. And if you're moving 10 gigs of files, taking 10 seconds rather than 100 seconds is a much less meaningful speed-up than taking 100 seconds rather than 1000.

I Made The Jump (1)

Anonymous Coward | about 10 months ago | (#43942107)

I needed, or at least I thought I needed, a very high-speed connection between some VMware servers and a backup server, all Dell PowerEdge with PERC controllers and 15k RPM drives. I installed 10GbE links between them and thought I was the shit.

But I'm only getting ~600Mbps of real throughput on those 10GbE links, which is no better than a 1GbE link. I checked and troubleshot the issue till I was blue in the face and I still couldn't get any more speed. The problem is that it seems this is as much as the server, its bus and its disk subsystem can push.

I suspect that iperf would get multigigabit performance, but I'm not running a switch, I'm running servers and they can't push the data any faster. :(
