
Terabit Ethernet Is Dead, For Now

samzenpus posted about 2 years ago | from the slow-lane dept.

Networking

Nerval's Lobster writes "Sorry, everybody: terabit Ethernet looks like it will have to wait a while longer. The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group met this week in Geneva, Switzerland, with attendees concluding—almost to a man—that 400 Gbits/s should be the next step in the evolution of Ethernet. A straw poll at its conclusion found that 61 of the 62 attendees who voted supported 400 Gbits/s as the basis for the near-term 'call for interest,' or CFI. The bandwidth call to arms was sounded by a July report by the IEEE, which concluded that, if current trends continue, networks will need to support capacity requirements of 1 terabit per second in 2015 and 10 terabits per second by 2020. In 2015 there will be nearly 15 billion fixed and mobile-networked devices and machine-to-machine connections."
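The two capacity figures in the report imply a startlingly steep growth curve; a quick back-of-the-envelope check using only the numbers quoted above:

```python
# Implied compound annual growth rate from the IEEE report's figures:
# 1 Tb/s needed in 2015, 10 Tb/s by 2020 -- a 10x jump over 5 years.
start_tbps, end_tbps, years = 1.0, 10.0, 5
cagr = (end_tbps / start_tbps) ** (1 / years) - 1
print(f"Implied growth: {cagr:.0%} per year")  # roughly 58% per year
```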

140 comments

Damn the summary (1, Interesting)

Anonymous Coward | about 2 years ago | (#41475589)

I'd love to see the IEEE report that attempts to guesstimate the needs of future Ethernet users.

We need terabit Ethernet NOW, not in a decade.

Re:Damn the summary (5, Insightful)

rufty_tufty (888596) | about 2 years ago | (#41475617)

We need terabit Ethernet NOW, not in a decade.

You know, my 5-year-old nephew keeps confusing need and want too.
How much are you prepared to pay for this desire? If it costs, say, four times more per bit to implement terabit with current technology, do you still want it?

Re:Damn the summary (4, Interesting)

somersault (912633) | about 2 years ago | (#41475715)

And what exactly is he doing over Ethernet that needs that much speed? I'm only just now looking at upgrading our small business network to gigabit. A couple of years ago the cost of a 48-port gigabit switch was pretty high, but now it's very reasonable.

Re:Damn the summary (2)

franciscohs (1003004) | about 2 years ago | (#41476003)

You know these port speeds are not meant to be used on access switches, right? At least not in the beginning; there's no need to. Only high-performance computing and virtualization servers use more than gigabit links today, but 10-gigabit bundles and higher-bandwidth links are used on almost every large network for core connections and core-to-distribution.

Re:Damn the summary (1)

somersault (912633) | about 2 years ago | (#41476177)

Sure, so options are already available in the high end for people who need it. For the original poster to say anybody really "needs" this struck me as a bit much.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41476241)

But if the core infrastructure of the Internet is saturated, then I would say that we all do need it. We don't need it at the local level, however. Similarly, I don't personally need a nuclear missile, but collectively we do.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41478065)

CERN pumps out terabytes of data per second when they're running experiments on the LHC. They need the link to pipe stuff out to processing centers. That's an extreme example.

A more modest example is high-quality video chat. A few of those will saturate the link pretty quickly.

Re:Damn the summary (1)

_Shad0w_ (127912) | about 2 years ago | (#41476293)

I don't know about that: working in a company that shifts large amounts of data around its internal network, having fast network access to the file servers is kind of desirable. Or at least access as fast as the computers can actually manage.

Re:Damn the summary (1)

franciscohs (1003004) | about 2 years ago | (#41476359)

Exactly my point, which today is more than Gb speeds but no more than 10Gb. However, we do need higher-speed technology for the core infrastructure of whatever core networks we are using. Call it enterprise core, service provider, or whatever you're using.

Re:Damn the summary (2)

h4rr4r (612664) | about 2 years ago | (#41476905)

If you want 1Gb to 10Gb to your desktops you will want 10 times that in the core of your network where that file storage lives.

Re:Damn the summary (3, Insightful)

Shinobi (19308) | about 2 years ago | (#41476607)

"Only high-performance computing and virtualization servers use more than gigabit links today, but 10-gigabit bundles and higher-bandwidth links are used on almost every large network for core connections and core-to-distribution."

I know quite a few non-science professional fields that saturate gigabit to each desktop and would go for InfiniBand, or 10Gig-E if it were viable outside of big corps. Editing/compositing of HD or greater-resolution movies shuffles HUUUUGE amounts of data around, and you need a decent turnaround time for the data....

dual port 10GB runs about a grand (1)

Chirs (87576) | about 2 years ago | (#41477761)

for an add-in card. Which is interesting, since the actual chip is something like $90 from Intel.

Re:dual port 10GB runs about a grand (1)

saider (177166) | about 2 years ago | (#41478855)

That is about right. Labor and transportation costs are usually more than materials when you are considering the cost of a product. Figure another $50 to $100 for all the other components, packaging, and labelling, and you are probably up around $200. Add the manufacturer's mark-up (3-4x) to pay for the factory, overhead, and shipping; the wholesalers will want a 10-30% cut; and finally there's the retailer's profit of around 10%.

Also consider that Intel's $90 part started out as pennies worth of sand.
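The mark-up chain described above can be sanity-checked in a few lines of arithmetic; every figure here is the parent's rough estimate, not actual Intel or vendor pricing:

```python
# Cost stack from chip to shelf, using the parent's rough estimates.
chip = 90.0              # Intel controller chip
other_parts = 85.0       # other components, packaging, labelling ($50-100)
build_cost = chip + other_parts      # ~$175, i.e. "around $200"

factory_price = build_cost * 3.5     # manufacturer mark-up (3-4x)
wholesale = factory_price * 1.20     # wholesaler's cut (10-30%)
retail = wholesale * 1.10            # retailer's profit (~10%)
print(f"Estimated street price: ${retail:.0f}")  # lands near the quoted grand
```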

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41478971)

I work at a company selling network appliances. There is no custom hardware; it's all PC stuff, and not even more than dual-socket. These boxes can handle 160 Gbps of throughput, and the primary limiting factor is actually the use of 10 and 40Gbps Ethernet, which results in a considerable number of ports. I have little doubt there's use for 400 Gbps Ethernet in relatively low-end market segments in just three to five years.

If I really wanted it, I could easily go and wire my home network with 10 GbE, and even get a real boost from it. The only thing that prevents me from doing it is the feeling that it would be a slightly wasteful use of money.

Re:Damn the summary (1)

Anonymous Coward | about 2 years ago | (#41476023)

There are scientific uses for such technology. Often technology is more the limiting factor than money.

Re:Damn the summary (1)

jellomizer (103300) | about 2 years ago | (#41476391)

Yes. However, I think for terabit Ethernet there are other factors besides just money, like the speed needed to process and store the data being sent. If performance is that big an issue, you are not going to trust your information to TCP/IP over a twisted-pair cable. You would use a different type of bus for that.

Re:Damn the summary (1)

Anonymous Coward | about 2 years ago | (#41477015)

Yes. However, I think for terabit Ethernet there are other factors besides just money, like the speed needed to process and store the data being sent. If performance is that big an issue, you are not going to trust your information to TCP/IP over a twisted-pair cable. You would use a different type of bus for that.

TCP/IP is not the only protocol that is used on Ethernet. There are plenty of performance-critical applications that use Ethernet without the IP layer.

Re:Damn the summary (1)

KingMotley (944240) | about 2 years ago | (#41478035)

If performance is that big an issue, you are not going to trust your information to TCP/IP over a twisted-pair cable.

And I suppose no one will ever need more than 640k of RAM. Why would you make such a silly statement?

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41476145)

Well... this is for a small office, but I work at an ISP. We've been doing port channels on 10G ports for a while, and we are only a middle-size ISP. Imagine the needs of the big ones. So Tbit Ethernet is needed for sure.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41476469)

When you purposely delay the advancement of technology you increase the cost of current technology and slow the advancement of future technology.

And if you would like a list of things that terabit Ethernet could be used for:
Reliable HD video chat with multiple parties (well, within an office network).
Inexpensive large-scale parallel processing (because those 10,000-core processors are damn expensive right now); 1,250 8-core systems might be cheaper... (need to do a cost analysis to confirm this, or it might at least be more practical depending on how the system is used).
Extremely fast large-scale boot from a network image. ...
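The parallel-processing item asks for a cost analysis; here is the skeleton of one, where every price is a made-up placeholder to be replaced with real quotes:

```python
# Hypothetical cost comparison: one big many-core box vs. a cluster of
# commodity 8-core systems. All prices below are invented placeholders.
cores_needed = 10_000

many_core_price_per_core = 500.0                     # placeholder pricing
node_cores, node_price, node_nic = 8, 900.0, 150.0   # placeholder per node

big_iron_total = cores_needed * many_core_price_per_core
nodes = cores_needed // node_cores                   # 1,250 systems
cluster_total = nodes * (node_price + node_nic)

print(f"{nodes} nodes at ${cluster_total:,.0f} "
      f"vs big iron at ${big_iron_total:,.0f}")
```

With these placeholder numbers the cluster comes out cheaper, but the real answer depends entirely on the actual quotes and on how the system is used, exactly as the parent says.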

Re:Damn the summary (1)

somersault (912633) | about 2 years ago | (#41476949)

This isn't delaying advancement, it's recognising reality.

It won't increase the cost of current technology. Gigabit is already pretty damn cheap. It might slow the advancement of terabit capable tech, it might not.

Note that I'm not saying that we shouldn't keep developing faster tech, but I was saying that currently there is no need for it at the intranet level.

HD video chat? If you even want video chat in the first place, I don't see the benefit of HD. Movies sure, video chat.. not so much. I've been setting up everyone here with webcams, but most of our employees much prefer to pick up a phone than engage in a video chat. Admittedly they're engineers and therefore more likely to be introverted than the average worker.

Bandwidth is an issue in parallel processing, but I don't think it's as big a deal as the latency that you introduce when punting stuff out from local cache to the network. It's like having to use a paging file rather than being able to fit everything into RAM. It helps to have a faster drive, but it's still better to keep data as local as possible.
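The cache-vs-network point is a first-order latency/bandwidth trade: total transfer time is roughly latency plus size over bandwidth, so for small transfers a faster link buys almost nothing. A sketch with illustrative numbers (the 10 microsecond round trip is an assumption, not a measurement):

```python
# total time = link latency + payload / bandwidth (first-order model)
def transfer_time(size_bytes, bandwidth_bps, latency_s):
    return latency_s + (size_bytes * 8) / bandwidth_bps

cache_line = 64          # bytes; a typical small remote access
latency = 10e-6          # assume a 10 microsecond round trip

t_gig = transfer_time(cache_line, 1e9, latency)    # 1 Gb/s link
t_tera = transfer_time(cache_line, 1e12, latency)  # 1 Tb/s link
print(f"1 Gb/s: {t_gig * 1e6:.2f} us, 1 Tb/s: {t_tera * 1e6:.2f} us")
```

A thousandfold bandwidth increase shaves only about half a microsecond off a cache-line-sized transfer, because the round trip dominates.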

Fast boot is nice, but 10 gigabit switches are already faster than SATA 3, so even with RAID you're not going to see much improvement by moving up to terabit networking speeds.

Basically I'm just agreeing that "need" was a silly word to use.

Re:Damn the summary (1)

KingMotley (944240) | about 2 years ago | (#41477983)

SANs, NASs, transferring large video files?

Re:Damn the summary (1)

DigiShaman (671371) | about 2 years ago | (#41478215)

And what exactly is he doing over ethernet that needs that much speed?

Nothing! Currently he's limited by a PCIe 3.0 x16 slot, which maxes out at 128Gb/s (16GB/s).

http://en.wikipedia.org/wiki/PCI_Express [wikipedia.org]
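The 128Gb/s figure can be reproduced from the PCIe 3.0 numbers: 16 lanes at 8 GT/s each, with the quoted value being the raw rate before 128b/130b line coding is taken off:

```python
lanes = 16
gt_per_lane = 8.0          # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130       # 128b/130b line coding efficiency

raw_gbps = lanes * gt_per_lane        # 128 Gb/s -- the figure quoted above
usable_gbps = raw_gbps * encoding     # ~126 Gb/s of actual payload capacity
print(f"raw {raw_gbps:.0f} Gb/s, usable {usable_gbps:.2f} Gb/s "
      f"= {usable_gbps / 8:.2f} GB/s")
```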

Re:Damn the summary (1)

kenorland (2691677) | about 2 years ago | (#41478431)

And what exactly is he doing over ethernet that needs that much speed?

Many distributed computations are network bound or require a lot of manual optimization. The faster the network, the more speedup you get from distribution. And that kind of computation is useful even just for video transcoding. Network speeds that become comparable to bus speeds really change how people can develop parallel software. But more likely, we need 1Tb networks soon just to keep up with, and support, CPUs and GPUs.
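That "network bound" effect shows up in even a toy model where the compute parallelizes but a data shuffle over the wire does not; all the workload figures below are invented for illustration:

```python
# Toy distributed-job model: compute splits across nodes, but the data
# shuffle takes a fixed amount of wire time. Workload figures are invented.
def job_time(nodes, compute_s=1000.0, shuffle_bits=1e11, bandwidth_bps=1e9):
    comm_s = shuffle_bits / bandwidth_bps if nodes > 1 else 0.0
    return compute_s / nodes + comm_s

for bw, label in [(1e9, "1 Gb/s"), (1e12, "1 Tb/s")]:
    speedup = job_time(1, bandwidth_bps=bw) / job_time(100, bandwidth_bps=bw)
    print(f"{label}: 100-node speedup = {speedup:.1f}x")
```

With these placeholder numbers the same job speeds up about 9x across 100 nodes on a gigabit fabric but about 99x on a terabit one: the faster network is what unlocks the parallelism.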

Re:Damn the summary (1)

TemporalBeing (803363) | about 2 years ago | (#41479179)

And what exactly is he doing over Ethernet that needs that much speed? I'm only just now looking at upgrading our small business network to gigabit. A couple of years ago the cost of a 48-port gigabit switch was pretty high, but now it's very reasonable.

You did see this article [slashdot.org], no?

Re:Damn the summary (3, Informative)

SternisheFan (2529412) | about 2 years ago | (#41475751)

We need terabit Ethernet NOW, not in a decade.

You know, my 5-year-old nephew keeps confusing need and want too. How much are you prepared to pay for this desire? If it costs, say, four times more per bit to implement terabit with current technology, do you still want it?

I agree with you completely. Learning to separate our 'wants' from our 'needs' can make all the difference in our 'consumer-driven' lives.

I may 'want' that shiny new car, but I don't 'need' it. If I have a vehicle that meets my needs, I've learned to be grateful for having that. Coveting that 'new shiny' (new car, other person's money/spouse, phone or internet connection speed, whatever it is) can often lead a person down the road to ruin.

In my experience, I know to be happy and grateful for what I have, and not to waste energy on what I don't have (yet). About half the people who win a lottery end up wishing, five years later, that they'd never heard about the lottery in the first place, because it irrevocably changed their lives for the worse, and they realize too late that they were happier before they 'won'. Just my two cents.

---------------

I am so smert! I am so smert..., I mean smart! - Homer Simpson

Re:Damn the summary (0)

saveferrousoxide (2566033) | about 2 years ago | (#41476715)

I am so smert! I am so smert..., I mean smart! - Homer Simpson

I am so smart! I am so smart! S-M-R-T! I mean S-M-A-R-T! -Homer Simpson FTFY

Re:Damn the summary (1)

SternisheFan (2529412) | about 2 years ago | (#41476843)

I am so smert! I am so smert..., I mean smart! - Homer Simpson

I am so smart! I am so smart! S-M-R-T! I mean S-M-A-R-T! -Homer Simpson FTFY

You got me.

I am standing here beside myself - Apu.

Re:Damn the summary (-1)

Anonymous Coward | about 2 years ago | (#41477155)

Well, there is garbage like you that lives on the crums and like a dog should be grateful for even being allowed to exist. And then there are individuals like me, with ambition, whose wants are achieved through ambition, and that shiny new car will be mine when I want it, because I have the capabilities to achieve my goals...

So you are right, your kind does need to learn how to seperate your "wants" from your "needs", because you are begging at my table, and I will only give you the minimum to cover your needs.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41477701)

Why you insulting little.... My kind?! Be careful what you wish for, you may get it. (p.s., GrammerNazi time... The word is spelled "crumb". FTFY)

Re:Damn the summary (1)

Raenex (947668) | about 2 years ago | (#41478173)

(p.s., GrammerNazi time... The word is spelled "crumb". FTFY)

The word is spelled grammar. Also, to be more precise, you are being a spelling Nazi, not a grammar Nazi.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41478339)

He's not really insulting; he's making the point that one should reach for wants rather than settle for needs. If we were all to just settle for needs, we'd all still live in caves. Why go out and invent, when we could just sit here in this cave with our fires and be grateful for the meat that we've just caught, right?

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41478627)

It can be both: making a point and being really insulting. Just like a knife is still sharp if buried in cow crap. It will still cut and have a "point," and people will notice if they get poked with it, but they will probably spend most of their time complaining that they are now covered in cow crap, or now have a wound with cow crap in it, or wondering why someone would cover a knife with cow crap.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41479249)

Well, I was the one who made the original comment and the comment about it not being insulting but being about having ambition and not settling for "needs", and instead reaching for wants. Have you ever considered that perhaps the knife was in cow crap because it provides insulation and prevents rust? You call yourself an anonymous engineer, and yet you took no steps to try and actually see if there was benefit in burying knives in cow crap. Do you have no initiative or curiosity?

Re:Damn the summary (1)

Medievalist (16032) | about 2 years ago | (#41477943)

Well, there is garbage like you that lives on the crums and like a dog should be grateful for even being allowed to exist. And then there are individuals like me, with ambition, whose wants are achieved through ambition, and that shiny new car will be mine when I want it, because I have the capabilities to achieve my goals...

So you are right, your kind does need to learn how to seperate your "wants" from your "needs", because you are begging at my table, and I will only give you the minimum to cover your needs.

Hey, what is Mitt Romney doing on slashdot? Go back to Facebook, Mittens!

Re:Damn the summary (2)

Raenex (947668) | about 2 years ago | (#41478331)

And then there are individuals like me, with ambition, whose wants are achieved through ambition, and that shiny new car will be mine when I want it, because I have the capabilities to achieve my goals...

So you buy your shiny new car, and then another shiny new car because you want that one too, and then a McMansion because bigger is better, and you need all that extra space to store all your shiny possessions. At the end of the day are you satisfied with your shiny possessions? No, you need more shiny possessions, and your life is centered around a vapid cycle of consumerism while you sacrifice your ethics and free time to attain them.

Re:Damn the summary (1)

Anonymous Coward | about 2 years ago | (#41478391)

And some of us have actual ambitions. We don't buy shiny new expensive cars, not because we can't, but because we have better things to spend money on. But maybe it is good you concentrate on getting that car, since while you are busy gawking over your own car and thinking how important you feel when sitting in it, you stay out of the way of people getting great things done.

Re:Damn the summary (4, Insightful)

ReallyEvilCanine (991886) | about 2 years ago | (#41475791)

As a parent of a young one I also hear this 500×/day. But what's the cost of "Terabit now and you're safe for a decade" versus "400Gb now, then rewire & replace all your gear in 3-5 years for 750Gb (if there isn't a standards war you have to gamble on), and then do that all over again in another 3-5 years for 1.1Tb"? Because that's the kind of creep we've seen since the early days of token ring and then 10BaseT. Manufacturers certainly want the step-by-step option, but the admins and engineers? Not so much.

Re:Damn the summary (1)

Anonymous Coward | about 2 years ago | (#41475903)

Manufacturers certainly want the step-by-step option but the admins and engineers? Not so much so.

What about accountants?

Most major expenditures are depreciated over a five year term, and in many jurisdictions you then have to get rid of the fully depreciated thing. If you keep it, then you admit that it still has value—which means that you fibbed about it losing its value and getting tax credits for depreciation.

So it's all very well to say keep it for a decade, but then you have to start fiddling with your tax reporting structure, which can get quite messy for public companies. It's easier to rev every few years for most major companies even if it seems counter intuitive from a tech perspective.
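For concreteness, the five-year write-down described above works like straight-line depreciation; a minimal sketch with an illustrative purchase price:

```python
# Straight-line depreciation over the five-year term mentioned above.
cost, salvage, years = 50_000.0, 0.0, 5   # illustrative switch purchase
annual = (cost - salvage) / years          # $10,000 written off per year
book_value = [cost - annual * y for y in range(years + 1)]
print(book_value)  # [50000.0, 40000.0, 30000.0, 20000.0, 10000.0, 0.0]
```

After year five the gear carries a book value of zero, which is exactly why keeping it longer creates the reporting awkwardness the parent describes.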

Re:Damn the summary (1)

chainsaw1 (89967) | about 2 years ago | (#41476255)

It's a two way street.

While the cost of incrementally upgrading your equipment can be high, if you leap generation(s) you also run the risk that knowledge of the upgrade process will be lost amongst your staff. If that happens, then when you eventually do need to upgrade, the process may not be as smooth, leading to extended downtime and/or extra costs (lost customers, wrong hardware, infrastructure upgrades, etc.).

The only way to know for sure is to have a cost-benefit analysis and a risk strategy tailored to your business areas.

Re:Damn the summary (1)

jellomizer (103300) | about 2 years ago | (#41476463)

For the most part, people who buy the latest and greatest do it to extend the era of obsolescence. Get 400Gb now, then in 10 years get 1Tb.

In 10 years you are most likely going to need to replace your gear anyways.

If they go 1.1Tb now and say it will take 15 years to get to 2Tb, you will be running for a long time with a well-underutilized connection and will probably need to replace your gear in 10 years anyway. So you spent a lot of money on underutilized gear.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41476609)

Damn right. I should have had an 8 core Ivy Bridge CPU, 32GB of RAM, and a boxful of SSDs 20 years ago. Fuck that i386 shit, it was all a ploy to get users to spend money on useless incremental upgrades.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41478515)

On the other hand, if they can get 400Gb out in a year or two, 750Gb out in three to four years, and 1.1Tb out in four to five, versus just 1Tb in four years, you would have more options to upgrade sooner if you really need it. You don't have to upgrade every time; you could skip levels. If they just jumped from 100Gb to 1Tb to 10Tb, you might have to wait many years before any upgrade is possible. Or worse, find out that equipment needs to be replaced, or new networks need to be built, but you have to install 5-year-old speeds because it will still be a few more years before the next big jump becomes available. If done right, it will give more options, not remove them.

Re:Damn the summary (3, Funny)

fm6 (162816) | about 2 years ago | (#41475839)

Your nephew is probably about as mature as most geeks.

Re:Damn the summary (0)

Anonymous Coward | about 2 years ago | (#41476139)

I'd settle for 0.00001 terabit symmetric internet.
Bring that first!

Re:Damn the summary (1)

KingMotley (944240) | about 2 years ago | (#41477941)

If it costs, say, four times more per bit to implement terabit with current technology, do you still want it?

Yes, easily. Some of us pay that now by bonding multiple channels of "current tech" together, at a much worse cost/bps.

Re:Damn the summary (1)

slashmydots (2189826) | about 2 years ago | (#41479147)

We need terabit Ethernet NOW, not in a decade.

My hard drive only writes at about 100MB/s, so I'm good, actually. Anything backbone-ish can use fiber.

Re:Damn the summary (5, Informative)

rufty_tufty (888596) | about 2 years ago | (#41475701)

I realised I wasn't being clear about why they can't define the standard now and wait for the technology to catch up.
A standard like this is always a trade-off based on the currently available technology: how fast are your analogue transistors, how much processing power do you have for forward error correction, how fast are your ADCs/DACs for signal shaping? This determines things like which coding schemes you can use, and also what market needs it and what costs are acceptable. For example, DWDM and all its associated costs are perfectly acceptable if fibre is comparatively expensive; in the 90s that would have been the only way to do 10G, but now we have the capability to do it electrically. Designing the spec too soon and guessing is a really bad idea. We don't know how 20nm and lower process nodes are going to behave well enough to predict what their characteristics will be when this technology reaches maturity, and to get that wrong is to end up with a standard that either underperforms or is overly expensive.

Put it another way, the processor architecture you would choose to achieve 80MFLOPS in 1976 is very different from the architecture you would choose in 2006. Telecomms has exactly the same concerns.

Re:Damn the summary (1)

Anubis350 (772791) | about 2 years ago | (#41477249)

Put it another way, the processor architecture you would choose to achieve 80MFLOPS in 1976 is very different from the architecture you would choose in 2006. Telecomms has exactly the same concerns.

Maybe in 2006, but not, ironically, necessarily in 2012. The vector processors of the early supercomputers are very much alive in the GPUs of clusters that incorporate GPGPU work for their FLOPS count (which includes 3 of the top 10 right now).

Re:Damn the summary (1)

rufty_tufty (888596) | about 2 years ago | (#41477577)

Yes and no. While there is a popular trend back towards vectorisation, there are other things going on that have just as big an effect, if not bigger, on architecture choice.
Registers are a lot cheaper. Caches are cheaper, and multi-level caches are ubiquitous. Main memory is an order of magnitude slower than the processor, which is a problem early supercomputers didn't quite have.
A large part of the problem for modern processors has become the prediction and scheduling of instructions, which vectorisation helps alleviate; in the original Cray design, vectorisation was mainly used to reduce code size and keep the pipeline populated with current instructions, as opposed to all the architecture behind speculative execution you now get.
But yes, agreed, although it does scare me that the upshot of this is that everything the supercomputer industry is doing (tens of thousands of cores) will be needed on my nephew's sub-dermal implant. How bad will the software have to be to use up all that compute power, if we currently need a Cray to make a phone call...

Tell me about it! (0)

Anonymous Coward | about 2 years ago | (#41475735)

I had a bunch of BSD servers ready to go in anticipation of terabit Ethernet! All I keep hearing is, "it's dead, Jim."

I have hopes though. The Dodge Dart came back, after all!

Re:Damn the summary (1)

Chrisq (894406) | about 2 years ago | (#41476303)

We need terabit Ethernet NOW, not in a decade.

What on earth for? For point-to-point bridging and interconnects you can already use fibre at multi-terabit [wikipedia.org] speeds. Do you have some need for a multi-point LAN to support this speed that couldn't be addressed by setting up separate switched VLANs?

Guesstimate is a word used by total douches! (-1)

Anonymous Coward | about 2 years ago | (#41476709)

It just makes you sound retarded. That's all.

Re:Damn the summary (1)

satch89450 (186046) | about 2 years ago | (#41477171)

My brother is a process chemical engineer. Numerous times I've heard him say "if you want to increase the capacity of a process, you take the unit you have, duplicate it, and pipe it in parallel." In the early days, I remember servers with four NICs running in parallel to increase bandwidth, while each server was glowing slightly red from the load. If one needs the capacity, it's available today. In one enterprise network, I saw servers with multiple NICs, each NIC connected to a separate EtherSwitch. In other words, there were four separate networks. Routers would bring leaf systems (PCs) into the quad-net backbone. I never learned how server-to-server traffic was handled, but I was told that all four backbone sections were used. So you don't NEED terabit Ethernet NOW; there are other solutions that aren't quite as elegant, but they work.
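The multi-NIC trick above works because each flow gets pinned to one link by hashing its addresses and ports, much like the layer3+4 transmit policy in Linux bonding; a toy sketch of the idea (the hash and tuple format are illustrative choices, not any particular driver's):

```python
import zlib

NUM_NICS = 4  # four parallel NICs, as in the enterprise network described

def pick_nic(src_ip, dst_ip, src_port, dst_port):
    # Hash the flow's 4-tuple so one flow always uses one NIC (keeping
    # packet order), while different flows spread across all four links.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_NICS

flows = [("10.0.0.1", "10.0.0.9", 40000 + i, 80) for i in range(8)]
print([pick_nic(*f) for f in flows])  # each value in 0..3
```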

In other words (1)

ctrl-alt-canc (977108) | about 2 years ago | (#41475595)

1 TB Ethernet has an infinite latency.

Re:In other words (4, Informative)

bersl2 (689221) | about 2 years ago | (#41475605)

No, unbounded latency. It'll happen, just not yet.

Sigh (2)

ledow (319597) | about 2 years ago | (#41475647)

Powers of 10.
Over copper or fibre.
At copper distances of 100m.

Call it a standard, if you like. Each time you have to upgrade, look to the next power of ten at that specification.

Because although 40Gb/s exists, it's not popular and you won't find it in your average computer supplier, ever. Sure, it's expensive to jump like that, but every technology boost is expensive, and I'd rather we skipped the proprietary-data-center-only junk, left them to their own devices, and specified real-world, millions-of-businesses standards at jumps big enough to a) make a difference, b) be expensive at first but mass-market after (rather than sharing the market with half-assed solutions), and c) run on the same specs as the previous generation (if not the same cables exactly, at least I can replace 100m runs with 100m runs and not worry).

Ya well (4, Insightful)

Sycraft-fu (314770) | about 2 years ago | (#41475755)

You may discover that you can't have what you want. There are real physical limitations we have to deal with. One issue, with regards to copper Ethernet, that we are having is keeping something that remains compatible with older style wiring. Sticking with 8P8C and UTP is becoming a real issue for higher speeds. At some point we may have to have a break where new standards require a different kind of jack and connector.

Also in terms of "data center only" devices that isn't how things work. You care what data centers use because you connect to them. There can be big advantages in terms of cost, simplicity, and latency, to stick all on one spec. So 40gbps or 400gbps could well be useful. No, maybe you don't see that to your desktop, but that doesn't mean it doesn't get used in the switching infrastructure in your company.

Also, each order of magnitude you go up with Ethernet makes the next matter less. It's going to be a while before there's any real need for 10gbps to the desktop. 1gbps is just plenty fast enough for most things. You can use things over a 1gbps link like they were on your local system and not see much of a performance penalty (latency is a bigger issue than speed at that point). I mean, consider that the original SATA spec is only 1.5gbps.

As for 100gbps, it'll take some major increases in what we do before there is a need for that to the desktop, if ever. 10gbps is just an amazing amount of bandwidth to a single computer. It is enough to do multiple uncompressed 1080p60 video streams, almost enough to do a 4k uncompressed video stream.

Big bandwidth is more of a data center/ISP need than a desktop need. 1gbps to the desktop is fine and will continue to be fine for quite some time. However to deliver that, you are going to need more than 1gbps above your connection.

Re:Ya well (2)

beanpoppa (1305757) | about 2 years ago | (#41476211)

The first round of all the recent 802.3 standards (1000baseT, 10GbaseT) have all forgone the requirement of 8P8C, and as technologies improved, added them back in. Only recently could we do 10GbaseT over Cat6. Early implementations were over fibre, and twinax cables. 40/100Gbps ethernet is still fiber only.

Re:Ya well (3, Insightful)

petermgreen (876956) | about 2 years ago | (#41476641)

Just because the engineers have pulled two rabbits out of hats and managed to run first 1 and then 10 gigabit over slightly improved versions of cheap twisted-pair cable with 8P8C connectors (though at present, AFAICT, the cost of transceiver hardware is such that for short 10-gigabit runs you are better off with SFP+ direct attach) doesn't mean they will be able to do it again.

Re:Ya well (1)

Anonymous Coward | about 2 years ago | (#41478287)

40/100Gbps ethernet is still fiber only.

Small correction: 40 Gbps can now be done with twinax and is MUCH cheaper that way. As a matter of fact, I just deployed it at work.

Re:Ya well (1)

Miamicanes (730264) | about 2 years ago | (#41477097)

At the rate we're going, "8P8C" in the terabit+ category will probably end up meaning "cable with 4 pairs of single-mode fibers". When you start talking about terahertz signaling rates, a single fiber starts looking like a pair of copper wires & you start to feel like if it hasn't quite outstripped the final viable limits of what it can do, it's getting pretty damn close.

As a practical matter, wire speeds faster than 10gbps almost *have* to be treated like parallel bundles of fast, but independent bitstreams that happen to be sharing a common transmission medium, but are all traveling in parallel & are oblivious to each other's content, just because you eventually have to switch or route the traffic, and there's a practical limit to the speeds even the fastest DSP-like purpose-built CPU can achieve on-die, let alone on its circuit board.

Aggregating 10 1-gigabit quasi-serial links into 10 independent streams that might be modulated together into a single fiber, then later demodulated and restored to 10 1-gigabit quasi-serial links is one thing. Trying to actually switch (let alone route) a true single-10gbps bitstream by studying its IP header and making decisions based upon things like its ipv6 address is another matter entirely.

I met somebody about a year or two ago who told me that routing 10-gigabit traffic today is kind of like sexing newborn chickens. At that speed, you aren't analyzing headers... you're making single-bit snap judgments on a slightly-blurry bitstream, and hoping it wasn't noise that sent a datagram meant for someone in Ohio to Shanghai instead. Or more precisely, you have a few circuits sniffing the blur in parallel, voting on what they think its destination is likely to be, and majority rule deciding where it goes next.

Putting into perspective just how fast 10gbps is from the perspective of a single user, in the time it takes the fastest Intel-architecture AMD64 CPU money can buy today to test a single byte already in a register and determine whether its value is zero or nonzero, an entire byte or more would fly by on the 10gbps wire.

Re:Ya well (1)

kasperd (592156) | about 2 years ago | (#41477637)

I met somebody about a year or two ago who told me that routing 10-gigabit traffic today is kind of like sexing newborn chickens. At that speed, you aren't analyzing headers... you're making single-bit snap judgments on a slightly-blurry bitstream, and hoping it wasn't noise that sent a datagram meant for someone in Ohio to Shanghai instead. Or more precisely, you have a few circuits sniffing the blur in parallel, voting on what they think its destination is likely to be, and majority rule deciding where it goes next.

Then how come I never see the number of hops between two endpoints vary as packets occasionally take a longer path than they should have? When switching and routing, you need to be able to buffer packets. (If the output port is idle and running at the same speed as the input port, packets can be forwarded without buffering, but that's rarely the case along the entire path.) Once you can store the packet, you can spend more time finding the correct destination. If you know the destination by the time half the packet is received, it is far better to store and forward than to send it down the wrong path.

Ethernet has a minimum packet size, which is longer than the header. But not all equipment can handle packets at wire speed if they are all of minimum size. There is a reason why there is a desire to increase the maximum packet size. If 500 bytes can fly by before you know where to send the packet, then you buffer those bytes, and that may mean packets have to be larger than 500 bytes on average in order to achieve wire speed. Though the minimum packet size does help a little bit here, the reason there is a minimum size is actually something different. Originally the minimum size was in the standard to ensure that packets were long enough to reach from one end of the cable to the other.
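The buffering argument above can be put in rough numbers. A minimal sketch, assuming a fixed per-packet lookup latency (the 400 ns figure is an illustrative assumption, not from any real router):

```python
def bytes_in_flight(wire_gbps, lookup_ns):
    """Bytes that arrive on the wire while the forwarding decision is made.

    Gbit/s * ns = bits, since 1 Gbit/s is exactly 1 bit/ns.
    """
    return wire_gbps * lookup_ns / 8

# At 10 Gbit/s, a 400 ns lookup sees 500 bytes arrive - roughly the
# situation described above, where buffering becomes unavoidable.
print(bytes_in_flight(10, 400))   # 500.0
print(bytes_in_flight(400, 400))  # 20000.0
```

The same lookup time at 400 Gbit/s means forty times the buffered bytes, which is why larger minimum or average packet sizes help sustain wire speed.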

I'm not sure you are correct there (1)

Sycraft-fu (314770) | about 2 years ago | (#41478421)

10gbps is not that fast in terms of computer speed. A single lane of PCIe 3.0 is nearly 10gbps (it is 1Gbytes/sec). Memory is generally in the range of 20Gbytes/sec and up. L1 cache is over 100Gbytes/sec.

I'm not trying to say routing 10gbps is easy or anything, just that you seem to think processors are slower than they are. They deal with pretty vast amounts of data.
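To put those numbers side by side (the dual-channel DDR3-1866 configuration is an assumed example; the PCIe 3.0 figure follows from its 8 GT/s, 128b/130b signaling):

```python
wire = 10e9 / 8                     # 10 Gbit/s link: 1.25 GB/s
pcie3_lane = 8e9 / 8 * (128 / 130)  # PCIe 3.0 x1: 8 GT/s with 128b/130b encoding
ddr3_dual = 2 * 8 * 1.866e9         # dual-channel DDR3-1866, bytes/s (assumed config)

print(round(pcie3_lane / 1e9, 3))   # ~0.985 GB/s: "nearly 10gbps"
print(round(ddr3_dual / wire, 1))   # memory bandwidth is roughly 24x the wire
```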

Re:Ya well (1)

KingMotley (944240) | about 2 years ago | (#41478443)

Putting into perspective just how fast 10gbps is from the perspective of a single user, in the time it takes the fastest Intel-architecture AMD64 CPU money can buy today to test a single byte already in a register and determine whether its value is zero or nonzero, an entire byte or more would fly by on the 10gbps wire.

I think you lost something in conversion there. 10gbps is 1.25GBps. Today's fastest Intel Desktop processors have 12 threads all running at 4GHz+ (My desktop is running 4.5GHz). Assuming you aren't using any of the fancy (faster) SIMD instructions and doing a simple test r,0 instruction at the byte level (actually it can do 4 bytes/32-bit words at a time, but I'm not counting that), Sandy Bridge processors can cache, decode, issue, execute, complete and sustain 3 of those per cycle per thread. Reference: http://gmplib.org/~tege/x86-timing.pdf [gmplib.org]

4.5(GHz)*3*12/1.25(GBps)=129.6. Desktop processors are capable of doing this 129.6 times faster than needed to do what you supposed, without even using the faster SIMD/SSE/AVX instruction sets, which are optimized for such things. No fancy algorithms needed either.
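That ratio is easy to double-check; a minimal sketch of the same arithmetic, using the clock, issue-width, and thread counts claimed above (not measured values):

```python
# Sustained zero-tests per second vs. bytes per second on a 10 Gbit/s link,
# using the figures claimed above (4.5 GHz, 3 tests/cycle, 12 threads).
clock_hz = 4.5e9
tests_per_cycle = 3
threads = 12
wire_bytes_per_s = 10e9 / 8   # 10 Gbit/s = 1.25 GB/s

tests_per_s = clock_hz * tests_per_cycle * threads
ratio = tests_per_s / wire_bytes_per_s
print(round(ratio, 1))  # 129.6
```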

Re:Ya well (1)

KingMotley (944240) | about 2 years ago | (#41478519)

Oh, and just for an idea, I often copy data (not just simple tests of bytes for zero) around on my computer from multiple drives to other drives on my system at a much higher rate than that -- physical drives, not RAM disks or the like. Granted, they are RAID arrays hanging off different disk controllers, and the copies go through the CPU to do so, yet still use next to nothing CPU-wise.

Re:Ya well (1)

tlhIngan (30335) | about 2 years ago | (#41478137)

You may discover that you can't have what you want. There are real physical limitations we have to deal with. One issue, with regards to copper Ethernet, that we are having is keeping something that remains compatible with older style wiring. Sticking with 8P8C and UTP is becoming a real issue for higher speeds. At some point we may have to have a break where new standards require a different kind of jack and connector.

Actually, the biggest limitation for Ethernet right now isn't the wiring (all the fancy new high-speed interconnects skip UTP at first - usually fiber or other media, then ported backwards to copper cable).

The biggest issue right now is that if you want 100m, you have to increase the minimum packet size at the faster speeds - 64 bytes is barely able to meet it at GigE speeds, nevermind 10G or faster. The thing is, at the faster speeds, you can send out a minimum-sized packet and it'll be completely "on the wire" before the other end gets it (the host would've finished the last bit before the remote end has even got the first sync bit!). It's one reason why hubs aren't defined faster than GigE - besides the inefficiency, you can have every host transmitting packets and not seeing the results for the packet (collision or not) until many packets later (remember a collision is detected when a host receives back a bit different from what it sent).
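The "frame entirely on the wire" claim is easy to sanity-check; a minimal sketch, assuming a ~0.66c velocity factor for twisted pair (an assumed typical value):

```python
C = 299_792_458         # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66  # assumed signal speed in twisted pair, as fraction of c

def serialization_ns(frame_bytes, gbps):
    # Time to clock the whole frame onto the wire.
    return frame_bytes * 8 / gbps   # bits / (bits per ns) = ns

def propagation_ns(metres):
    # Time for the first bit to travel the length of the cable.
    return metres / (C * VELOCITY_FACTOR) * 1e9

print(serialization_ns(64, 1))     # 512.0 ns: a minimum 64-byte frame at GigE
print(round(propagation_ns(100)))  # ~505 ns for the first bit to cross 100 m
# At 10 Gbit/s the same frame serializes in 51.2 ns - long gone before
# the far end has received anything.
```

At gigabit the two numbers are nearly equal, which is exactly the boundary case the comment describes; every order of magnitude beyond that makes collision detection over 100 m physically impossible without larger minimum frames.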

Also each order of magnitude you go up with Ethernet makes the next matter less. It's going to be awhile before there's any real need for 10gbps to the desktop. 1gbps is just plenty fast enough for most things. You can use things over a 1gbps link like they were on your local system and not see much of a performance penalty (latency is a bigger issue than speed in most things at that point). I mean consider that the original SATA spec is only 1.5gbps.

As for 100gbps, it'll take some major increases in what we do before there is a need for that to the desktop, if ever. 10gbps is just an amazing amount of bandwidth to a single computer. It is enough to do multiple uncompressed 1080p60 video streams, almost enough to do a 4k uncompressed video stream.

Perhaps you don't realize how slow standards are to produce - every company is fighting to include its technology in the standard (because it guarantees patent royalties - e.g., HP gets paid for every GigE port thanks to Auto-MDI/X, and I believe there's a patent on autonegotiation as well). After a lot of back-scratching, technical analysis, backwards-compatibility handling, etc., you have a standard. This can take 5 years or more. Then after the standard is approved, it can take another 2-3 years for chips and equipment to hit the market, and years after that for enough volume to build up that it becomes cheaper.

Heck, we've had GigE for probably over a decade, and high-end PCs shipped with GigE ports over half a decade ago (Apple was one of the first to start making it standard in their computers). GigE switches were still pricey until a few years ago, and these days, it's now affordable to run a GigE network at home with switches falling under the $50 mark.

Just because it's "fine now" doesn't mean it'll be fine later, and the process is a slow grind.

Of course, the other thing holding back adoption of 10G and 40G is cabling - CAT6 or fiber, which few people have - but I'm sure you'll start finding high-end PCs shipping with 10G ports in the next couple of years.

Re:Ya well (0)

Anonymous Coward | about 2 years ago | (#41478143)

But I want an uncompressed 3D 4k video stream!

Re:Sigh (1)

Overzeetop (214511) | about 2 years ago | (#41475805)

Maybe they just meant the standard should be 414.2 Gbps... and the next iteration will be 1 Tbps. Sort of an A4-A3 transition, but one-dimensional... yeah, you're right. It's a stupid idea.

Re:Sigh (3, Insightful)

burning-toast (925667) | about 2 years ago | (#41475853)

OTOH. These standards, by sheer fact that they are referencing 1Tbs needs, are most certainly relevant to the backhaul providers and not any normal business outside of that group. Fractional or non-base 10 speeds have been common in those networks since well before the power of 10 thing came about. Once the rest of the technology catches up and makes the power of 10 thing feasible, then the standard "commodity" equipment picks it up (primarily for marketing reasons IMO). Power of 10 is convenient for math reasons, but frequently means absolutely nothing to the backhaul guys (the early adopters).

Those businesses who purchase the regular "commodity" power-of-10 equipment really should be set for a while with the previously commoditised 10Gb links. They are performant, relatively cheap, available, run across the nation, and hard to saturate with the equipment that plugs into either side. I've worked with 8x10Gb multiplexed cross-country low-latency fiber WAN links. It is a ludicrous amount of bandwidth unless you are routing other networks like a backhaul provider. I would struggle to name normal businesses which would be unable to use 10Gb links due to a lack of bandwidth (for the immediate future). The needs really are different between these markets.

As an aside, fiber may be sold commonly in 100m lengths, but that has nothing to do with the distance the light will work at properly for the speed it is rated. Some fiber / wavelength pairs are only good for a few feet. Others go km, but not with the same NIC, Fiber, Switches, or patch panels. 100m is a really shitty (too short) standard for datacenter use anyways. Frequently, we will get two cages in a datacenter at different times... and they end up farther than 100m apart making copper irrelevant for that use.

Change is incremental like ripples, but big changes come in waves. Back-haul wants the ripples, everyone else wants the wave. I say, let them have their ripples and pay for the development of the waves. It saves both groups of consumers money so long as there aren't TOO many ripples per wave.

- Toast

Re:Sigh (1)

drinkypoo (153816) | about 2 years ago | (#41478411)

These standards, by sheer fact that they are referencing 1Tbs needs, are most certainly relevant to the backhaul providers and not any normal business outside of that group.

A lot of people would like to have just one [partitioned] network, and if you're [over?]using SAN you might have quite a lot of traffic. 1 Tb/sec divided up between a hundred or thousand active clients doesn't sound like quite so much data. On the other hand, we're still not talking about many links in most cases.

Isn't it about time we stopped calling it Ethernet (5, Insightful)

Viol8 (599362) | about 2 years ago | (#41475819)

Hardly any 10 base T systems bother with the CSMA/CD system that original ethernet had; in fact it's more like a serial protocol rather than a broadcast "in the ether" one now. Why not just give it a new name?

Re:Isn't it about time we stopped calling it Ether (0)

Anonymous Coward | about 2 years ago | (#41475957)

Please mod up. CSMA/CD is what originally defined Ethernet, and it is completely dead except perhaps for some very special applications. (The fact that you don't have any intermediate electronics between sender and receiver on coax-based Ethernet eliminates some points of failure.)

Re:Isn't it about time we stopped calling it Ether (2)

dbIII (701233) | about 2 years ago | (#41476011)

The name came from the original idea of it being a wireless protocol so has never made sense in any device ever sold with that name.

Re:Isn't it about time we stopped calling it Ether (1)

Viol8 (599362) | about 2 years ago | (#41476259)

Depends how you look at it. Coax ethernet did, to all intents and purposes, use an RF signal and, though I'm not an electronics engineer, I can't see any reason why - interference aside - you couldn't simply have plugged it into an antenna and, with some suitable RX/TX amps, used it as wireless.

Re:Isn't it about time we stopped calling it Ether (0)

Anonymous Coward | about 2 years ago | (#41476753)

eh, no. Not even close.

Re:Isn't it about time we stopped calling it Ether (1)

petermgreen (876956) | about 2 years ago | (#41476809)

AIUI the "ether" in ethernet was an analogy coming from the fact that a shared coax cable has some aspects in common with a radio system.

However while coax ethernet shares some things in common with a radio system there are also big differences that mean running ethernet over radio would NOT be a simple matter of adding amplifiers and antennas.

1: Radio systems have FAR more loss than coax cable systems. In particular this means it is MUCH harder to detect collisions, since when you are transmitting your own signal is FAR stronger than anyone else's. That is why radio systems have tended to use CSMA/CA rather than CSMA/CD.
2: As well as having more loss, radio systems also have highly variable loss, so your receiver has to be able to cope with a wide range of signal levels.
3: DC can't be passed over radio. IIRC coax ethernet does use DC for some signalling functions (I believe it uses it for collision detection).
4: It is very difficult to make an antenna that works well with consistent performance over a very wide bandwidth (relative to the center frequency). So it is pretty much essential to modulate radio transmissions onto a carrier with a frequency many times the symbol rate.
5: Multipath can be very strong in indoor radio applications and requires special modulation techniques to deal with.

Re:Isn't it about time we stopped calling it Ether (2)

drinkypoo (153816) | about 2 years ago | (#41478379)

The name came from the original idea of it being a wireless protocol so has never made sense in any device ever sold with that name.

WiFi is ethernet, with wireless extensions. MACs, frames, etc etc etc. Before everyone knew what 802.11 was, it was even referred to regularly as wireless ethernet.

Re:Isn't it about time we stopped calling it Ether (1)

petermgreen (876956) | about 2 years ago | (#41477431)

Hardly any 10 base T systems bother with the CSMA/CD system that original ethernet had

I don't think I've ever seen a 10BASE-T system that didn't use CSMA/CD. Switches were too expensive back then to justify a fully switched network, so people used hubs and let the end nodes continue to do collision detection and retry. Also, afaict the autonegotiation system needed to automatically disable CSMA/CD didn't come in until 100BASE-T was introduced (it's certainly defined in the 100-megabit section of the spec).

OTOH at higher speeds CSMA/CD is basically gone. While I know 100BASE-T hubs exist, I've never actually owned or knowingly used one. I'm not sure gigabit hubs ever existed on the market at all (despite being defined in the spec). At 10G and beyond, hubs aren't even defined.

WHy not just give it a new name?

Because the change came in gradually. Changing the name now would just serve to confuse people.

Also, while modern ethernet networks don't use CSMA/CD much if at all, the equipment does generally still support it. You can still take an old piece of equipment with an AUI port, plug a 10BASE-T transceiver into it and plug it into your brand new gigabit switch. The switch will detect the speed, turn on CSMA/CD on the port and things will just work.

Re:Isn't it about time we stopped calling it Ether (1)

Dwedit (232252) | about 2 years ago | (#41477465)

It's time we stop calling it "10 base T". The speed got boosted from 10mbps to 100mbps back in 1995, so there's nothing "10" about it anymore, unless you're surrounded by very bad networking equipment.

Re:Isn't it about time we stopped calling it Ether (0)

Anonymous Coward | about 2 years ago | (#41478665)

Out here in the vaguely technical world, we call it 10baseT, 100baseT, 1000baseT or 1GbaseT, and 10GbaseT as appropriate. And this has been fairly standard nomenclature for as long as I've been working at jobs with computers and LANs, since late 1990s.

Re:Isn't it about time we stopped calling it Ether (0)

Anonymous Coward | about 2 years ago | (#41479119)

10-megabit ethernet is still very much used today. Every switch had better damn well support it, and support it well. And I'm not talking about legacy hardware.

Modern ethernet controllers can and do drop to 10 megabits while in a low-power or standby mode (it uses a lot less power/circuitry/computational power to maintain a 10mbit link). This is so systems can send and receive "baseband" or system-management data while suspended or powered off (or things like wake-on-LAN). Typically this is very low bandwidth, sometimes only a handful of frames/packets at a time, so 10 megabits is more than enough.

If anyone can even comprehend 1 terabit (2)

Spectrumanalyzer (2733849) | about 2 years ago | (#41475851)

We have ads on TV for 200gbit internet here in Sweden, yet most people don't have anything above 4mbit. Sweden is pretty much one long forest of a country, and only the few big cities we have can enjoy really fast internet.

I live in a small city here, 10K+ something citizens, and I'm the "lucky" one who lives near the city core itself, so I get around 12-14mbit on a good day. This is far more than my peers get; they are lucky to hit 2mbit, and they live only 2-3km from the city core.

But you know what? I do just fine on 12mbit. With that, I can even watch television in FULL HD without any jumping or skipping, even directly from the USA via proxies.

Re:If anyone can even comprehend 1 terabit (2)

isorox (205688) | about 2 years ago | (#41475975)

We ads on TV for 200gbit internet here in Sweden

No, you don't. You might have adverts for 200mbit internet, but not 200gbit.

Re:If anyone can even comprehend 1 terabit (1)

Spectrumanalyzer (2733849) | about 2 years ago | (#41476051)

No, you don't. You might have adverts for 200mbit internet, but not 200gbit.

Typo!

But you're right, thanks for noticing.

Re:If anyone can even comprehend 1 terabit (2)

Luckyo (1726890) | about 2 years ago | (#41476479)

Terabit connections are what the ISPs selling those big 200meg "to each customer" links want to use to link their switches and routers together, and what datacenters serving "full HD" content to millions of users want on their internal networks. That way, instead of having to run multiple switches over multiple cables, you could do with fewer switches/routers and less cabling for the same or better performance.

At home, most machines can't properly utilize gigabit ethernet as of this writing, due to internal bottlenecks in each machine. The vast majority of "integrated gigabit" circuitry you see advertised on motherboard packaging can barely push 300mbit on a good day. And few home users complain, because it's pretty much enough for anything home-based that you would need it for, and those few who need more will usually have pricey dedicated gear for it and methods for eliminating those internal bottlenecks.

Re:If anyone can even comprehend 1 terabit (1)

Kharny (239931) | about 2 years ago | (#41476751)

200 mbit per customer means only 5 customers per gbit. A small area in, say, Stockholm could easily contain 500 customers; that's already 100Gbit.
Connect all of Stockholm or a similar place and you will need huge backbone connections already, and that is still at city level, not even national, let alone international.
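The aggregation arithmetic above sketches out as follows (the 20:1 oversubscription ratio is a hypothetical illustration, not a claim about any real ISP):

```python
per_customer_mbit = 200
customers = 500

# Worst case: every customer saturates their link at once.
aggregate_gbit = per_customer_mbit * customers / 1000
print(aggregate_gbit)  # 100.0 Gbit/s

# ISPs oversubscribe in practice; at a hypothetical 20:1 ratio the
# provisioned backbone for this neighbourhood would be:
print(aggregate_gbit / 20)  # 5.0 Gbit/s
```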

Re:If anyone can even comprehend 1 terabit (1)

h4rr4r (612664) | about 2 years ago | (#41476973)

A Gbit connection is tiny in 2012, even 10Gb is cheap now.

Re:If anyone can even comprehend 1 terabit (1)

Kharny (239931) | about 2 years ago | (#41478349)

For private use, there is only one place in the world afaik that even has gbit internet; that's Korea.
In Europe, the highest you get is 200mbit (down) in Sweden/Finland/Norway, and only in major cities.

sooooo (0)

Anonymous Coward | about 2 years ago | (#41475959)

I need a litre of water tomorrow, so I'll get a half-litre container today. LOL, idiots, the lot of them... all I see is them trying to limit progress... Are they all Catholic by chance?

Understandable (1)

Cornwallis (1188489) | about 2 years ago | (#41476047)

That way we can pay for 400G equipment and then be told we need to "upgrade" to 1T equipment. Got to keep planned obsolescence moving...

Re:Understandable (1)

Virtucon (127420) | about 2 years ago | (#41476087)

I see a new CCTB certification now popping up at Cisco, that's what will drive the move to TB Ethernet.

If we ever get to these speeds with broadband (2)

Virtucon (127420) | about 2 years ago | (#41476081)

Verizon, Comcast and others will still prioritize traffic so that P2P will never be faster than 1Mbit/sec. because they just won't have the capacity to handle it.

Why sell one, when you can sell two? (2, Interesting)

MetricT (128876) | about 2 years ago | (#41476363)

I manage several petabytes of storage on a large compute cluster, and we could use Terabit ethernet yesterday. Network fabric throughput is our limiting factor on pushing data out.

One senses that vendors went for the 400 Gb standard on the premise of "why sell one network upgrade when you can sell two at twice the price," and not from actually catering to customers' needs.

It's similar to the current 40 Gb/100 Gb standards. No one that I know actually wants 40 Gb. I can bond 4 x 10 Gb and get that already. But vendors want that double upgrade fee from those companies that have to have every ephemeral competitive advantage.

Re:Why sell one, when you can sell two? (3, Interesting)

mla_anderson (578539) | about 2 years ago | (#41476601)

Yep, it's definitely not a technical problem; after all, getting serial data to run at 312.5 Gbps over long distances of unshielded twisted-pair copper is simple. The edges of the data are only in the 1.2 THz range, after all.

Even on a PCB, 312.5 Gbps gets tricky and expensive; over long distances of fiber or copper it will be very difficult. Dropping to 400 Gbps brings it into the realm of the slightly possible, but still ridiculously expensive. Plus, at 400 Gbps you can bond just three links and get 1.2Tbps through - well, probably less after overhead.

Damn CS/CE's think they know RF!

Re:Why sell one, when you can sell two? (1)

MetricT (128876) | about 2 years ago | (#41476757)

I care about as much for Terabit over copper, as I do for Terabit over caloric, phlogiston, or aether. Short-haul Terabit over fiber would be quite sufficient for our use-case (network never leaves the NOC, which I suspect is probably the major use case, long-haul is a smaller though higher margin market) and is *much* easier to pull off.

And FWIW, physicist, not CS/CE.

Copper? How quaint. (2)

Stavr0 (35032) | about 2 years ago | (#41476661)

Shouldn't we be pushing photons over glass by now? Fibre infrastructure has existed for decades; isn't it time it was scaled down to individual computers and appliances?

Re:Copper? How quaint. (2)

Areyoukiddingme (1289470) | about 2 years ago | (#41478505)

It has been. You can get it at the desktop, and people do. (I recall a former coworker who did a fiber-to-the-desktop deployment for an NSA office nearly a decade ago.) It's still really really pesky to deal with, even to this day. Plastic fiber does make things a lot easier, but it has its own downsides. Terminating copper for use at gigabit speeds is finicky enough that I learned not to try. I buy manufactured patch cables, and still have the odd one fail (albeit fewer than hand-terminated cables). Terminating fiber is considerably worse. Maybe if somebody made a little portable do-everything-automatically machine that could cut fiber, polish the end, and attach the connector, and achieve very high reliability in doing it, fiber could be deployed more widely. Until then, copper is king, 'cause I own a pair of wirecutters and so does everybody else.

My sides hurt... (0)

Anonymous Coward | about 2 years ago | (#41477359)

from laughing. Why? Because ISPs won't pay the cost of upgrading their networks to that kind of speed. Why? Because companies bend over for their shareholders instead of keeping their networks up to date.

Dead, for now. (1)

Another, completely (812244) | about 2 years ago | (#41477571)

So ... not dead then.

Does anyone remember hearing that Des O'Malley once claimed the Maastricht treaty had "been dealt, at least temporarily, a fatal blow"?
