
IEEE Seeks Consensus on Ethernet Transfer Speed Standard

samzenpus posted about 2 years ago | from the setting-the-bar dept.

Networking

New submitter h2okies writes "CNET's News.com reports that the IEEE will start today to form the new standards for Ethernet and data transfer. 'The standard, to be produced by the Institute of Electrical and Electronics Engineers, will likely reach data-transfer speeds between 400 gigabits per second and 1 terabit per second. For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.' The IEEE also reports on how the speed needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"


92 comments


Hype! (4, Funny)

fm6 (162816) | about 2 years ago | (#41058105)

Ethernet transfers never use more than a fraction of available bandwidth. So it's 2 blu-ray discs per second, 4 tops!

Re:Hype! (0)

Anonymous Coward | about 2 years ago | (#41058187)

I have had the results of SQL queries nearly max out my Ethernet connection (96Mbps)

Re:Hype! (2)

MyLongNickName (822545) | about 2 years ago | (#41058217)

I have had the results of SQL queries nearly max out my Ethernet connection (96Mbps)

I told you it wasn't good to use a CROSS JOIN across all of your Access tables.
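The joke lands because a CROSS JOIN returns the Cartesian product of its inputs, so result sizes multiply. A minimal sqlite3 sketch (table names and row counts are invented for illustration):

```python
import sqlite3

# Toy tables; names and sizes are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("CREATE TABLE customers (id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1000)])
conn.executemany("INSERT INTO customers VALUES (?)", [(i,) for i in range(1000)])

# CROSS JOIN yields the Cartesian product: 1000 x 1000 = 1,000,000 rows,
# every one of which has to cross the wire to the client.
(n,) = conn.execute(
    "SELECT COUNT(*) FROM orders CROSS JOIN customers").fetchone()
print(n)  # 1000000
```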

Re:Hype! (-1)

MyLongNickName (822545) | about 2 years ago | (#41058253)

Although I am not certain, I think that it really wouldn't matter on an Access table. When you write a query against the DB, you'll get all rows returned to you. Filtering and joining occur on the client side... feel free to correct me if I am wrong.

Re:Hype! (-1)

Anonymous Coward | about 2 years ago | (#41058293)

You are completely and utterly wrong.

Re:Hype! (0)

ibwolf (126465) | about 2 years ago | (#41058453)

You are wrong. The whole point of databases is to handle such data intensive workloads.

Re:Hype! (0)

Anonymous Coward | about 2 years ago | (#41058637)

Do you know anything about Access? Basically, it is a glorified flat file and the client does the processing.

Re:Hype! (2, Informative)

keltor (99721) | about 2 years ago | (#41059003)

AC is 100% correct provided we're dealing with a local Access database and not Access fronting a SQL Server - in the latter case, all of the queries and what not take place on the SQL Server and not on the client. The latter case is not uncommon with Internal Applications that started off as an Access application and were later converted to run on SQL Server.

Re:Hype! (2, Insightful)

vlm (69642) | about 2 years ago | (#41058681)

Unfortunately I have met several programmers who do exactly that. Usually recent refugees from homemade .csv land.
Then they go on an epic bender of why SQL is not webscale and we need to use nosql solutions etc etc.
I realize this sounds like a daily WTF post but I've also seen people implement sorting in the app instead of letting the DB do it. Madness.

Re:Hype! (4, Funny)

jeffmeden (135043) | about 2 years ago | (#41058833)

Unfortunately I have met several programmers who do exactly that. Usually recent refugees from homemade .csv land.
Then they go on an epic bender of why SQL is not webscale and we need to use nosql solutions etc etc.
I realize this sounds like a daily WTF post but I've also seen people implement sorting in the app instead of letting the DB do it. Madness.

Why would I trust the lousy SQL server app to properly implement a superior bubble sort algorithm?

Re:Hype! (1, Offtopic)

SuricouRaven (1897204) | about 2 years ago | (#41059011)

I've only had to implement a sort myself once, for a dedup program. It needed to sort a list many gigabytes in length, so it was never all going to fit in RAM. I used a modified radix sort. A quicksort would look faster on paper, but with a problem: non-linear access, which kills performance on spinning disks.
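The parent's actual program isn't shown; here is a minimal sketch of the idea under the assumption that records are text lines. One sequential pass partitions records into bucket files by their first byte (an MSD radix step); each now-small bucket is then sorted in RAM. Every disk access is sequential, which is the property that matters on spinning disks.

```python
import os

def external_radix_sort(lines, bucket_dir):
    # Pass 1: one sequential sweep writes each record into a bucket file
    # chosen by its first byte -- an MSD radix partitioning step.
    handles = {}
    for line in lines:
        b = line[0] if line else "\x00"
        if b not in handles:
            path = os.path.join(bucket_dir, "bucket_%03d" % ord(b))
            handles[b] = open(path, "a+")
        handles[b].write(line + "\n")
    # Pass 2: each bucket now fits in memory; sort buckets in key order
    # and concatenate. Per-bucket sorting yields a globally sorted
    # result because the bucket key is a prefix of every record in it.
    result = []
    for b in sorted(handles):
        f = handles[b]
        f.seek(0)
        result.extend(sorted(line.rstrip("\n") for line in f))
        f.close()
    return result
```

A real implementation would recurse on buckets that are still too big for RAM, but the access pattern stays the same.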

Re:Hype! (0)

Anonymous Coward | about 2 years ago | (#41061635)

There are valid reasons for having an application do this, largely depending on how the server is implemented. When you're dealing with a brain-dead implementation like, e.g., MySQL, it's not uncommon to shift such work to the application.

Of course, when implementing such optimizations, you should have empirical evidence under the same circumstances that it's worth the effort. It's this step that cargo cult programmers don't understand. They read some blurb on some blog where some dude published data on a micro benchmark, and then extrapolate way beyond anything the data might support (which is usually nothing beyond the micro benchmark setup).

But saying that it's never necessary is almost as bad. It's still cargo cult programming. "Always do X" is equivalent to "Never do X", in this context.

Re:Hype! (1)

keltor (99721) | about 2 years ago | (#41059047)

It really all depends on the particulars of the application, but in many cases they are totally correct that putting the logic in the application can be much more scalable. Often these are not all or nothing type arguments. It may be that you need a mix of traditional SQL for some stuff with business rules and all that jazz, another SQL setup for hosting certain types of dynamic data and then a NoSQL setup for the unstructured data. There is no one solution for all troubles, that's just the way it is.

Re:Hype! (1)

Nadaka (224565) | about 2 years ago | (#41059265)

Not everything can be done in the database, even sorting. I've had client requirements that a column be sorted by the 4th character unless the field had only a 2-character prefix instead of a 3-character prefix. And some values did not have a prefix at all, and it got worse from there.

The arcane SQL that would have been required for that would have been nearly impossible to deal with. The good news is that I was eventually given permission to tell the client to fuck off (though I had to be slightly nicer about it).
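The exact rule can't be recovered from the comment, but the shape of the problem can be sketched: a sort key that depends on which prefix form each value happens to use. A hypothetical reconstruction in Python (the "-" separator and the sample values are invented assumptions):

```python
def awkward_sort_key(value):
    # Hypothetical reconstruction of the client's rule: sort on the text
    # after a 3-character prefix, or after a 2-character prefix, or on
    # the whole value when there is no prefix at all. The "-" separator
    # is an invented assumption for illustration.
    if len(value) > 3 and value[3] == "-":
        return value[4:]
    if len(value) > 2 and value[2] == "-":
        return value[3:]
    return value

rows = ["ABC-zulu", "AB-alpha", "noprefix"]
print(sorted(rows, key=awkward_sort_key))  # ['AB-alpha', 'noprefix', 'ABC-zulu']
```

Pushing the same conditional logic into SQL is possible but considerably harder to read and maintain, which is the poster's point.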

Re:Hype! (1)

viperidaenz (2515578) | about 2 years ago | (#41059467)

I know right. You'd need to write a deterministic function or use a function-based index to do your sorting! That's nearly impossible! Idiot.

Re:Hype! (0)

Anonymous Coward | about 2 years ago | (#41064701)

I know right. You'd need to write a deterministic function or use a function-based index to do your sorting! That's nearly impossible! Idiot.

He never mentioned what database engine was used or if it was possible to migrate the data into another database.
Making assumptions about the capabilities of the database with incomplete information makes you the idiot.

Re:Hype! (1)

vlm (69642) | about 2 years ago | (#41066261)

Not everything can be done in the database, even sorting. I've had client requirements that a column by sorted by the 4th character unless the field only had a 2 character prefix instead of a 3 character prefix. and some values did not have a prefix at all, and it got worse from there.

Been there, done that, did not enjoy it at all. Well, not that exact situation. The solution I chose, because I had DBA access, was to create a big fixed-width synthetic sort key and index on it. You put all that icky if/then and case/end stuff into an app that squirts out a big sort key that'll always sort correctly based on the crazy rules. Often this is an excuse for implementing some kind of MVC wrapper where you put the key generator in the model, or an excuse to make triggers in the database if it can be done in SQL (triggers are trinary: they drive some people absolutely batty, some think they're brilliant, and almost everyone has no idea the concept even exists), or an excuse to make your own wrapper class for DB access where it's the only thing allowed to do inserts/updates (which you can enforce by giving the "app uname pword" only select and delete access, while the wrapper class uses a whole nother uname pword which actually has insert and update access).

Although your solution of telling the client to F off is more elegant, because there's probably no business case for ridiculous stuff like that; they're just trying to anger you or set you up for failure so they can replace you with the boss's son, etc. If that's what they want, once they understand the solution they'll demand alternate sorting methods, changing the sort method on the fly, etc.

If you know what a hash is, it's conceptually the opposite, sorta: you still want it to be deterministic, it's wide but you don't care, but instead of being nearly pseudorandom it's very predictable... in fact it's predictable in the precise order you need to sort the data...
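A minimal sqlite3 sketch of the synthetic-key approach described above (the prefix rule, table, and column names are all invented for illustration): the app computes a fixed-width key once at insert time, and a plain ORDER BY on an indexed column does the rest.

```python
import sqlite3

def make_sort_key(value):
    # Hypothetical key generator: fold the messy prefix rules into one
    # fixed-width string that always sorts correctly with plain ORDER BY.
    body = value.split("-", 1)[1] if "-" in value else value
    return body.ljust(32)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT, sort_key TEXT)")
conn.execute("CREATE INDEX idx_sort ON t(sort_key)")
for v in ["ABC-zulu", "AB-alpha", "noprefix"]:
    # Only the wrapper that owns inserts computes the key.
    conn.execute("INSERT INTO t VALUES (?, ?)", (v, make_sort_key(v)))

ordered = [r[0] for r in conn.execute("SELECT val FROM t ORDER BY sort_key")]
print(ordered)  # ['AB-alpha', 'noprefix', 'ABC-zulu']
```

In a production schema the key column would be maintained by a trigger or a dedicated data-access layer, as the comment suggests, so no other writer can leave it stale.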

Re:Hype! (0)

Anonymous Coward | about 2 years ago | (#41058299)

SQL Server, ran a select * from table
  (1st time working on a real project, fresh out of college, didn't know a dev database could have more than 500k rows)
Was SQL Server though

Re:Hype! (1)

viperidaenz (2515578) | about 2 years ago | (#41059509)

I do select * from tables in our dev database all the time. It has tables with several million rows. Of course, only 50 get returned at a time, since I'm not a lunatic and don't set the page sizes to unlimited.
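Page-sized fetches are what keep those queries cheap on the wire. A small sqlite3 sketch (table name and sizes invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(10000)])

# Fetch one 50-row page at a time instead of the whole table.
page = conn.execute(
    "SELECT id FROM big ORDER BY id LIMIT 50 OFFSET 100").fetchall()
print(len(page), page[0][0])  # 50 rows, starting at id 100
```

Most query tools do exactly this behind the scenes, which is why a "select *" in a GUI rarely saturates the network.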

Re:Hype! (1)

cheater512 (783349) | about 2 years ago | (#41060555)

I've got two webservers and a database server connected via gigabit. It routinely blasts past 100mbps in DB traffic.
Right now it averages over half a terabyte of data a day and most of that is in certain peak hours.

Re:Hype! (1)

DJRumpy (1345787) | about 2 years ago | (#41058219)

I would be happy just to have speeds shown at 'real world' results, rather than 'theoretical' limits. What good are these ratings if people in the real world never actually see them?

Re:Hype! (1)

fm6 (162816) | about 2 years ago | (#41058419)

There's a good reason and a bad reason. The good reason is that theoretical limits are objective and reproducible; real world limits depend upon a host of factors.

The bad reason is that the most impressive-sounding statistic is the one that sells.

Re:Hype! (1)

DJRumpy (1345787) | about 2 years ago | (#41058475)

Yes, but knowing that overhead will always reduce the throughput by a specific amount, they could simply exclude that. Wireless is a good example of what you are saying as it would vary so much depending on location, interference, etc, but it should be based on the best possible 'real world' values, rather than a non-achievable theoretical limit.

Re:Hype! (1)

fm6 (162816) | about 2 years ago | (#41058549)

overhead will always reduce the throughput by a specific amount,

No it won't. Overhead depends on a variety of factors: cable quality, traffic levels, software.

Re:Hype! (2)

DJRumpy (1345787) | about 2 years ago | (#41058661)

I'm talking about protocol overhead.

For example, all things being equal, a computer connected to a hub via a stock ethernet cable with a guaranteed link speed up and down should produce a result that's generally in the same area each time (hence the 'real world').

It's not a difficult concept. We're not asking for a rating for every conceivable configuration, but best case real world numbers. WiFi theoreticals are nowhere near their real world numbers.

Re:Hype! (2)

TheRaven64 (641858) | about 2 years ago | (#41059105)

But running what? Are you measuring the speed of delivering ethernet frames? Or of IPv4 packets? Or IPv6 packets? Or of payload carried by TCP packets on either?

Re:Hype! (1)

fm6 (162816) | about 2 years ago | (#41059383)

I suppose you could probably get a consistent value if you were very specific about the physical setup and the benchmark. But it wouldn't be that much different from the theoretical limit, and wouldn't tell you jack about real-world use cases.

Benchmarking is a complicated, controversial branch of computing. Any time you try to "prove" that one piece of hardware or software is faster than another, somebody who's selling competing technologies will show you benchmarks to "prove" that you're wrong. You're not going to get any objective measures this way, especially not for ethernet, which is highly stochastic.

Re:Hype! (1)

viperidaenz (2515578) | about 2 years ago | (#41059571)

But when you're looking at layer 1, you can get those speeds in real world scenarios.

Re:Hype! (1)

dgatwood (11270) | about 2 years ago | (#41061569)

I would be happy just to have speeds shown at 'real world' results, rather than 'theoretical' limits. What good are these ratings if people in the real world never actually see them?

Sadly, for most of us, the real-world speed of our 1 Tbps Ethernet connection will be the speed of the 1.5 Mbps DSL line that feeds it. All the fast LANs in the world won't help you if WAN speed improvement is blocked by telecoms that don't want to spend any money on their infrastructure.

Re:Hype! (0)

Anonymous Coward | about 2 years ago | (#41062319)

Internet speed is irrelevant for local LAN connections.

Re:Hype! (1)

dgatwood (11270) | about 2 years ago | (#41063553)

Yes, and for a few esoteric markets like high-performance computing, that matters. For the rest of the world, it doesn't.

Businesses with lots of computers may say they want terabit Ethernet, but computers aren't built for them. They're built for consumers. When the average consumer has at best double-digit Mbps service to their home, there's not much impetus for computer manufacturers to build in hardware that goes much over gigabit. So when businesses realize that the only way they're getting terabit-E is to spend a thousand bucks per workstation, they'll be like, "Eh... lemme think about it and get back to you", and then they won't.

And even if it were only fifty bucks, the cost justification would be pretty difficult even for businesses. Outside of high tech firms, most businesses' internal networking needs are pretty modest. If a faster network could get them data from the outside world faster, they might go for it, but when you ask a manager how much it is worth for that file to get across the LAN in three seconds instead of fifteen, in most businesses, their answer would be "not much". Within high tech, of course, terabit Ethernet would be a godsend—it would improve performance of build clusters pretty dramatically, for example—but high tech companies don't provide the purchasing volume to bring 10-GigE down into the consumer space, much less terabit.

To put things in perspective, when gigabit Ethernet came out, it took just a little over a year before you could buy computers with built-in GigE. The 10-GigE standard came out five years ago, but still you don't see PCs with 10-GigE. Nada. Zip. Zilch. If it takes a decade for that to be anything more than a niche product, it'll take a century for them to see any return on their investment in designing the next generation of LAN technology. Why? Because the only people who buy networking products in any real volume are using it mainly to connect to the outside world, and if someone's WAN pipe is a garden hose, that person is not going to repipe his or her house with 12-inch pipes.

Re:Hype! (1)

Targon (17348) | about 2 years ago | (#41066407)

This is the old chicken-and-egg situation when it comes to parts, so the sooner the standard is released, the sooner products will show up that support it. Now, one thing that many businesses want is clients with no local storage, or perhaps even removing all processing from the local "terminal" and having a central server provide EVERYTHING. The only way to make this sort of thing not seem like crap compared to a reasonable workstation would be to have enough bandwidth to handle ANY demand. Displays connected via Ethernet could be possible, for example.

For the residential market, there will also be some who go with a central "server" for all their music and videos in the home, and for that, higher-speed links would be welcomed. Another thing that many people do not consider is how quickly SSDs will take over once the price comes down, which also means that access to information will become that much faster. There is also the desire to connect routers and switches to provide greater bandwidth within businesses. The residential market will also generally trail the business market when it comes to "wanting/needing the latest and greatest", so looking at how things are TODAY says little about how things will be in 10 years.

Basically, think outside your perception of "the general public", and look at where things WILL be. I remember when 10BASE2 with terminators on the coax cables was normal for networking. Things evolve, and the needs and capabilities evolve. I also remember the days of dumb terminals, which basically died as the needs of workstations outgrew having a central server hold all the processing ability. Those days COULD return if bandwidth were high enough to have the video, audio, and I/O connected via a network connection.

Re:Hype! (2)

dgatwood (11270) | about 2 years ago | (#41073087)

Now, one thing that many businesses want would be the idea of clients with no local storage, or perhaps even remove all processing in the local "terminal", and have a central server provide EVERYTHING.

The last thing most businesses want are dumb terminals. It doesn't matter how fast the link is. A thin-client business is a business that ceases to function if the very-expensive server goes down. The few businesses I've seen that want thin clients are mostly in retail, and their networking needs are usually met quite well by 10BASE-T.... Most businesses prefer the robustness and redundancy of computers that can mostly function independently.

For the residential market, there will also be some who go with a central "server" for all their music and videos in the home, and for that, higher speed links would be welcomed.

No, they wouldn't. Even for most of those folks, 100 megabits per second is more than fast enough. Most users who set up a central video server haven't even bothered to upgrade to gigabit yet, much less 10 gigabit, much less terabit. You can reliably stream HD video over Wi-Fi. You pretty much have to have insane networking needs to want to go past gigabit. Even Blu-Ray HD video has a maximum data rate of 40 Mbps. So to require more than gigabit speeds, you would have to be streaming 25+ Blu-Ray-quality high definition video streams. And your server would have to be able to serve 25+ Blu-Ray-quality high-def streams simultaneously.
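The 25-stream figure checks out arithmetically:

```python
# Sanity-checking the comment's arithmetic with decimal (line-rate) units.
link_bps = 1_000_000_000   # gigabit Ethernet line rate
stream_bps = 40_000_000    # Blu-ray's maximum video data rate, per the comment
print(link_bps // stream_bps)  # 25 simultaneous worst-case streams
```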

We've gotten to the point where in-home networking is way more than fast enough to suit even the most demanding home users, and all but the absolute most demanding business users. Improving LANs is not a useful thing to do right now. That time and energy should be expended where it will actually do some good in the real world, which means improving the speed of WANs, not LANs.

Basically, think outside your perception of "the general public", and look at where things WILL be.

I am. I still don't see the point. You have two options for servers: making the servers bigger or making the servers more numerous. Unless you are doing something where strict synchronization is important, there's no benefit to one versus the other, which means there's no real benefit for making LANs faster. Those rare exceptions are things like render farms, build farms, etc. and are such tiny exceptions to the rule that they almost aren't worth seriously pondering. So the number of people or companies that would be helped by terabit LANs even in a decade or two is going to be measured in the single-digit millions of computers, give or take. By contrast, the number of folks that would be helped by faster WANs even today would be measured in the single-digit billions of computers. On that scale, the benefit from faster LANs is a rounding error.

not so much hype (2)

Chirs (87576) | about 2 years ago | (#41058383)

It's pretty easy to max out a 100Mbit ethernet link. Gigabit is also doable with a bit of work. It's a bit harder to max out a 10G port but it can be done with multiple queues and large packets. Once you hit 10G you really need to be using multiple queues spread across multiple CPUs and offloading as much as possible to hardware.

Re:not so much hype (1)

KingMotley (944240) | about 2 years ago | (#41058821)

Gigabit isn't difficult at all. I max out my gigabit network between my main computer and my NAS quite easily. Well, using 95% of it anyhow -- over Windows shares, nonetheless. I'm sure I could do better with a protocol that has less overhead and better windowing.

Re:not so much hype (2)

AaronW (33736) | about 2 years ago | (#41059253)

I have no problem saturating 10G links, but then again I'm working on multi-core CPUs with 10-32 cores optimized for networking (the 10G interfaces are built-in to the CPU). I have a PCIe NIC card on my desk with 4 10Gbe ports on it (along with a 32-core CPU).

It's also neat when you can say you're running Linux on your NIC card (it can even run Debian).

Re:not so much hype (1)

user32.ExitWindowsEx (250475) | about 2 years ago | (#41059995)

what card is this

Re:not so much hype (1)

AaronW (33736) | about 2 years ago | (#41061261)

Search for CN6880-4NIC10E [google.com] . It has a Cavium OCTEON CN6880 [cavium.com] 32-core CPU on it with dual DDR3 interfaces. It would take some work to make it run Debian (requires running the root filesystem over NFS over PCIe or 10Gbe). All of the changes to support Linux are in the process of being pushed upstream to the Linux kernel GIT repository and hopefully sometime in the future I will get enough time to start pushing my U-Boot bootloader changes upstream as well.

All of the toolchain support is in the mainline GCC and binutils and glibc, though some of it might be in GIT since we just pushed our stuff up recently. The toolchain supports all of the extended instructions including those used for encryption and hashing.

There is a SDK but the SDK is generally quite expensive. The SDK is used for writing stand-alone applications that run on bare-metal on various cores in parallel with the Linux kernel. That way Linux runs on some cores and custom networking applications run on other cores without the overhead of a general-purpose operating system. Of course Linux could just as easily run on all the cores. It's a nice 64-bit MIPS platform as long as you don't need floating point. The CPU also has built-in acceleration for encryption, hashing, compression (deflate), pattern matching (regex) and RAID calculations (XOR/RAID6).

The PCIe bus just looks like another high-speed network interface as far as the CPU is concerned, so the card can basically be a network accelerator card for things like encryption or disk I/O. There's SDK add-ons to support things like TCP acceleration, Samba, SNORT, IPSEC and more.

20 bluray per tbit? (5, Insightful)

VMaN (164134) | about 2 years ago | (#41058117)

I think someone got their bits and bytes mixed up...

Re:20 bluray per tbit? (2)

JavaBear (9872) | about 2 years ago | (#41058161)

That always happens.
With a little overhead, 1 Tbit/s is at most 100 GiB a second. Two Blu-rays.

Re:20 bluray per tbit? (0)

Anonymous Coward | about 2 years ago | (#41058267)

1 Tbit = 125 GB; 1 Blu-ray disc = 25 to 128 GB.

Re:20 bluray per tbit? (2)

Frnknstn (663642) | about 2 years ago | (#41058599)

1 Terabit a second on the wire translates to about 100 Gigabytes a second of actual data transfer. Most modern encoding schemes and encapsulation protocols average 10 bits to represent an octet.

Re:20 bluray per tbit? (2)

ericloewe (2129490) | about 2 years ago | (#41059755)

Ethernet's specs account for encoding overhead. That means 1Gb/s is 1Gb/s minus protocol overhead, not 800Mb/s minus protocol overhead.

Re:20 bluray per tbit? (1)

Frnknstn (663642) | about 2 years ago | (#41068255)

True... the phrase I was searching for was 'protocol overhead'. I did a quick google and found the following:

http://sd.wareonearth.com/~phil/net/overhead/ [wareonearth.com]

Since we are talking ethernet, here are the numbers:
Ethernet: 1500/(38+1500) = 97.5293 %
TCP: (1500-52)/(38+1500) = 94.1482 %
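Those figures follow from a 1500-byte payload riding with 38 bytes of per-frame Ethernet overhead (preamble, header, FCS, interframe gap), with TCP/IP headers taking a further 52 bytes out of the payload:

```python
# Reproducing the quoted efficiency numbers.
frame_overhead = 38   # preamble + Ethernet header + FCS + interframe gap
payload = 1500        # standard MTU
tcpip_headers = 52    # IP + TCP headers, including TCP timestamps

eth = payload / (frame_overhead + payload)
tcp = (payload - tcpip_headers) / (frame_overhead + payload)
print(f"Ethernet: {eth:.4%}  TCP: {tcp:.4%}")  # 97.5293% and 94.1482%
```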

Re:20 bluray per tbit? (1)

Anonymous Coward | about 2 years ago | (#41058209)

1 tbit = 1000 gbit = 125 gbyte = 20 x 6.25 gByte.

A 1080p feature film could be 6.25GB.

I don't see the mix up. (Though I do agree that's a theoretical maximum with no accounting for overhead or other factors that reduce the effective performance).
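With decimal prefixes the chain of conversions is easy to verify, along with the article's eventual correction for 50 GB dual-layer discs:

```python
tbit = 10**12                # 1 Tbit, decimal prefixes throughout
gbytes = tbit / 8 / 10**9    # bits -> bytes -> GB
print(gbytes)                # 125.0 GB moved per second
print(gbytes / 6.25)         # 20 movies, if a movie is only 6.25 GB
print(gbytes / 50)           # 2.5 dual-layer Blu-rays, per the correction
```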

Re:20 bluray per tbit? (1)

mug funky (910186) | about 2 years ago | (#41064777)

but we're talking DVD9's with room to spare now, not blu-rays.

Re:20 bluray per tbit? (0)

Anonymous Coward | about 2 years ago | (#41065347)

1 tbit = 1000 gbit = 125 gbyte = 20 x 6.25 gByte.

The units require a bit of fine-tuning: 1 Tb = 1000 Gb = 125 GB = 20 x 6.25 GB.

Re:20 bluray per tbit? (2)

fustakrakich (1673220) | about 2 years ago | (#41058319)

It's much worse than that. Somebody's reading comprehension isn't quite up to par

FTFA: "enough to copy two-and-a-half full-length Blu-ray movies in a second."

Re:20 bluray per tbit? (2)

fustakrakich (1673220) | about 2 years ago | (#41058367)

Whooops!!

Correction: "Updated 10:05 a.m. PT August 20 to correct the 1Tbps data-transfer speed in terms of Blu-ray disc copying times."

Re:20 bluray per tbit? (0)

Anonymous Coward | about 2 years ago | (#41058669)

If it had been 1 TiB/sec, the math would have been right for 20 dual-layer Blu-ray discs (50 GiB each) a second

Re:20 bluray per tbit? (1)

Jah-Wren Ryel (80510) | about 2 years ago | (#41058397)

I think someone got their bits and bytes mixed up...

They never said how big the blurays actually were.

Re:20 bluray per tbit? (1)

VMaN (164134) | about 2 years ago | (#41058613)

A standard bluray is 120 x 1.2mm, right?

Re:20 bluray per tbit? (-1)

Anonymous Coward | about 2 years ago | (#41059541)

But the ray itself is infinitely narrow, like your brain

Re:20 bluray per tbit? (1)

swamp_ig (466489) | about 2 years ago | (#41066289)

More importantly, what's the conversion between Blurays and Blue Whales?

Re:20 bluray per tbit? (0)

Anonymous Coward | about 2 years ago | (#41068039)

What's that, 128 gigabytes? That's hardly 3 movies....

Please use the appropriate units. (0, Offtopic)

Anonymous Coward | about 2 years ago | (#41058143)

We need this information in Library of Congresses. Or a fraction of Ludicrous Speed.

copy 20 full-length Blu-ray movies in a second (1)

fustakrakich (1673220) | about 2 years ago | (#41058215)

The MPAA will be putting the kibosh on that.

I DRINK YOUR MILKSHAEK!!!! (0)

Anonymous Coward | about 2 years ago | (#41059913)

all the better for too big to fail firms to suck the foam right out of your 401k, peon!

Consequence for the last mile? None for ages. (3, Insightful)

jandrese (485) | about 2 years ago | (#41058261)

How important is 400G to the last mile? You might as well ask how important a new high-bypass turbine engine for jumbo jets will be to my motorcycle. It's for a totally different market. We're just barely getting to the point where it starts to make sense for early adopters to get 10G Ethernet on their ridiculously tricked-out boxes (and industry has been using it for backhaul for some time now), and 1G Ethernet is still gross overkill for the majority of users. We have at least gotten to the point where 10Mb Ethernet is too slow, however.

Re:Consequence for the last mile? None for ages. (4, Insightful)

Shatrat (855151) | about 2 years ago | (#41058619)

Consequences to me in long haul fiber optic transport? Massive.
Depending on how they implement 400G and Terabit it may affect the transport systems I deploy today, given that those speeds will likely require gridless DWDM which is currently just on the roadmap for most vendors.
Then, once it does come out, if our infrastructure is ready for it we will probably be able to deploy a Terabit link for the same price as 3 or 4 100G links. By that time 100G will start feeling a little tight anyway if we keep up the 50% a year growth rate.

There are no consequences to the last mile, for the same reason 100G has no consequences in the last mile.
Even 10G I only see used in the last mile to large customers like wireless backhaul or healthcare.
It's a silly summary but still an important topic.

Re:Consequence for the last mile? None for ages. (3, Interesting)

zbobet2012 (1025836) | about 2 years ago | (#41058747)

Agreed, the summary is really quite stupid. It does matter for the last mile, since the last mile isn't currently the limiting factor in most downloads; backhaul bandwidth is. It also really matters for trans-continental lines, where upgrading the routers without having to upgrade the fiber can mean massive improvements without huge costs.

Ask CERN (1)

davidwr (791652) | about 2 years ago | (#41060847)

I'm sure they could put 400G last-mile to good use.

But yeah, for most of us, not so much, at least not this half-decade.

Re:Consequence for the last mile? None for ages. (0)

Anonymous Coward | about 2 years ago | (#41064285)

40G is directly relevant to the last mile and 100G is not far behind that. A lot depends upon the topology of the last mile network, alongside credible plans for future growth. I do this for a living.

Standards bodies need to work two generations beyond what can be economically realized today, so 400G and 1T are entirely defensible one and two layers up in the network from a 40G/100G last mile.

Re:Consequence for the last mile? None for ages. (0)

Anonymous Coward | about 2 years ago | (#41064751)

How important is 400G to the last mile?

For using youtube or other old-style services? Not much, the servers and backbones would not be able to handle that load from that many users.
For reducing LAN gaming latency, NAS access, and torrents (where it isn't too unlikely that one of the peers lives close to you and can use your ISP's internal network to transfer packets), it could be pretty nice.
And this is only today; since the article is about reaching consensus in a standardization organization, it will take a couple of years until the standard is done, and then a couple more years until you can buy an Ethernet card that supports it. Your question is irrelevant; what should be asked is "How important is 400G to the end user in 10 years?"
I have no idea how important it is, but if it turns out to be nice to have, then it would be convenient to have the standard ready. I hate having to wait 10 years until I can buy what I want.

Re:Consequence for the last mile? None for ages. (1)

Sigg3.net (886486) | about 2 years ago | (#41070375)

I bought CAT6A bulk for my apartment, and have a star layout with wall panels in every room. The termination is probably not up to spec, so I expect somewhat lower speeds. Then again, it's only a home network.

The point of going CAT6A was to avoid (or at least delay) upgrade.

So far, CAT6A equipment is nowhere to be found in my price-range. And laptop hard disks are still the number one bottleneck. Going all SSD on OS disks and 7200rpm on the NAS.

Isn't that how they say "start" in Boston? (0)

Anonymous Coward | about 2 years ago | (#41058277)

"the IEEE Will stat today"

Isn't that how they say "start" in Boston?

Re:Isn't that how they say "start" in Boston? (1)

Huggs (864763) | about 2 years ago | (#41058495)

Grr... editors and their misspellings.

"the IEEE will staht today"

there, fix that for ya.

That depends (1)

davidwr (791652) | about 2 years ago | (#41058311)

When will the standard become final?

If it will become final by Christmas, I'll give you a number I can live with.

If it won't become final for 12 months after that, I'll give you a higher number.

I'm still trying to get 10 Gbps (0)

Anonymous Coward | about 2 years ago | (#41058323)

I've got a server that I'm upgrading and updating. While doing the ever-popular Windows Update, I initially thought the unconnected network speed was defaulting to 10 Mbps, and kind of ignored it -- just looking at the Task Manager network connection percentage to confirm that data was transferring.

Well, while I was waiting for a Windows Update download to finish, I actually READ the speed in Task Manager. It wasn't unconnected at 10 Mbps, it was CONNECTED... at 10 Gbps. Damn... like... DAMN that is FAST.

So, I started thinking: What if I were to connect a switch to this that could keep up with this fast speed?

Then I started looking: I cannot find an Ethernet switch (less than $2,000) that supports 10 Gbps (with standard Ethernet connectors).

So I started thinking again... 1 Gbps ... 10 Gbps ... 1 Gbps ... 10 Gbps.

Now the question: Does anyone have any ideas where I can get a reasonably priced 10 Gbps switch?
      (Or a switch with one or two 10 Gbps Ethernet ports, and the rest running at 1 Gbps?)

not worth it in most cases (1)

Chirs (87576) | about 2 years ago | (#41058515)

Before you drop serious dough on a 10G switch...consider whether you'll be able to actually use the speed. That's roughly a gigabyte per second. You'd need a reasonably serious RAID to get anywhere close to that unless your data is all in RAM. You'd also need a fairly beefy PCI subsystem and likely 8+ CPU cores just to keep up with the I/O.

For backplane routing it makes sense because it's just forwarding lots of I/O aggregated from lots of other places. For most servers it's overkill.
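The arithmetic here is easy to sanity-check. A rough back-of-envelope sketch (the per-device throughput figures below are assumptions for illustration, not benchmarks):

```python
import math

# Back-of-envelope: how much storage does it take to keep a 10GbE
# link busy? Per-device figures are rough assumptions, not benchmarks.
LINK_GBPS = 10
link_bytes_per_sec = LINK_GBPS * 1e9 / 8   # 1.25e9 B/s, i.e. ~1.25 GB/s

HDD_BYTES_PER_SEC = 150e6   # assume ~150 MB/s sequential per 7200rpm disk
SSD_BYTES_PER_SEC = 500e6   # assume ~500 MB/s per SATA SSD

def devices_to_saturate(per_device_bytes_per_sec):
    """Devices streaming in parallel needed to fill the link."""
    return math.ceil(link_bytes_per_sec / per_device_bytes_per_sec)

print(devices_to_saturate(HDD_BYTES_PER_SEC))  # 9 spinning disks
print(devices_to_saturate(SSD_BYTES_PER_SEC))  # 3 SATA SSDs
```

Which is the parent's point: short of a ~9-spindle RAID or an SSD array, a single server rarely has anything that can feed the pipe.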

Re:not worth it in most cases (2)

KingMotley (944240) | about 2 years ago | (#41059041)

Nah. My NAS (low end) maxes out my 1Gbps connection easily, and they claim I can team two 1Gbps connections together and it will fill them both. Based on the CPU usage and I/O, I'd say it could do much more than that if it had better connectivity options. It's not unreasonable to need 10Gbps connections, although, yes, actually using all the bandwidth between any two endpoints would be more difficult. Most enterprise SANs and some NASes use RAM and SSDs as caches and can easily saturate a 10Gbps link by themselves.

10Gbps = 1250MB/s. My OCZ Revo could saturate that, easily.

Re:not worth it in most cases (2)

SuricouRaven (1897204) | about 2 years ago | (#41059061)

The most common reason for giving a server that much network capacity is virtualisation. A server hosting a collection of VMs will have hardware that substantial, and all those VMs sharing one physical interface will put the 10G to good use.

Re:not worth it in most cases (1)

afidel (530433) | about 2 years ago | (#41059769)

Perhaps more importantly when you combine storage, VM migration, and network traffic onto a pair of interfaces 10Gb is often barely enough and really requires some type of QoS so that migration traffic doesn't starve network or more importantly storage traffic.

Re: I'm still trying to get 10 Gbps (0)

Anonymous Coward | about 2 years ago | (#41059979)

Not that I expected to use the full 10G network, but the system is (will be?) running with four 10K RPM (RAID 1+0) OS drives and eight (RAID 5) data drives. I frequently run into a wall with 1G networking when transferring multiple large data files (1GB...10GB; 500+GB at a time); not saturating my primary network would be 'nice'.

When I'm more comfortable with SSDs, I'll consider replacing the 10K drives with them (right now I'm seeing far too many negative comments to consider them a viable option).

Note 1: The 'primary' system will be running two VMs, one that does a little work 'all the time'; the other works primarily at night (backups).

Note 2: The eight data drives are 'limited' to 2TB recognized size (thank you, Dell) unless I want to shell out an additional $10,000+ to replace the base system.

Re:not worth it in most cases (0)

Anonymous Coward | about 2 years ago | (#41064761)

Before you drop serious dough on a 10G switch...consider whether you'll be able to actually use the speed. That's roughly a gigabyte per second. You'd need a reasonably serious RAID to get anywhere close to that unless your data is all in RAM. You'd also need a fairly beefy PCI subsystem and likely 8+ CPU cores just to keep up with the I/O.

For backplane routing it makes sense because it's just forwarding lots of I/O aggregated from lots of other places. For most servers it's overkill.

Your home network only has one switch? Hand in your geek card.

Wussies (1, Funny)

Quiet_Desperation (858215) | about 2 years ago | (#41058333)

I will accept nothing less than 1 zillion bits per second.

Re:Wussies (1)

lister king of smeg (2481612) | about 2 years ago | (#41059279)

Pff, I will accept nothing less than a closed timelike curve between my neural implant and every server in existence. Information delivered straight to my brain just before I ask for it.

Re:Wussies (1)

Anonymous Coward | about 2 years ago | (#41059845)

Well, at least there is plenty of room for it.

last mile (1)

McGruber (1417641) | about 2 years ago | (#41058351)

Of what consequence will this new standard be if the last mile is still stuck on beep & creep?

We're gonna need a faster station wagon!

Re:last mile (1)

ericloewe (2129490) | about 2 years ago | (#41059827)

We could add a second station wagon and allow for full-duplex communications.

Re:last mile (1)

aaronb1138 (2035478) | about 2 years ago | (#41064265)

Well now you got me thinking about racks of Backblazes in a cargo container.

We did this last time, and wasted a bunch of time. (4, Insightful)

Above (100351) | about 2 years ago | (#41058751)

Last time around there was a question of 40GE vs. 100GE. Largely (although not exclusively) the server guys pushed for a 40GE standard for a number of reasons (cost, time to market, cabling issues, and bus throughput of the machines), and the network guys pushed to go straight to 100GE. Some (pre-standard?) 40GE made it out the door first, but it's basically not a big enough jump (you can just LAG 4x10GE more cheaply), so there is no real point. 100GE is starting to gain traction, since doing a 10x10GE LAG causes reliability and management issues.

This diversion probably delayed 100GE getting to market by 12-24 months, and the vast majority of folks, even server folks, now think 40GE was a mistake.

Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.
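One reason a 10x10GE LAG is no substitute for a native 100GE port: a LAG typically hashes each flow onto a single member link, so no individual flow can exceed the member speed. A minimal sketch of that hash-based distribution (the hashing scheme and addresses are illustrative, not any vendor's actual algorithm):

```python
import hashlib

# A LAG hashes each flow onto ONE member link, so a single flow
# tops out at the member speed (10G here), no matter how many
# members the bundle has. Illustrative sketch only.
MEMBERS = 10  # 10 x 10GE members in the LAG

def member_for_flow(src_ip, dst_ip, src_port, dst_port):
    """Pick a member link from a hash of the flow tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % MEMBERS

# One big flow always lands on the same member, capped at 10G:
a = member_for_flow("10.0.0.1", "10.0.0.2", 49152, 5001)
b = member_for_flow("10.0.0.1", "10.0.0.2", 49152, 5001)
assert a == b  # deterministic: a single flow can't spread across members
```

A native 100GE link has no such per-flow ceiling, which is part of why the LAG workaround brings the management headaches the parent mentions.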

Re:We did this last time, and wasted a bunch of ti (2)

mvar (1386987) | about 2 years ago | (#41059023)

Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.

Because the companies that make the hardware are going to sell more modules :-P
I can't understand why the author even mentions laptops and PCs in this article. First make sure you can utilize the existing 1Gbps technology, then see how to implement faster interfaces. Right now the bottleneck on home Ethernet is slow hard drives and cheap "gigabit" NICs that underperform.

Re:We did this last time, and wasted a bunch of ti (0)

Anonymous Coward | about 2 years ago | (#41059375)

This, basically. 40GbE was a bodge, and an unnecessary one at that. Shit, I have machines with 8 10GbE ports on them already, LAGging 4 10GbE ports shouldn't present anyone with any problems. The only advantage to 40GbE is that you don't need as many physical switch ports, but that's relatively minor in the grand scheme of things. 100GbE and 1TbE should always have been the next logical steps.

Re:We did this last time, and wasted a bunch of ti (2, Informative)

Anonymous Coward | about 2 years ago | (#41059539)

The confusion between 40G ethernet and 100G ethernet is vast. But the actual reason for the standard has nothing to do with time-to-market or technological limitations beyond 40G. The 40G ethernet standard is designed to run ethernet over telco OC768 links. This standard allows vendors to support OC768 with the same hardware they use in a 100Gbps ethernet port.

dodgy calculations (1)

bloodhawk (813939) | about 2 years ago | (#41060157)

'For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.'

Someone doesn't understand the difference between bits and bytes.
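The arithmetic backs this up. Assuming a 50 GB dual-layer disc:

```python
# Sanity-check the "20 Blu-rays per second at 1 Tbps" claim.
TERABIT_PER_SEC = 1e12     # proposed top speed, in bits/s
BLURAY_BYTES = 50e9        # assume a 50 GB dual-layer disc

bytes_per_sec = TERABIT_PER_SEC / 8         # 1.25e11, i.e. 125 GB/s
discs_per_sec = bytes_per_sec / BLURAY_BYTES
print(discs_per_sec)                        # 2.5, not 20

# "20 discs per second" only works if you read terabit as terabyte:
assert (20 * BLURAY_BYTES) / bytes_per_sec == 8.0  # would need 8 Tbps
```

So 1 Tbps moves about 2.5 such discs per second; the 20-disc figure quietly treats the terabit as a terabyte.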

LAN not WAN (1)

Princeofcups (150855) | about 2 years ago | (#41060679)

"The IEEE also reports on how the speed needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"

None whatsoever, since Ethernet is a LAN protocol, not WAN. It will be used in data centers that require big pipes between servers, and possibly compete with Fibre Channel for access to storage.

Infiniband is advancing over Ethernet (1)

Anonymous Coward | about 2 years ago | (#41063161)

I have been waiting 5+ years for 10 gigabit Ethernet over copper to fall in price like 100 Mbps Fast Ethernet and 1 gigabit Ethernet did, but it hasn't happened. InfiniBand adoption has grown rapidly in the last few years; 12x FDR InfiniBand promises ~160 gigabit speeds, comparable to PCI Express 3.0 x16. Maybe the IEEE should come up with cheap 2.5 gigabit Ethernet, and give up on higher-speed copper networking.

As for long-distance optical, I want them to cram as many lasers into a single-mode fiber as economically feasible (a couple hundred?), and use whatever speed that may be. Don't futz around, IEEE; go all the way and give us those terabits.
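The "couple hundred lasers" idea scales in a straightforward way. A quick wavelength-division-multiplexing estimate (the channel count and per-wavelength rate below are assumed round numbers, not any deployed system's specs):

```python
# Rough WDM capacity: wavelengths x per-wavelength rate.
# Both figures are illustrative assumptions.
WAVELENGTHS = 160          # assumed dense-WDM channel count
PER_LAMBDA_GBPS = 100      # assumed per-wavelength line rate

total_tbps = WAVELENGTHS * PER_LAMBDA_GBPS / 1000
print(total_tbps)          # 16.0 Tbps over a single fiber
```

At those assumptions a single fiber already carries 16 Tbps in aggregate, which is why the long-haul side is less worried about the terabit era than the switch-port side is.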
