MIT May Have Just Solved All Your Data Center Network Lag Issues

Unknown Lamer posted about 2 months ago | from the hierarchy-beats-anarchy dept.

Networking

alphadogg (971356) writes: A group of MIT researchers say they've invented a new technology that should all but eliminate queue length in data center networking. The technology will be fully described in a paper presented at the annual conference of the ACM Special Interest Group on Data Communication. According to MIT, the paper will detail a system — dubbed Fastpass — that uses a centralized arbiter to analyze network traffic holistically and make routing decisions based on that analysis, in contrast to the more decentralized protocols common today. Experimentation done in Facebook data centers shows that a Fastpass arbiter with just eight cores can be used to manage a network transmitting 2.2 terabits of data per second, according to the researchers.
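As a rough illustration of what such a centralized arbiter does (a toy Python sketch, not the authors' code; every name below is made up): in each timeslot it can admit a set of (source, destination) demands such that no source sends more than one packet and no destination receives more than one, deferring everything else to later slots.

    from collections import deque

    def allocate_timeslot(pending):
        """One arbiter decision round: admit (src, dst) demands so that each
        source transmits at most one packet and each destination receives at
        most one packet in this timeslot.  Everything else waits."""
        busy_src, busy_dst, admitted = set(), set(), []
        for demand in list(pending):
            src, dst = demand
            if src not in busy_src and dst not in busy_dst:
                admitted.append(demand)
                busy_src.add(src)
                busy_dst.add(dst)
                pending.remove(demand)
        return admitted

    # Host A has two packets for B, host C has one packet for B.
    pending = deque([("A", "B"), ("A", "B"), ("C", "B")])
    print(allocate_timeslot(pending))   # [('A', 'B')]  -- B can only receive one per slot
    print(allocate_timeslot(pending))   # [('A', 'B')]
    print(allocate_timeslot(pending))   # [('C', 'B')]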


Time travel (0)

Anonymous Coward | about 2 months ago | (#47478453)

Derp. No lag. "solved all your data center network lag issues". "invented a new technology that should all but eliminate queue length in data center networking".

Happy Friday from the Golden Girls.

Re: Time travel (0)

Anonymous Coward | about 2 months ago | (#47478511)

Thank you for being a friend.

Re: Time travel (-1)

Anonymous Coward | about 2 months ago | (#47478611)

Thank you for being a friend.

Thank you for modding me down every time I say nigger. How would the world carry on if even one person ever got offended? Do your duty now. Serve the artificially inoffensive poltiical correctness. Yes, that's a good citizen.

Re:Time travel (1)

philip.paradis (2580427) | about 2 months ago | (#47481249)

You're a friend and a cosmonaut.

Token Ring is dead. (-1)

Anonymous Coward | about 2 months ago | (#47478465)

Netcraft confirms it. In theory this should be faster, just like Token Ring should have been faster than Ethernet, but due to uncontrollable factors Token Ring failed to show any increase in speed over Ethernet while having a bunch of issues that Ethernet just didn't have.

Re:Token Ring is dead. (1)

JSG (82708) | about 2 months ago | (#47478539)

Nearly any network tech should be faster than Ethernet in certain circumstances. Ethernet is generally good though and appears to be quite good at scaling.

I remember the good old days and the joys of beaconing 8)

Re:Token Ring is dead. (2)

David_Hart (1184661) | about 2 months ago | (#47478885)

Nearly any network tech should be faster than Ethernet in certain circumstances. Ethernet is generally good though and appears to be quite good at scaling.

The key word, there, is scaling.

It looks like this is meant to make the network more efficient within a data center that handles a high volume of traffic, including high traffic spikes. The endpoint (i.e. software running on a UNIX server) requests a network time slot, and the arbiter sends back a response that schedules the packets to arrive just-in-time along a specific path, avoiding queuing.

However, there is a less complicated way of achieving the same goal: scalability. Increase your switch and server uplink bandwidth to eliminate congestion and queuing.

Yes, it costs money to add network capacity. But the big question is which would cost more: adding capacity, or installing a pair of servers, rolling out software clients to all of your endpoints (servers), and supporting the system? Personally, I'd rather add network capacity and be done...
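For reference, the request/response exchange described two paragraphs up boils down to something like the sketch below. This is purely hypothetical plumbing -- the message and field names are invented, not taken from the paper -- but it shows the shape of the idea: the endpoint reports what it has to send, the arbiter hands back timeslots plus a path, and the endpoint releases packets just in time.

    import time
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SlotRequest:          # endpoint -> arbiter: "I have this much to send"
        src: str
        dst: str
        bytes_pending: int

    @dataclass
    class SlotGrant:            # arbiter -> endpoint: "send at this time, on this path"
        slot_start: float       # absolute time at which the slot opens
        path: List[str]         # switches the packet should traverse

    def send_on_path(path: List[str]) -> None:
        # Stub: a real endpoint would hand the packet to the NIC with the
        # chosen path encoded (e.g. as a source route or label stack).
        print("sending via " + " -> ".join(path))

    def transmit(grants: List[SlotGrant]) -> None:
        # Release each packet exactly at its granted slot, so it arrives
        # "just in time" and never sits in an in-network queue.
        for g in sorted(grants, key=lambda g: g.slot_start):
            time.sleep(max(0.0, g.slot_start - time.time()))
            send_on_path(g.path)

    request = SlotRequest("host-12", "host-40", bytes_pending=9000)   # what the endpoint would report
    transmit([SlotGrant(time.time() + 0.001, ["tor-1", "spine-3", "tor-7"])])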

Re:Token Ring is dead. (1)

aaronb1138 (2035478) | about 2 months ago | (#47489489)

Even if it is just a data center technology, a key placement might be SAN switching. Currently, much of "the cloud", or rather server-cluster-based computing, suffers heavily from latency and from never having enough storage bandwidth.

Re:Token Ring is dead. (1)

bugs2squash (1132591) | about 2 months ago | (#47480169)

Barely any link layer is shared today, though; hubs are dead and gone (or at least hidden in cupboards by people who think they need them for packet sniffing). MIT's solution seems eerily like a Chris Christie plan to eliminate congestion on the NJ turnpike.

Re:Token Ring is dead. (0)

Anonymous Coward | about 2 months ago | (#47481195)

Nearly any network tech should be faster than Ethernet in certain circumstances.

We have 10GbE in the data center. We also have a 376 lb, 38-year-old network tech named Bob in the data center. Maybe if I had a dozen hot Krispy Kremes on the far side of the room, Bob might just beat the packets. But I doubt it.

Great (-1)

Anonymous Coward | about 2 months ago | (#47478503)

They should just put this on the internet and we can have a centralized arbiter directing all the traffic.

They re-invented static scheduling (-1)

Anonymous Coward | about 2 months ago | (#47478507)

Ok, so these guys reinvented static scheduling. With all its advantages (few resources, no buffers, etc.) and disadvantages (can you predict your network traffic?). You have to come from MIT or Berkeley to present this as "new results".

Re:They re-invented static scheduling (0)

Anonymous Coward | about 2 months ago | (#47478595)

Is your comment some sort of thinly-veiled slur against the BSD and MIT licenses? Is it some sort of pro-GPL huzzah?

Why are you trying to start a software licensing flame war here, when we're talking about networking technologies?

Re:They re-invented static scheduling (0)

Anonymous Coward | about 2 months ago | (#47478757)

This hook doesn't even have any bait.

Re:They re-invented static scheduling (2)

mark-t (151149) | about 2 months ago | (#47478783)

Where in the comment did you read anything that suggested it would be about licensing? Or were you unaware that Berkeley and MIT are actual, real-world institutions, and that it's possible to use those names without necessarily referring to the corresponding open source licenses?

Re:They re-invented static scheduling (0)

Anonymous Coward | about 2 months ago | (#47479297)

No he's saying that developers from Berkeley and MIT have used a lot of drugs. A whole awful lot. Like the drugs you should be on.

Re:They re-invented static scheduling (1)

sonamchauhan (587356) | about 2 months ago | (#47487547)

No, I don't think so. RMS worked at MIT for over a decade.

Re:They re-invented static scheduling (4, Informative)

postbigbang (761081) | about 2 months ago | (#47479521)

Nah. They put MPLS logic -- deterministic routing by knowing the domain -- into an algorithm that optimizes time slots, too.

All the hosts are known, as are their time costs and how much crap they jam into the wires. It's pretty simple to typify what's going on and where the packet parking lots are. If you have sufficient paths and bandwidth in and among the hosts, you resolve the bottlenecks.

This only works, however, if and when the domain of hosts has sufficient aggregate resources in terms of path availability among the hosts. Otherwise, it's the classic crossbar problem looking for a spot marked "oops, my algorithm falls apart when all paths are occupied."

Certainly it's nice to optimize, and there's plenty of room for algorithms that know how to sieve the traffic. But traffic is random and pathways are limited. Defying the laws of physics will be difficult unless you control congestion in aggregate from the applications, where you can make the application become predictable. Only then, or if you have a crossbar matrix, will there be no congestion. For any questions on this, look to the Van Jacobson algorithms and what the telcos had to figure out, eons ago.

Re:They re-invented static scheduling (0)

Anonymous Coward | about 2 months ago | (#47480145)

Exactly. Toss in the added bonus of ignoring prior work because the problem is "new," compare it only to a sub-par baseline and voila, revolutionary paper.

Papers like this are exactly why I usually shake my head at SIGCOMM.

scalability? (1, Insightful)

p25r1 (3593919) | about 2 months ago | (#47478509)

Good idea; however, its main problem is that it only scales up to a couple of racks, and to scale to anything larger it will probably have to sacrifice the zero-queue design principle that it argues for...

Re:scalability? (3, Insightful)

Anonymous Coward | about 2 months ago | (#47478695)

FTA: “This paper is not intended to show that you can build this in the world’s largest data centers today,” said Balakrishnan. “But the question as to whether a more scalable centralized system can be built, we think the answer is yes.”

Yawn (0)

JSG (82708) | about 2 months ago | (#47478515)

Good grief: they appear to have invented a scheduler of some sort. I read the rather thin Network World article and that reveals little.

Nothing to see here - move on!

Re:Yawn (3, Informative)

Anonymous Coward | about 2 months ago | (#47478619)

A link to the paper is in the first article link. Direct link here [mit.edu]. They also have a Git repo to clone, if you're interested.

That's nice. (-1)

Anonymous Coward | about 2 months ago | (#47478517)

Now do it again, but in a distributed fashion.

So just have multiple arbiters. (0)

Anonymous Coward | about 2 months ago | (#47478569)

If you're going to be a smug windbag, at least think things through!

Instead of using just one arbiter, use a small number of them, connect them, and let them interact with one another to make the routing decisions. Now it's "distributed".

But what you need to realize is that there really is no such thing as "distributed networking". There is just networking. The entire system is the network. Or as my chums back at Sun in the good old days liked to yell out of car windows while driving through San Fran, "The Network Is The Computer".

Hooray (0)

BlackHawk-666 (560896) | about 2 months ago | (#47478535)

Now I can see pictures of other people's food and children so much more quickly... can't wait... >.>

Re:Hooray (1)

jimmifett (2434568) | about 2 months ago | (#47478577)

You forgot about the pr0n and cats. I will say, faster pics of cats probably has some merit.

Re:Hooray (1)

6Yankee (597075) | about 2 months ago | (#47481041)

Optimise. Only friend people who eat their children.

rfc1925.11 proves true, yet again (1, Interesting)

mysidia (191772) | about 2 months ago | (#47478583)

Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.

Case in point: ATM To the Desktop.

In a modern datacenter "2.2 terabits" is not impressive. 300 10-gigabit ports (or about 50 servers) is 3 terabits. And there is no reason to believe you can just add more cores and continue to scale the bitrate linearly. Furthermore, how will Fastpass perform during attempted DoS attacks or other stormy conditions with small packets, which are particularly stressful for any centralized controller?

Furthermore, "zero queuing" does not solve any real problem facing datacenter networks. If limited bandwidth is a problem, the solution is to add more bandwidth -- shorter queues do not eliminate bandwidth bottlenecks in the network; you can't schedule your way into using more capacity than a link supports.
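A trivial illustration of that last point (toy numbers, nothing from the article): if the offered load exceeds the link rate, the backlog grows by the difference no matter how cleverly the packets are ordered.

    def backlog_after(seconds, offered_gbps, link_gbps):
        """Backlog in gigabits after `seconds` of constant overload,
        independent of how the packets are scheduled."""
        return max(0.0, (offered_gbps - link_gbps) * seconds)

    # 12 Gbit/s offered to a 10 Gbit/s link: 2 gigabits of backlog per second.
    print(backlog_after(10, offered_gbps=12, link_gbps=10))   # 20.0 gigabits queued
    print(backlog_after(10, offered_gbps=8, link_gbps=10))    # 0.0 -- no bottleneck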

Re:rfc1925.11 proves true, yet again (5, Interesting)

Archangel Michael (180766) | about 2 months ago | (#47478693)

Your 300 x 10GB ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60GB off a single server, and you're running those six 10GB connections per server to try to eliminate other issues you have, without understanding them. Your speed issues are elsewhere (likely SAN or database, or both), and not in the 50 servers. In fact, you might be exasperating the problem.

BTW, our data center core is running twin 40GB connections for 80GB of total network load, but we're not really seeing anything use 10GB off a single node yet, except the SAN. Our Metro Area Network links are being upgraded to 10GB as we speak. "The network is slow" is not really an option.

Re:rfc1925.11 proves true, yet again (5, Funny)

chuckugly (2030942) | about 2 months ago | (#47478859)

In fact, you might be exasperating the problem.

I hate it when my problems get angry, it usually just exacerbates things.

Re:rfc1925.11 proves true, yet again (1)

mysidia (191772) | about 2 months ago | (#47480959)

I hate it when my problems get angry, it usually just exacerbates things.

I hear most problems can be kept reasonably happy by properly acknowledging their existence and discussing potential resolutions.

Problems tend to be more likely to get frustrated when you ignore them, and anger comes mostly when you attribute their accomplishments to other problems.

Re:rfc1925.11 proves true, yet again (0)

BitZtream (692029) | about 2 months ago | (#47479441)

Your 300 x 10GB ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60GB off a single server, and you're running those six 10GB connections per server to try to eliminate other issues you have, without understanding them.

You haven't worked with large scale virtualization much, have you?

Re:rfc1925.11 proves true, yet again (0)

Anonymous Coward | about 2 months ago | (#47479727)

No, not many people have.

Re:rfc1925.11 proves true, yet again (1)

mysidia (191772) | about 2 months ago | (#47481577)

You haven't worked with large scale virtualization much, have you?

In all fairness, I am not at full-scale virtualization yet either; my experience is with pods of 15 production servers with 64 CPU cores, ~500 GB of RAM, and 4 10-gig ports per physical server, half of those for redundancy, with bandwidth utilization controlled to remain below 50%. I would consider the need for more 10-gig ports, or a move to 40-gig ports, if density were increased by a factor of 3 -- which is probable in a few years, as servers will be shipping with 2 to 4 terabytes of RAM and running 200 large VMs per host before too long.

It is thus unreasonable to pretend that large-scale virtualization doesn't exist, or that organizations are going to be able, in the long run, to justify not having large-scale virtualization or not moving to a cloud solution that is ultimately hosted on large-scale virtualization.

The efficiencies that can be gained from an SDD strategy versus sparse deployment on physical servers are simply too large for management/shareholders to ignore.

However: the network must be capable of delivering 100%.

We're perfectly content to overallocate CPU, memory, storage, and even network port bandwidth at the server edge. The network at a fundamental layer, however, has to be able to deliver 100% of what is there --- just like the SAN needs to deliver, within a degree of magnitude, the latency/IOPS and volume space the vendor quoted as its capacity. We will intentionally choose to assign more storage than we actually have, but that is an informed choice. The risks become unacceptable if the lower-level core resources can't make some absolute promises about what exists, and the controller architecture forces us to make an uninformed choice, or to guess at what our own network will be able to handle when it is affected by loads created by completely unrelated networks or VLANs outside our control, e.g. another tenant of the datacenter.

This is why a central control system for the network is suddenly problematic. The central controller removes a fundamental capability of the network: to be heavily subscribed, fault-isolated within a physical infrastructure (through Layer 2 separation), and, if designed appropriately, able to tolerate and minimize the impact of failures.

Re:rfc1925.11 proves true, yet again (2)

mysidia (191772) | about 2 months ago | (#47479649)

Your 300 x 10GB ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60GB off a single server,

It's not so hard to get 50 gigabits off a heavily consolidated server under normal conditions; throw some storage intensive workloads at it, perhaps some MongoDB instances and a whole variety of highly-demanded odds and ends, .....

If you ever saturate any of the links on the server, then it's kind of an error: in critical application network design, a core link being saturated for 15 seconds due to some internal demand burst that was not appropriately designed for is potentially a "you get fired or placed on the s***** list immediately after the post-mortem" kind of mistake. Leaf-and-spine fabrics that are unsaturatable, except at the edge ports, are definitely a great strategy for sizing core infrastructure --- from there, most internal bandwidth risk can be alleviated by shifting workloads around.

Latency performance becomes seriously unstable at ~60% or higher utilization, so especially for latency-sensitive applications it would be a major mistake to provision only enough capacity to avoid saturation, when micro-bursts in bandwidth usage are the reality for real-world workloads. An internal link with peak usage of 40% or higher should be considered in need of relief, and a link utilized 50% or higher should be considered seriously congested.
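Those thresholds are rules of thumb, but they line up with basic queueing theory. As an illustration only -- an M/M/1 queue is a much tamer model than real, bursty data-center traffic -- mean delay grows as 1/(1 - utilization) and takes off well before a link is "full":

    def mm1_mean_delay(utilization, service_time_us=1.0):
        """Mean time in system for an M/M/1 queue, in microseconds:
        T = service_time / (1 - utilization)."""
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return service_time_us / (1.0 - utilization)

    for rho in (0.2, 0.4, 0.6, 0.8, 0.95):
        print(f"utilization {rho:.2f}: mean delay {mm1_mean_delay(rho):5.1f} us")
    # 0.20 -> 1.2 us, 0.40 -> 1.7 us, 0.60 -> 2.5 us, 0.80 -> 5.0 us, 0.95 -> 20.0 us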

Re:rfc1925.11 proves true, yet again (1)

Archangel Michael (180766) | about 2 months ago | (#47500283)

While it is possible to fill your Data pathways up. Aggregate data is not the same as Edge Server data. In the case described above, s/he is running 300 x 10GB on 50 Servers. Okay, lets assume those are 50 Blades, maxed out on RAM and whatnot. The Only way to fill that bandwidth is to do RAM to RAM copying, and then you'll start running into issues along the pipelines in the actual Physical Server.

To be honest, I've see this, but only when migrating VMs off host for host Maintenance, or a boot Storm on our VDI.

Re:rfc1925.11 proves true, yet again (1)

mysidia (191772) | about 2 months ago | (#47502483)

To be honest, I've see this, but only when migrating VMs off host for host Maintenance, or a boot Storm on our VDI.

Maintenance mode migrations are pretty common; especially when rolling out security updates. Ever place two hosts in maintenance mode simultaneously and have a few backup jobs kick off during the process?

Re:rfc1925.11 proves true, yet again (5, Informative)

Anonymous Coward | about 2 months ago | (#47479403)

This is about zero in-plane queuing, not zero queuing. There is still a queue on each host; the advantage of this approach is obvious to anyone with knowledge of network theory (i.e. not you). Once a packet enters an Ethernet forwarding domain, there is very little you can do to re-order or cancel it. If you instead only send from a host when there is an uncongested path through the forwarding domain, you can reorder packets before they are sent, which allows you, for example, to insert high-priority packets at the front of the queue and bucket low-priority traffic until there is a lull in the network.
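A toy sketch of that host-side reordering (my own illustration, not Fastpass code): because the queue lives at the sender, a high-priority packet can jump ahead of buffered low-priority traffic before anything reaches the wire.

    import heapq
    import itertools

    class EdgeQueue:
        """Per-host send queue: lower priority number = sent sooner;
        ties are broken in FIFO order."""
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()

        def enqueue(self, packet, priority):
            heapq.heappush(self._heap, (priority, next(self._seq), packet))

        def dequeue_for_uncongested_path(self):
            """Called only when a clear path through the fabric exists."""
            return heapq.heappop(self._heap)[2] if self._heap else None

    q = EdgeQueue()
    q.enqueue("bulk backup chunk", priority=5)
    q.enqueue("memcached reply", priority=0)    # jumps ahead of the backup
    print(q.dequeue_for_uncongested_path())     # memcached reply
    print(q.dequeue_for_uncongested_path())     # bulk backup chunk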

Bandwidth is always limited at the high end. Technology and cost always limit the peak throughput of a fully cross-connected forwarding domain. That's why the entire internet isn't a 2-billion-way crossbar switch.

Furthermore, you can't install 6x 10-gigabit ports in a typical server; they just don't have that much PCIe bandwidth. You might also want to look at how much a 300-port non-blocking 10GigE switch really costs, multiply that by 1000 to see how much it would cost Facebook to build a 300k-node DC with those, and start to appreciate why they are looking at software approaches to optimise the bandwidth and latency of their networks with resources that are cost-effective, considering that their network loads, like everyone else's network loads, never look like the theoretical worst case of every node transmitting continuously to random other nodes.

Real network loads have shapes, and if you are able to understand those shapes, you can make considerable cost savings. It's called engineering, specifically traffic engineering.

-puddingpimp

Re:rfc1925.11 proves true, yet again (1)

Blaskowicz (634489) | about 2 months ago | (#47480099)

You can get consumer hardware with 40 PCIe 3.0 lanes that run right into the CPU; wouldn't that be enough PCIe bandwidth?
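Rough arithmetic (mine, not from the thread): PCIe 3.0 carries roughly 0.985 GB/s per lane after 128b/130b encoding, so 40 lanes offer far more aggregate bandwidth than six 10GbE ports need -- although whether a given board exposes those lanes as slots wide enough for the NICs is a separate question.

    PCIE3_GBITS_PER_LANE = 0.985 * 8       # ~0.985 GB/s per lane, converted to gigabits/s
    lanes = 40
    nic_ports, port_gbps = 6, 10

    pcie_capacity_gbps = PCIE3_GBITS_PER_LANE * lanes
    nic_demand_gbps = nic_ports * port_gbps

    print(f"PCIe 3.0 x{lanes}: ~{pcie_capacity_gbps:.0f} Gbit/s")   # ~315 Gbit/s
    print(f"6 x 10GbE ports: {nic_demand_gbps} Gbit/s")             # 60 Gbit/s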

The wonders of central planning... (0)

mi (197448) | about 2 months ago | (#47478591)

centralized arbiter to analyze network traffic holistically and make routing decisions based on that analysis, in contrast to the more decentralized protocols common today

Central planning works rather poorly for humans [economicshelp.org]. Maybe it will be better for computers, but I remain skeptical.

Oh, and the term "holistically" does not help either.

Re:The wonders of central planning... (0)

Anonymous Coward | about 2 months ago | (#47478637)

So your computer does not have a CPU (The C stands for central *wink* *wink*) in it? Because having a centralized processing unit just doesn't work, right?

Re:The wonders of central planning... (0)

Anonymous Coward | about 2 months ago | (#47478781)

It's just central processing unit, and it is not the individual (computer) that matters, but the collective.

Re:The wonders of central planning... (0)

Anonymous Coward | about 2 months ago | (#47479161)

> Central planning works rather poorly for humans [economicshelp.org].
> Maybe, it will be better for computers, but I remain skeptical.

There are certainly not enough details, but it sounds like yet another proposal to move the smarts away from the edges and build it into the network infrastructure itself. Basically to undo what makes the internet the internet.

Re:The wonders of central planning... (1)

tepples (727027) | about 2 months ago | (#47479845)

Then perhaps what is needed inside a data center isn't the Internet but instead a smart network that happens to connect to the Internet.

Re:The wonders of central planning... (0)

Anonymous Coward | about 2 months ago | (#47480673)

EVERY computer uses "central planning" to perform multiprocessing: a single kernel arbitrarily dictates which processes get to run at what times! Or, one scale up, a single head node dictates what jobs run on the cluster.

Not surprisingly, a centrally planned system will generally make better decisions if it is close enough to omniscient within its problem space. Distributed and agent-based systems work better when an omniscient planner isn't available (aka human economics). If you handed a traffic-routing supercomputer the complete state of all the lights in a city and the position/velocity vectors of every car, it would certainly outperform the existing decentralized traffic control systems... but that's not practical, so it's every intersection for itself (outside of green-wave setups).

Ok (1)

Anonymous Coward | about 2 months ago | (#47478615)

Ok, but the most important question is: did they implement it in Javascript, Go, Rust, Ruby or some other hipster, flavor-of-the-month-language?

Re:Ok (0)

Anonymous Coward | about 2 months ago | (#47478765)

Looooool! Amen brother, Amen! Gawd, I just had to sit through a hipster give a presentation about Rust at work a couple of days ago. We're a C++ shop, and he claimed that Rust could replace C++.

It was hilarious to see some of the neckbeards tear this poor young [and wimpy!] man seven or eight new arseholes. The neckbeards would point out that every benefit he listed for Rust could already be done in C++. Then they laughed at him when he said it was version 0.10 or something stupid like that. Then they asked him about how many compilers are available, and he said just the one. They ridiculed him for that! He couldn't even justify the total lack of IDEs and other tools.

Lol the best part though was when he went to the Rust IRC channel to show us how 'great' the community is when he wasn't sure about the answer to a question a neckbeard had asked him. There were only a few people in the channel and when we got there they were talking about how they were still in high school and how they couldn't afford to go to college! Lol! Then they couldn't even answer the question. It was so hilarious! I don't think I've ever seen a presentation proposing the use of a new programming language go so wrong!

Lol by the end of this fool's presentation I think even he was starting to think that Rust was just more hype with no substance. All I got from the presentation is that the hipster who gave the presentation is kind of a dumbass, Rust is a pathetically immature language, and its community is mainly made up of teenagers. It's totally not what we need in our C++ shop and I know the neckbeards will stomp it out if it ever comes near. Lol this guy's presentation gave it a reputation that's worse than Ruby's!

Re:Ok (0)

Anonymous Coward | about 2 months ago | (#47479633)

lol

Re:Ok (1)

philip.paradis (2580427) | about 2 months ago | (#47481317)

Does your shop have a relatively narrow development scope? Over the course of my career, I've found that single language shops are either fairly tightly tied to a small set of problem domains, or they're full of people who see every problem as a nail so to speak. The latter condition is an unfortunate state of inflexibility that tends to extend into other areas, including higher level systems work and network architecture. I'm not saying your organization suffers from that affliction, but I would like to understand a bit more about the sort of development your team does. For the record, I'm a big fan of mature systems in general, and for most of my work various combinations of Perl, Bash, C, and Python get the job done (usually in that order).

Re:Ok (0)

Anonymous Coward | about 2 months ago | (#47486957)

It's probably not a problem with his shop. Everybody doing anything important uses C++. That's just because C++ is the best we have today. If you're not using C++, you're probably not working on something that's very important.

Re:Ok (1)

philip.paradis (2580427) | about 2 months ago | (#47487563)

How shall we define importance? In terms of scope, are we talking about kernel space, userland code that humans directly interact with, systems/infrastructure code, data processing systems, or something else entirely?

Great! Another single point-of-failure... (2, Insightful)

gweihir (88907) | about 2 months ago | (#47478625)

This is a really bad idea. No need to elaborate further.

OpenDayLight (1)

Thinman (59679) | about 2 months ago | (#47478673)

How different is this to http://www.opendaylight.org/ [opendaylight.org] ?

Ugh... (0)

Jeremy Gillespie (3664681) | about 2 months ago | (#47478687)

Call me when its faster than MPLS. Routing decisions aren't slowing down the world...

Re:Ugh... (1)

arth1 (260657) | about 2 months ago | (#47480179)

They may slow down the world if this gets hyped to the point that it sells.
The problem is TANSTAAFL. This is Yet Another Implementation that seeks to reduce the average latency, without regard for the fact that what really hurts is the worst-case latency bottleneck. This, like many other approaches before it, will worsen the worst case in order to buy the average case lunch.
You either have to come up with a solution that reduces the worst case, which is what really hurts, or make Pareto improvements, i.e. changes that hurt no one, even in corner cases.

This is not it. And yes, I have looked at it.
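A synthetic illustration of the average-versus-worst-case point (the numbers are invented purely for the example): a change can lower mean latency while making the tail, which is what actually hurts, noticeably worse.

    import statistics

    def p99(samples):
        """99th-percentile latency by simple index into the sorted samples."""
        s = sorted(samples)
        return s[int(0.99 * (len(s) - 1))]

    # Invented latency samples (microseconds), 1000 requests each.
    before = [10] * 980 + [50] * 20      # modest tail
    after  = [5] * 980 + [250] * 20      # lower mean, much worse tail

    for name, s in (("before", before), ("after", after)):
        print(name, "mean:", statistics.mean(s), "p99:", p99(s))
    # before mean: 10.8 p99: 50
    # after  mean: 9.9  p99: 250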

Nginx? (1)

CrashNBrn (1143981) | about 2 months ago | (#47478711)

I thought Nginx was created by Igor Sysoev?

Re:Nginx? (0)

Anonymous Coward | about 2 months ago | (#47478855)

Nginx is only "faster" than Apache because it doesn't fucking do anything. Software that does pretty much nothing, and instead just delegates any real work to other software, always seems fast. But that's not because it truly is fast. Rather, it's because it isn't fucking doing anything!

Re:Nginx? (0)

Anonymous Coward | about 2 months ago | (#47479435)

I don't give a shit if Apache is doing more, it's not doing anything useful. I'll take my nginx doing only what's required: serving static pages and directing dynamic requests to the nodes that actually run the application, over Apache doing who knows what but I don't care because it's not making my sites any faster.

Re:Nginx? (1)

BitZtream (692029) | about 2 months ago | (#47479451)

Actually, it is fast because it does nothing, and that's also the point.

You don't use Apache to serve boat loads of static files, you use Nginx.

You don't use Nginx to serve ASP.NET or Java EE apps.

This is why we pay too much for internet (-1)

Anonymous Coward | about 2 months ago | (#47478771)

The ISPs don't have enough incentive to develop their technology, so they use old tech in a modern world.

This sounds familiar... (2)

certain death (947081) | about 2 months ago | (#47478793)

Maybe because that is what Token Ring did! Just sayin'!

Re:This sounds familiar... (0)

Anonymous Coward | about 2 months ago | (#47479569)

So... One Ring to rule them all?

Re:This sounds familiar... (0)

Anonymous Coward | about 2 months ago | (#47484019)

Why not more than one ring? Go with an x-dimensional torus network!

I hope Disney aren't the type to sue (0)

Anonymous Coward | about 2 months ago | (#47478899)

Fastpass is their trademark (since 1999), after all.

http://en.wikipedia.org/wiki/Disney%27s_Fastpass

Re:I hope Disney aren't the type to sue (0)

Anonymous Coward | about 2 months ago | (#47478919)

But Fastpass &mdash; isn't.

I for one (1)

fisted (2295862) | about 2 months ago | (#47478907)

I for one welcome all but our new Fastpass &mdash; static scheduling overlords.

Re:I for one (1)

X0563511 (793323) | about 2 months ago | (#47479307)

Careful, the next one might have "smart" quotes!

Re: I for one (1)

bill_mcgonigle (4333) | about 2 months ago | (#47480065)

"Editors"

Good news for ... (1)

CaptainDork (3678879) | about 2 months ago | (#47478933)

... my Candy Crush Saga.

Net Neutrality (1)

Lead Butthead (321013) | about 2 months ago | (#47479123)

And big network service providers will implement it to the detriment of their revenue (think Comcast and Netflix). Riiiiiight.

So, if it allows less restricted dataflow... (2, Funny)

jeffb (2.718) (1189693) | about 2 months ago | (#47479273)

...are they trying to say that "Arbiter macht frei"?

Re:So, if it allows less restricted dataflow... (1)

larpon (974081) | about 2 months ago | (#47481007)

Arnbitter [ytimg.com] macht frei.

Re:So, if it allows less restricted dataflow... (0)

Anonymous Coward | about 2 months ago | (#47481033)

-1 Insensitive Clod

New Single Point of Failure Added (0)

Anonymous Coward | about 2 months ago | (#47479647)

And it's an expensive single point of failure, which no one knows how to program and has never been tested under badly configured local networks.

This is about as stunning as the MIT-built network router that thoroughly optimized BGP maps, unless there was a routing loop, in which case it would hard-crash the router and require a power cycle. See, adding the checks to verify there were no routing loops actually made it considerably slower than the existing technologies.

Traffic lights... (0)

Anonymous Coward | about 2 months ago | (#47479839)

Meanwhile, some traffic lights in my town are green for a whopping 12 seconds on a backed-up-to-hell road before they change for a minute to let 5 cars meander past on the perpendicular route.

Please read carefully, this article is a carefully (1)

Anonymous Coward | about 2 months ago | (#47480433)

This paper shows no tangible benefit other than a slight decrease in TCP retransmits, and the authors never test whether that translates into any real benefit.

Crucially, this system is not "zero queue". It simply moves queuing to the edge of the network and into the arbiter. Notice that there is no evaluation of the total round-trip delay in the system. The dirty secret is that it's no better, especially as the load increases; since the amount of work the arbiter must do grows exponentially with both the size and utilisation of the network, this is guaranteed to have scalability issues. Finally, the evaluation at "Facebook" used only one rack and never tested the path selection, so in reality this paper has shown nothing and demonstrated no benefit.

Slow or am i missing something? (0)

Anonymous Coward | about 2 months ago | (#47480583)

42 terabit/s on one core

https://translate.google.com/translate?hl=da&sl=da&tl=en&u=http%3A%2F%2Fing.dk%2Fartikel%2Fdtu-slaar-verdensrekord-datatransmission-169478

Two years ago, a petabit/s on 12 cores:
https://translate.google.com/translate?hl=da&sl=da&tl=en&u=http%3A%2F%2Fing.dk%2Fartikel%2Fdtu-professor-slar-rekord-sender-en-petabit-i-sekundet-gennem-en-optisk-fiber-132652

Some back of the envelope calculations (1)

MerlynEmrys67 (583469) | about 2 months ago | (#47480603)

2.2 Tbit/sec is just under 40 ports, which is just over 2 switches...
It will only take one extra management processor (8 cores) to manage two switches... Get back to me when you can drive 100 Tbit/sec with one core.
PS - Is there extra compute needed on the management plane of the edge switches here? I don't think so, but it is hard to tell.

This should also make monitoring simpler (NSA) (0)

Anonymous Coward | about 2 months ago | (#47481711)

Which is getting harder with decentralized network fabrics. Gotta snoop those conversations.

Too bad ISPs don't use this (0)

Anonymous Coward | about 2 months ago | (#47483575)

In addition to eliminating the queue latency, it could enforce fairness among customers, which would make the Netflix/Comcast who-should-pay-for-what debate moot.

Good for customers and Net Neutrality.
Bad for double charging strategies.

In the current 'competitive' environment, I won't hold my breath.

SDN (0)

Anonymous Coward | about 2 months ago | (#47483975)

it is

Centralization sucks (1)

YoungManKlaus (2773165) | about 2 months ago | (#47489813)

If that component goes down in flames, you are screwed!
