



ARPANET Co-Founder Calls for Flow Management

False Data research on root causes of congestion? (163 comments)

I wonder if the routing algorithms themselves aren't contributing to the problem.

BGP and the intra-domain routing protocols assume there is at most one correct route from a given source address to a given destination address. That assumption can give rise to unnecessary congestion. For example, suppose the source wants to use 100 units of bandwidth and the destination is capable of keeping up, but between them sit two routers in parallel, each of which can supply only 50 units. If there's exactly one path, source and destination can't talk any faster than 50 units, because everything has to go through one of the two routers. (There are mechanisms to share bandwidth in some situations, like the simple parallel-routers case I described, but they don't work for arbitrarily complex routing topologies spanning multiple BGP domains.)

It's possible, though, to imagine a network that routes in such a way that data could use both routers. For instance, in circuit-switched networks the pre-established path tends to hang around even as the current "best" route changes, so in the earlier example two 50-unit connections between source and destination might end up spread across both routers.
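To make the gap concrete: the toy topology above (source, destination, two parallel 50-unit routers) can be checked with a small max-flow computation. This is just an illustrative sketch I wrote, not anything from the original post; the node names and capacities are made up to match the example, and the algorithm is plain Edmonds-Karp. A single path tops out at one router's 50 units, while a scheme free to use both routers reaches 100.

```python
from collections import deque

# Hypothetical toy topology from the comment: source S, destination D,
# and two parallel routers A and B, each able to supply 50 units.
capacity = {
    ("S", "A"): 50, ("A", "D"): 50,
    ("S", "B"): 50, ("B", "D"): 50,
}

def max_flow(cap, src, dst):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
    # Build a residual graph: forward capacities plus zero-capacity reverse edges.
    residual = {}
    for (u, v), c in cap.items():
        residual.setdefault(u, {})[v] = c
        residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path that still has spare capacity.
        parent = {src: None}
        queue = deque([src])
        while queue and dst not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if dst not in parent:
            return flow  # no augmenting path left
        # Walk back from dst to find the bottleneck, then push that much flow.
        path, v = [], dst
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Any single S->router->D path is limited to 50 units,
# but using both routers at once doubles the throughput:
print(max_flow(capacity, "S", "D"))  # 100
```

Each augmenting path here corresponds to one 50-unit "circuit" in the circuit-switched analogy: the first path claims router A, the second claims router B, and together they carry the full 100 units that no single-path route could.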

Rather than taking the congestion as a given and figuring out work-arounds, I wonder if someone's done research into why it exists and whether it's due to hot spots forming in the traffic flow.

more than 6 years ago


False Data hasn't submitted any stories.


False Data has no journal entries.
