
Too Many Connections Weaken Networks

timothy posted more than 2 years ago | from the you-are-the-weakest-link-goodbye dept.

Math

itwbennett writes "Conventional wisdom holds that more connections make networks more resilient, but a team of mathematicians at UC Davis has found that this is only true up to a point. The team built a model to determine the ideal number of cross-network connections. 'There are some benefits to opening connections to another network. When your network is under stress, the neighboring network can help you out. But in some cases, the neighboring network can be volatile and make your problems worse. There is a trade-off,' said researcher Charles Brummitt. 'We are trying to measure this trade-off and find what amount of interdependence among different networks would minimize the risk of large, spreading failures.' Brummitt's team published its work (PDF) in the Proceedings of the National Academy of Sciences."
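For readers who want to poke at the idea themselves, here is a toy Python sketch in the spirit of the paper's model: a Bak-Tang-Wiesenfeld-style sandpile running on two small networks joined by a tunable fraction of cross-links. The topology, capacity, and dissipation numbers are all invented for illustration; this is the flavor of the model, not Brummitt's actual setup.

import random

def coupled_nets(n=100, k=4, p_cross=0.05, seed=0):
    """Two ring lattices (each node wired to its k nearest neighbours),
    plus random cross-network links added with probability p_cross."""
    rng = random.Random(seed)
    nbrs = {(net, i): set() for net in (0, 1) for i in range(n)}
    for net in (0, 1):
        for i in range(n):
            for d in range(1, k // 2 + 1):
                nbrs[(net, i)].add((net, (i + d) % n))
                nbrs[(net, (i + d) % n)].add((net, i))
    for i in range(n):
        if rng.random() < p_cross:
            j = rng.randrange(n)
            nbrs[(0, i)].add((1, j))
            nbrs[(1, j)].add((0, i))
    return nbrs

def largest_avalanche(nbrs, capacity=4, grains=3000, loss=0.05, seed=1):
    """Bak-Tang-Wiesenfeld-style sandpile: drop grains one at a time; a
    node over capacity topples, passing one grain per neighbour (each
    grain is lost with probability `loss`, so avalanches die out).
    Returns the biggest avalanche seen, a crude proxy for a large,
    spreading failure."""
    rng = random.Random(seed)
    load = dict.fromkeys(nbrs, 0)
    nodes = list(nbrs)
    biggest = 0
    for _ in range(grains):
        load[rng.choice(nodes)] += 1
        stack = [v for v in nodes if load[v] > capacity]
        topples = 0
        while stack:
            v = stack.pop()
            if load[v] <= capacity:
                continue
            load[v] = max(0, load[v] - len(nbrs[v]))
            topples += 1
            for u in nbrs[v]:
                if rng.random() >= loss:  # grain survives the transfer
                    load[u] += 1
                    if load[u] > capacity:
                        stack.append(u)
            if load[v] > capacity:  # rare: still over after shedding
                stack.append(v)
        biggest = max(biggest, topples)
    return biggest

for p in (0.0, 0.02, 0.1, 0.5):
    net = coupled_nets(p_cross=p)
    print(f"p_cross={p:.2f}  largest avalanche: {largest_avalanche(net)}")

Sweeping p_cross is the point: too few cross-links and a stressed network gets no help, too many and avalanches spill between the two networks.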


48 comments


HAHA (3, Funny)

Anonymous Coward | more than 2 years ago | (#39159767)

pnas... lol.

Re:HAHA (2)

vencs (1937504) | more than 2 years ago | (#39161671)

Too Many Connections Weaken Immunity

Easy if n m bad (1)

locopuyo (1433631) | more than 2 years ago | (#39159835)

If the neighboring connections use your connection more than you use theirs, it is weakening your connection.

Re:Easy if n m bad (1)

Oxford_Comma_Lover (1679530) | more than 2 years ago | (#39159957)

If the neighboring connections use your connection more than you use theirs, it is weakening your connection.

No, not necessarily. A use may be relatively costless to me, but the fact that I am a node that generates new connections may increase my value to others, for example.

Re:Easy if n m bad (2)

hairyfeet (841228) | more than 2 years ago | (#39161421)

Hell, ask the P2P guys, because if there is anybody that has to balance craploads of connections, it's those guys. Look at how much overhead the first-gen P2Ps used compared to now; with each version they get better at moving data without the connections getting overloaded. Give me somebody who has actually had to deal with the BS day to day over somebody who is writing a paper, any day of the week. The trial by fire quickly weeds out the dumb ideas, and you fix it or die.

ah-ha! (2)

binaryhat (2494814) | more than 2 years ago | (#39159845)

The Goldilocks network:
"As a first theoretical step, it's very nice work," said Cris Moore, a professor in the computer science department at the University of New Mexico. Moore was not involved in the project. "They found a sweet spot in the middle," between too much connectivity and not enough, he said. "If you have some interconnection between clusters but not too much, then [the clusters] can help each other bear a load, without causing avalanches [of work] sloshing back and forth."

torrents (-1)

Anonymous Coward | more than 2 years ago | (#39159847)

This problem would be an advantage if the whole internet adopted a torrent protocol model.

The more popular, the more resilient the data.

Re:torrents (1)

MrEricSir (398214) | more than 2 years ago | (#39162015)

There is a network like that: it's called Freenet [wikipedia.org].

Primitive (4, Interesting)

gilgongo (57446) | more than 2 years ago | (#39159859)

I'm sure that in 100 years' time, people will look back on our understanding of networks, information and culture in the same way as we look back on people's understanding of the body's nervous or endocrine systems 100 years before now. This study hints at our lack of knowledge about what the hell is happening.

Re:Primitive (2)

NoNonAlphaCharsHere (2201864) | more than 2 years ago | (#39159877)

It would be interesting to apply what they learned here to the power grid.

Re:Primitive (4, Informative)

gl4ss (559668) | more than 2 years ago | (#39159921)

The study was about power grids, where it makes a bit more sense. Of course the two contexts differ: in data networks it actually matters where a certain data packet goes, since data consumers don't want just _any_ data, they need specific data, whereas with power you don't much care where it actually came from.

Still, you've got to wonder: in a real-world context you'd need to think about what kind of real mechanisms are used for making the new connections, and about the safeties that are supposed to stop cascades from spreading.

Re:Primitive (0)

Anonymous Coward | more than 2 years ago | (#39160055)

[posting to fix accidental downmod]

Re:Primitive (5, Interesting)

Anonymous Coward | more than 2 years ago | (#39160091)

Cascades on power networks happen when you suddenly lose source, without rejecting the drain. I.e. the load remains high, but suddenly the flow required to supply that load has to shift because a link went down due to failure/overload.

There is a protection against this, it is called selective load rejection. You shut off large groups of customers, plain and simple. And you do it very very fast. Then you reroute the network to make sure power is going to be able to flow over links that will not overload, and do a staggered reconnect of the load you rejected.

That costs BIG $$$$ (in fines, lost revenue, and indirect damage due to brown-outs), and there is a silent war to try to get someone ELSE to reject load instead of your network. The only things that balance it are extremely steep fines among the power networks themselves, and in countries that are not nasty jokes, the government regulatory body.

I am not exactly sure how to move that to a BGP4, IS-IS or OSPFv2/v3 network, where instead of a sink pressure, you have a source pressure.
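A back-of-the-napkin Python sketch of that shed-then-staggered-reconnect logic (group names and numbers invented; real EMS/SCADA logic is vastly more involved):

def shed_and_restore(line_capacity, groups):
    """groups: list of (name, load_mw) in order of increasing shedding
    priority -- the last entry gets cut first."""
    connected = list(groups)
    shed = []
    # 1. Shed whole customer groups, very very fast, until the flow fits.
    while connected and sum(mw for _, mw in connected) > line_capacity:
        shed.append(connected.pop())
    # 2. (reroute the network here)
    # 3. Staggered reconnect: one group at a time, only if it still fits.
    still_shed = []
    for name, mw in reversed(shed):
        if sum(m for _, m in connected) + mw <= line_capacity:
            connected.append((name, mw))
        else:
            still_shed.append((name, mw))
    return connected, still_shed

groups = [("hospital_feeder", 40), ("downtown", 120), ("industrial_park", 200)]
print(shed_and_restore(line_capacity=250, groups=groups))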

Re:Primitive (1)

Tacvek (948259) | more than 2 years ago | (#39160271)

If there is a clear distinction between sources and sinks, then surely the same thing applies. If your current routing patterns cannot handle the incoming packets, you drop some of the sources while you reconfigure to handle routing to a destination (if possible), then start accepting data from the sources again.

The problem, though, is that in data networks the very same connections are both sources and sinks. In power networks that cannot be the case, since flows cancel out, but incoming packets do not cancel out outgoing packets. Also, unlike the power network, which is likely to be able to heal quickly if links go down, with BGP it is all too common to have plenty of restrictions on permitted routes between distinct networks, so all too often a break takes a long time to heal because somebody has to manually reconfigure ... :-(
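A rough Python sketch of that source-dropping idea (all names and numbers invented; nothing like a real router's control plane):

class AdmissionRouter:
    """Drop whole sources when overloaded; re-admit them staggered."""
    def __init__(self, capacity_pps):
        self.capacity = capacity_pps
        self.blocked = set()

    def tick(self, offered):
        """offered: {source: packets offered this tick}. Returns what
        actually gets forwarded."""
        admitted = {s: n for s, n in offered.items() if s not in self.blocked}
        overload = sum(admitted.values()) - self.capacity
        if overload > 0:
            # Reject the heaviest sources until the admitted rate fits.
            for s in sorted(admitted, key=admitted.get, reverse=True):
                if overload <= 0:
                    break
                overload -= admitted.pop(s)
                self.blocked.add(s)
        elif self.blocked:
            # Headroom again: staggered re-admission, one source per tick.
            self.blocked.pop()
        return admitted

r = AdmissionRouter(capacity_pps=100)
print(r.tick({"src_a": 50, "src_b": 80, "src_c": 30}))  # src_b rejected
print(r.tick({"src_a": 50, "src_b": 80, "src_c": 30}))  # fits now; src_b re-admitted for next tick
print(r.tick({"src_a": 10, "src_b": 80, "src_c": 10}))  # src_b forwarded again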

Re:Primitive (1)

Ihmhi (1206036) | more than 2 years ago | (#39162443)

Waitasec, cascades are a real thing? I thought it was just a bunch of technobabble when reversing the polarity of the warp core went badly.

Re:Primitive (0)

Anonymous Coward | more than 2 years ago | (#39159879)

And they, like yourself, can look down their noses at them.

Re:Primitive (1)

Anonymous Coward | more than 2 years ago | (#39159997)

You mean that in 100 years our understanding of a scientific topic will be better than it is now, instead of the same? How shocking! I was certain that there was a negative correlation between the passage of time and our understanding of topics!

Re:Primitive (0)

Anonymous Coward | more than 2 years ago | (#39160341)

You jest, but in reality there is that negative correlation - at least in the US. It is fueled by the "outwardly religious" (proselytizers) who try to reject science and get creationism and other outlandish ideas taught in school, while removing sex education and anything else they can get their hands on. The result? Over time, people know less about and have an inferior understanding of actual science topics.

Re:Primitive (1)

cavreader (1903280) | more than 2 years ago | (#39161607)

"It is fueled by the "outwardly religious" (proselytizers) that try to reject science and get creationism"

Your generalization of the US attitude towards science is idiotic and unsupported by the facts. First off, the US is the most culturally and religiously integrated country in the world. A US green card or H1-B visa is still one of the most sought-after credentials in the world. The "proselytizers" represent a tiny percentage of the total population, and the same tiny percentage applies to political ideologies as well. Rash generalizations are worthless and do nothing but raise the level of animosity and obscure the actual facts.

Please explain why the US is the home of just about every major SW and HW company in the world. Most foreign countries do nothing but manufacture products designed in the US, which gives them the chance to steal the technology for themselves in an effort to keep up. Please explain the US ability to produce sophisticated weapon systems with technologies that eventually end up being used in non-military products. Please explain why foreign students flock to the US to attend universities. I could go on, but the idea that technological advancement is somehow blocked by religious organizations is total bullshit.

Re:Primitive (0)

Anonymous Coward | more than 2 years ago | (#39161849)

Ahhh, America, Land of the Free*
*Offer void if lesbian, gay, transsexual, of (or looking like you're of) Arab descent, sick, a drug user, poor, or anything else which may offend anyone of Christian faith.

Re:Primitive (1)

KZigurs (638781) | more than 2 years ago | (#39162125)

The fact that you have a couple of saner states does not cancel out the majority of the population.

Also, I'm sorry to break it to you, but the USA stopped being an attractive option or destination for most smart white folk at least 5 years ago. Sure, you still get all the Asian, Middle Eastern and Mexican hopefuls dreaming of an H1-B one day, but it's far from being universally coveted.

As for the SW/HW companies of the world - fair enough, a significant proportion of R&D indeed happens in the US, but at least in the IT sector that is actually in slight decline. Historically this oddity can be very easily explained by access to the capital required for investments of this scale. And plenty of local customers happy to part with their money indiscriminately.

Re:Primitive (1)

cavreader (1903280) | more than 2 years ago | (#39164835)

The majority of white folk are happy to sit in their nanny state sucking on the government tit. Why bother doing anything productive when you can sit around demanding 35-hour work weeks and 6 weeks of vacation? The European "white folk" are becoming less relevant by the day. Sad to say, it will probably take another world war to knock some freaking sense into those who have forgotten what it means to get off their pampered asses and do something besides blame all the world's problems on the US, thereby absolving themselves of responsibility for any troubles plaguing the world today.

Re:Primitive (1)

Hognoxious (631665) | more than 2 years ago | (#39163255)

Please explain why the US is the home of just about every major SW and HW company in the world.

Inertia. It was a great place to live & do business 20 or 30 years ago.

That was before the aristocracy reestablished themselves and shit like the Patriot Act was introduced, back when creators & innovators actually got rewarded in a way comparable to gamblers.

Re:Primitive (0)

Anonymous Coward | more than 2 years ago | (#39163249)

"sex education and anything else that they can get their hands on"

Huh huh. Heh heh.

Re:Primitive (1)

Hognoxious (631665) | more than 2 years ago | (#39165211)

I'm sure that in 100 years' time, people will look back on our understanding of networks, information and culture in the same way as we look back on people's understanding of the body's nervous or endocrine systems 100 years before now.

Perhaps one day we'll even figure out how car engines work.

Re:Primitive (0)

Anonymous Coward | more than 2 years ago | (#39175991)

and then move on to fucking magnets.

Too many cooks.... (4, Interesting)

bwohlgemuth (182897) | more than 2 years ago | (#39160049)

As a telecom geek, I see many people create these vast, incredibly complex networks that end up being more difficult to troubleshoot and manage because they invoke non-standard designs, which fail when people wander in and make mundane changes. And then when these links fail or go down for maintenance... surprise, there's no 100% network availability.

Three simple rules for networks:

Simple enough to explain to your grandmother.
Robust enough to handle an idiot walking in and disconnecting something.
Reasonable enough to be maintained by Tier I staffing.

Re:Too many cooks.... (0)

Anonymous Coward | more than 2 years ago | (#39161387)

Yeah, I'm not really sure who holds this "conventional wisdom" they're talking about. The conventional wisdom in the reality where I live is that the complexity of any system increases exponentially with the number of connections it has.

If you've ever built a cluster of computers, you've doubtless experienced this before: the cluster ends up less reliable than the single server it was replacing.

Re:Too many cooks.... (1)

Hognoxious (631665) | more than 2 years ago | (#39163749)

The conventional wisdom in the reality where I live is that the complexity of any system increases exponentially with the number of connections it has.

That's because you're a bunch of innumerate oafs.

Re:Too many cooks.... (2)

LordLimecat (1103839) | more than 2 years ago | (#39162253)

How are you going to explain VLANs, STP, and ACLs to your grandmother? Has it occurred to you that there are, in fact, situations where all of those technologies are useful?

Has it also occurred to you that if you are properly securing your network with port security, you can't just walk in, plug something in, and have it work?

Re:Too many cooks.... (1)

maxwell demon (590494) | more than 2 years ago | (#39162973)

How are you going to explain VLANs, STP, and ACLs to your grandmother?

I have no idea what VLAN or STP is, but ACLs are dead simple to explain to your grandmother: "There's a list giving detailed information about who may do what with the file, and the computer enforces that list rigorously."
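The grandmother version really is about that simple. A toy in Python (made-up structure, not any real OS's ACL format):

# Grandma's list: who may do what with each file.
acl = {
    "recipes.doc": {"grandma": {"read", "write"}, "grandson": {"read"}},
}

def allowed(user, filename, action):
    """The computer rigorously enforcing the list."""
    return action in acl.get(filename, {}).get(user, set())

print(allowed("grandson", "recipes.doc", "write"))  # False: read only
print(allowed("grandma", "recipes.doc", "write"))   # True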

Re:Too many cooks.... (0)

Anonymous Coward | more than 2 years ago | (#39163853)

STP - Spanning Tree Protocol: Stops connections from looping back.

VLAN - Virtual Local Area Networks: A subset of addresses that help separate the whole network.

ACL - Access Control List: self explanatory. (A list that controls what you can access)

Re:Too many cooks.... (1)

LordLimecat (1103839) | more than 2 years ago | (#39169045)

More accurately...
Spanning Tree Protocol: Allows you to interconnect several switches in a way which would normally cause switching loops, but instead allows for redundant switching paths.

VLAN: Allows you to separate a single switch into multiple broadcast domains.
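A toy illustration of what STP buys you, using networkx's generic spanning-tree routine rather than the actual 802.1D election algorithm (switch names invented): wire switches with redundant loops, keep a loop-free subset of links active, and leave the rest blocked until a failure.

import networkx as nx

# Switch fabric wired with redundant loops.
fabric = nx.Graph()
fabric.add_edges_from([("sw1", "sw2"), ("sw2", "sw3"), ("sw3", "sw1"),
                       ("sw3", "sw4"), ("sw4", "sw1")])

# Keep a loop-free subset of links active...
active = nx.minimum_spanning_tree(fabric)
print(sorted(active.edges()))

# ...and the rest stay blocked, ready to take over after a failure.
blocked = [e for e in fabric.edges() if not active.has_edge(*e)]
print(blocked)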

Mesh level free surface effect (1)

WaffleMonster (969671) | more than 2 years ago | (#39160067)

I'll admit I don't understand the reasoning why more is not always better, even after reading the article. Sure, the calculation overhead for topology changes increases with more choices, but such costs are trivial compared to the overall cost of most systems. Whether the system is more or less resilient would seem, to me anyway, to have everything to do with how much intelligence the system can bring to bear to plan a stable topology under changing conditions. You can invent and constrain a dumb network with easy-to-calculate properties for the purpose of simulation, but this would seem to have extremely limited implications for the real world, where it is cost effective for networks not to be dumb.

Perhaps it might be interesting to try this sort of simulation work on power transmission networks?

To their credit, they do not claim their work has applicability to the Internet. Even if BGP made poor choices, the edges provide some degree of congestion avoidance, which mitigates the sloshing of snowballs... We learned that lesson the hard way :(
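The edge congestion avoidance being credited here is essentially TCP-style AIMD (additive increase, multiplicative decrease): senders probe for bandwidth gently and back off hard on loss, which damps the sloshing. A stripped-down Python sketch with made-up constants:

def aimd(capacity=20.0, rounds=40):
    """Sender's congestion window over time; the sawtooth is the point."""
    cwnd = 1.0
    for _ in range(rounds):
        if cwnd > capacity:   # loss signal: the shared link is saturated
            cwnd /= 2.0       # multiplicative decrease -- back off hard
        else:
            cwnd += 1.0       # additive increase -- probe gently
        yield cwnd

print([round(w) for w in aimd()])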

Re:Mesh level free surface effect (1)

Alan R Light (1277886) | more than 2 years ago | (#39162883)

Imagine a waterbed with baffles: the baffles prevent the water from sloshing around the moment the mattress bears a new load. Without these baffles, when a person lies down on the bed there will be waves going everywhere, and depending on how full the mattress is, the person may even bump against the platform beneath the mattress before the water evens out and the load is supported. On the other hand, if the baffles prevented water from moving at all, there would be limited compensation for any load - the water could only spread out within the limits of each section, and the mattress would probably seem hard and bumpy. The mattress might be too hard in one section and too soft in another.

Baffles work because they allow water to pass from one section to another, but not too quickly. They exist in that sweet spot between the two extremes of too many connections and not enough connections. Air mattresses are similar. So are networks.
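The analogy fits in a dozen lines of Python (all constants invented): with a small baffle permeability k the sections equalize gently; make k too large and the levels overshoot and slosh, which is exactly the trade-off from TFA.

def settle(k, levels=(10.0, 0.0), steps=5):
    """Two water sections; the baffle passes flow proportional to the
    level difference, with permeability k."""
    a, b = levels
    out = []
    for _ in range(steps):
        flow = k * (a - b)
        a, b = a - flow, b + flow
        out.append((round(a, 1), round(b, 1)))
    return out

print(settle(k=0.1))  # gentle equalization: (9.0, 1.0), (8.2, 1.8), ...
print(settle(k=1.2))  # too permeable: levels overshoot and slosh harder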

Ooh. I know the answer. (3, Funny)

dgatwood (11270) | more than 2 years ago | (#39160099)

Forty-two.

Now we finally know the question.

Other applications of this theory (3, Interesting)

dietdew7 (1171613) | more than 2 years ago | (#39160131)

Could these types of models be applied to government or corporate hierarchies? I've often heard about the efficiencies of scale, but my experience with large organizations is that they have too much overhead and inertia. I wonder if mathematicians could come up with a most efficient organization size and structure.

Re:Other applications of this theory (3, Interesting)

Attila Dimedici (1036002) | more than 2 years ago | (#39160305)

One of the things that technological changes since the mid-70s have taught us is that the most efficient organization size and structure changes as technology changes. There are two factors that exist in dynamic tension, and as the relationship between them changes, the efficiency point of organizations changes. One of those factors is speed of communication. As we become able to communicate faster over long distances, the most efficient organization tends towards a more centralized, larger organization. However, as we become able to process information faster and more efficiently, a smaller, more distributed organization becomes more efficient. There are probably other factors that affect this dynamic as well.

Luis von Ahn (reCAPTCHA, DuoLingo) says ... (1)

bd580slashdot (1948328) | more than 2 years ago | (#39167803)

Luis von Ahn says that past collaborations like putting men on the moon and the A-bomb project were all done by about 100,000 people, because tech constraints limited effective organization to groups of that size. But projects like reCAPTCHA, DuoLingo and Wikipedia use massive human computation and organize millions of people. We'll see many more massively coordinated projects, and the ideal size of organizations for each kind of project will become clearer as the number of these projects increases.

Re:Luis von Ahn (reCAPTCHA, DuoLingo) says ... (1)

Attila Dimedici (1036002) | more than 2 years ago | (#39168177)

Putting men on the moon and the A-bomb project were also limited by a need to keep a significant amount of the information secret. The more modern projects you listed had no such constraint. That does not mean that a project done at the time of the Manhattan Project or the Apollo Program could have harnessed more people without the constraints of secrecy those had; just that even if those two projects were initiated today, they would be unlikely to use more people.
That being said, it will be interesting to see how the tendency of those with power to exert control will interact with technology that allows both the accumulation and processing of vast amounts of information and the fairly complex coordination of loosely associated people.

Re:Other applications of this theory (1)

Lehk228 (705449) | more than 2 years ago | (#39162567)

The ideal size for an organization is just large enough to accomplish its one task.

Study wasn't about Computer Networks (2)

fluffy99 (870997) | more than 2 years ago | (#39160241)

"The study, also available in draft form at ArXiv, primarily studied interlocked power grids but could apply to computer networks and interconnected computer systems as well, the authors note. The work could influence thinking on issues such as how to best deal with DDOS (distributed denial-of-service) attacks, which can take down individual servers and nearby routers, causing traffic to be rerouted to nearby networks. Balancing workloads across multiple cloud computing services could be another area where the work would apply."

The study was about the stability of power systems, which is a completely different animal. Power systems, as demonstrated by a few widespread outages, are at the mercy of control systems which can over- or under-react. Computer networks might have some similarities, but trying to draw any firm conclusions from this study would be pure speculation.

I would agree, though, that at some point you move beyond providing redundant paths and into opening up additional areas of exposure and risk.

Re:Study wasn't about Computer Networks (0)

Anonymous Coward | more than 2 years ago | (#39163209)

Hmm, yes. But the same way we have selective load rejection on power networks, we have limited-view packet sinks on BGP4 networks.

A limited-view packet sink is a route which does not have global scope (e.g. exported NO_EXPORT to all peers in an IXP), and which directs packets to an infinite sink (blackhole). Blackholes operate at line rate. Just like selective load rejection in a power network, an RTBH or S/RTBH will cause collateral damage and will cost big $$$$ (in lost service; a limited "tango down").

An extremely obvious example is the root server security architecture, the target of a FakeOPS a few days ago: you have >150 anycast nodes, each with a *limited* area of influence/visibility on the network through anycasting, and a few with global influence. The nodes NEVER go down; when overloaded, they become infinite packet sinks. So, when there is a packet flood, each node will divert a fraction of that traffic to itself and cause selective blackholing (it is selective because of the limited area of influence/visibility). On top of that, to disrupt the *service*, you need to take down all 13 groups of nodes visible to an "end user" of the service (a recursive DNS cache resolver).

You can also direct traffic to scrubbing stations, again using anycast and BGP4 path changes, which will direct large floods into the larger pipes (> 10Gbps) of the backbones and scrubbing stations.
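A cartoon of that anycast property in Python (topology and numbers fabricated): clients reach their nearest instance, so a regional flood only saturates, and gets sunk by, the nodes in its area of influence.

NODES = {"dns-us": 1_000_000, "dns-eu": 1_000_000}  # pkts/s each can sink

def status(region_traffic):
    """region_traffic: {region: pkts/s}; anycast sends each region's
    packets to its nearest node (here, the one named after it)."""
    out = {}
    for node, cap in NODES.items():
        load = sum(pps for region, pps in region_traffic.items()
                   if node.endswith(region))
        # An overloaded node never "goes down": it sinks at line rate.
        out[node] = "sinking flood" if load > cap else "serving"
    return out

print(status({"us": 50_000_000, "eu": 40_000}))
# {'dns-us': 'sinking flood', 'dns-eu': 'serving'} -- EU users unaffected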

Biological hyper-connectivity (1)

Anonymous Coward | more than 2 years ago | (#39160281)

It's interesting that this story appeared on the same day I read an article [medicalxpress.com] on brain hyper-connectivity being linked to depression.

Re:Biological hyper-connectivity (1)

maxwell demon (590494) | more than 2 years ago | (#39162945)

Ah, so now we know what they did wrong with Marvin.
Well, that and the diodes down his left side ...

"Dunbar's number" for computers? (0)

Anonymous Coward | more than 2 years ago | (#39162593)

So I guess this can be considered a Dunbar's number for computer networks?

http://en.wikipedia.org/wiki/Dunbar's_number

It Only Makes Sense (1)

YetAnotherBob (988800) | more than 2 years ago | (#39164677)

Conclusions in TFA (The Fine Article) make sense.

I remember this from college, back in the Dark Ages of the 1970s: communications theory. It has been known since the late 1940s.

For any system, at the bottleneck points there is a saturation level. When you try to push higher throughput, the bottleneck saturates, and throughput either breaks down or slows substantially.

Ever been stuck in traffic? If so, then you have experienced this effect.

Ever tried to run a large Database on a Windows box? Slowdown and crashes happen.

For every system, there is a limit to the communication that can be put through it.
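That saturation knee has a classic closed form: for an M/M/1 queue the average time in system is W = 1/(mu - lambda), which blows up as utilization approaches 1. A quick Python illustration:

# M/M/1 queue: average time in system W = 1 / (mu - lam).
mu = 100.0                        # service rate, jobs per second
for rho in (0.5, 0.9, 0.99, 0.999):
    lam = rho * mu                # arrival rate at utilization rho
    print(f"utilization {rho:.3f} -> avg delay {1000 / (mu - lam):7.1f} ms")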

The Internet works by having a massively parallel structure, but every node on the Internet can only connect to a limited number of other points. The number of maintainable connections has risen as our technology has improved, but there are limits at each step of the way. Sorry guys, that's not because of software or system design; it's a basic result of physics. We can't get unlimited performance out of any foreseeable hardware, and software will always be limited to something less than the limitations of the hardware. There are limits.

This study was about the power system, but it applies to all systems. There is going to be some point of maximum benefit. Trying to go beyond that point will be largely an exercise in frustration. It will only be by changing the physical network that further expansion will be practical. That's the bad news.

The good news is that the current Internet isn't at that point. Yes, it is beyond the capacity of 1990, but we have changed the parameters of the hardware. We've even outgrown the capacity of the software from the 1990s (TCP/IP v4); that's why we are moving slowly to IPv6. But v6, while 'infinite' for today's hardware, will also be outgrown someday. That cycle continues until our hardware reaches the real physical limits; only then will we be stuck with no further performance improvements.

Research labs have already reached the physical limits, so we know what they are. Fortunately, we probably won't be there for around 10 to 20 years.

Diamond semiconductors, graphene, circuit elements that consist of fewer than 20 atoms - all these things have been done, just not economically or reliably. That will come in the future. It'll be fun to see.

I wonder what we will be doing with Petabyte thumb drives and 1000 processor tablet systems?

My Grand-kids will be finding out.

I am a bit astounded... (1)

vikingpower (768921) | more than 2 years ago | (#39170625)

...at the low number of comments on this paper. I read it over the weekend, and it offers some insights into network theory as applied "to the everyday world" that engineers have to deal with, not all of which are trivial or unimportant. Is it because the article takes a "pure" mathematics approach, in spite of using a model of the US power grid for illustration purposes?