Calculating Total Network Capacity

New submitter slashbill writes "MIT's working on a way to measure network capacity. Seems no one really knows how much data their network can handle. Makes you wonder: how, then, do you calculate expense when building out capacity? From the article: 'Recently, one of the most intriguing developments in information theory has been a different kind of coding, called network coding, in which the question is how to encode information in order to maximize the capacity of a network as a whole. For information theorists, it was natural to ask how these two types of coding might be combined: If you want to both minimize error and maximize capacity, which kind of coding do you apply where, and when do you do the decoding?'" This is a synopsis of the first of two papers on the topic.
  • by sideslash ( 1865434 ) on Wednesday May 16, 2012 @11:06AM (#40017175)
    Didn't read the article, but I imagine that part of the difficulty is that network capacity isn't reducible to a single scalar number; it looks more like an N-dimensional graph. There are many points of failure and bottlenecks, depending on how each node behaves relative to the others.
    • by Z00L00K ( 682162 )

      Don't forget that the type of traffic passed over the network is also a factor.

      It's possible to have 1000 users on a 10 Mbps network if the only traffic they generate is text-terminal traffic, but a few users processing video streams can completely saturate a Gbps network.
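      A rough back-of-the-envelope illustration of that point (the per-user rates below are assumed numbers of my own, purely for illustration):

```python
# Aggregate-bandwidth arithmetic with assumed, illustrative per-user rates.
TERMINAL_KBPS = 10    # assumed rate of one text-terminal session
VIDEO_MBPS = 300      # assumed rate of one heavy video-processing user

terminal_load_mbps = 1000 * TERMINAL_KBPS / 1000   # 1000 users -> 10 Mbps
video_load_mbps = 4 * VIDEO_MBPS                   # 4 users -> 1200 Mbps

print(f"1000 terminal users: {terminal_load_mbps:.0f} Mbps (just fills a 10 Mbps pipe)")
print(f"4 video users: {video_load_mbps:.0f} Mbps (exceeds a 1 Gbps pipe)")
```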

  • by i kan reed ( 749298 ) on Wednesday May 16, 2012 @11:08AM (#40017195) Homepage Journal

    It was a program by one Robert Tappan Morris, as I recall.

    That didn't go over so well with everyone.

    • by stiggle ( 649614 )

      Morris only experimented over TCP/IP on Unix systems running finger. He bypassed/ignored X.25 and other non-TCP/IP networks.

    • It was a program by one Robert Tappan Morris, as I recall.

      True, but he programmed it badly, and used active injection into the network to measure it, rather than programming each node to passively collect data and make decisions based on the results. In short, he was young and stupid.

  • Best way I've found to measure growth is to keep a running history of traffic on each router. You don't need a $billion to do it; there are some decent enough FOSS tools out there. MRTG [oetiker.ch] or Cacti [cacti.net] will work nicely, and both integrate with SNMP.

    For a smaller network, you could run a span port and graph your own data with a shell script, or hook up NTOP [ntop.org], which will give you real-time views of traffic, but you would need to implement something to save those reports daily.
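    A minimal sketch of the DIY route, shelling out to Net-SNMP's snmpget to sample an interface's octet counter (the host, community string, and interface index are placeholders, and 32-bit counter wrap is ignored for brevity):

```python
#!/usr/bin/env python3
# Sample an SNMP interface counter twice and report average throughput.
# Assumes Net-SNMP's snmpget is on the PATH; host, community, and ifIndex
# below are placeholders for illustration.
import subprocess
import time

HOST = "192.0.2.1"               # hypothetical router
COMMUNITY = "public"             # SNMPv2c community string
OID = "IF-MIB::ifInOctets.2"     # inbound octets on interface index 2

def octets() -> int:
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID], text=True)
    return int(out.strip())

INTERVAL = 60                    # seconds between samples
first = octets()
time.sleep(INTERVAL)
second = octets()

# counter delta -> average bits per second over the interval
bps = (second - first) * 8 / INTERVAL
print(f"{OID}: {bps / 1e6:.2f} Mbps average over {INTERVAL}s")
```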

    • by ender- ( 42944 ) on Wednesday May 16, 2012 @11:13AM (#40017257) Homepage Journal

      I think they're trying to do something a bit more detailed and theoretical than seeing how much traffic is going through a given interface...

    • by bengoerz ( 581218 ) on Wednesday May 16, 2012 @11:23AM (#40017399)

      Best way I've found to measure growth is to keep a running history of traffic on each router. You don't need a $billion to do it; there are some decent enough FOSS tools out there. MRTG [oetiker.ch] or Cacti [cacti.net] will work nicely, and both integrate with SNMP.

      For a smaller network, you could run a span port and graph your own data with a shell script, or hook up NTOP [ntop.org], which will give you real-time views of traffic, but you would need to implement something to save those reports daily.

      You suggest some good tools, but they primarily measure network utilization rather than capacity. The question isn't "how much data is my network handling now" but "how much data could my network handle at peak"?

      • Hook up a BitTorrent seedbox to the live Internet. You'll find out the maximum capacity pretty quickly.

      • The question isn't "how much data is my network handling now" but "how much data could my network handle at peak"?

        Just insult Anonymous [slashdot.org] and you should have your answer shortly.

      • It's not "how much data could my network handle at peak" but rather "what is the maximum amount of information I can send through a network", where this 'maximum' is usually only attainable under ideal conditions. The summary is somewhat misleading, since "capacity" relates to "information" in the information-theory sense, while "data" relates to the idea most posters seem to have (i.e. the kbps you download at). Also, the arxiv.org paper is two years old, but the work in Network Coding is very interesting and has
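        For reference, "capacity" in that information-theory sense is the Shannon limit of a channel; here is a quick sketch of the standard Shannon-Hartley formula (the example numbers are mine, not from TFA):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the highest error-free information rate a
    noisy channel can support, attainable only with ideal coding."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g. a 20 MHz channel at 30 dB SNR (a 1000:1 signal-to-noise ratio)
print(shannon_capacity_bps(20e6, 1000) / 1e6)  # ~199.3 Mbps
```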
  • by Anonymous Coward

    From TFS: "Makes you wonder: how, then, do you calculate expense when building out capacity?"

    They're not talking about "not knowing" the capacity of a given link (e.g. the pipe you buy from the datacenter to your ISP). They're talking about the overall bandwidth between two points across all possible routings. It's the difference between knowing Ohm's law and computing the net resistance between two adjacent nodes on an infinite grid of 1-ohm resistors.
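    In graph terms, that end-to-end number is a max-flow/min-cut computation. A toy sketch (the topology and the use of networkx are my own illustration, not anything from the papers):

```python
import networkx as nx

# Toy topology with per-link capacities in Mbps. The end-to-end capacity
# from s to t is the maximum flow over all routings (max-flow = min-cut),
# which no single link rating tells you on its own.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=10)
G.add_edge("s", "b", capacity=10)
G.add_edge("a", "t", capacity=5)
G.add_edge("a", "b", capacity=15)
G.add_edge("b", "t", capacity=10)

print(nx.maximum_flow_value(G, "s", "t"))  # 15, limited by the cut {a->t, b->t}
```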

  • Seems no one really knows how much data their network can handle

    Doesn't that shoot a hole in the ISP's anti-bittorrent arguments?

    • While I think the "anti-BitTorrent" argument is BS (in most cases)...

      To answer your question: no, ISPs are claiming BitTorrent is "already" overloading the network. This article deals with predicting at what point that will happen.

  • You reduce the raw network capacity you need, but now your routers have to be smarter, so they either take longer to encode and decode or you spend more on hardware to keep throughput the same.

    • by skids ( 119237 )

      Most of this work will end up in MIMO radios. It's not horribly applicable to wired networks, at least not with currently in-use technologies and routing protocols. (Almost all wired connections pass through a stateful firewall or two, and even trying to load-balance can cause issues with out-of-order packet processing.) For wired networks this is more base-theory fodder than a serious proposal. (How these strategies and quantum optical cryptography might work together is interesting food for thought.)

  • I don't really think coding is the real bottleneck-breaker for getting to the Shannon limit of these channels. I believe a better routing mechanism would be a better approach.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      By channel you mean "a network of noisy, independent, memoryless point-to-point channels"? The result in the paper says that such a network can be seen as a network of error-free channels. On such a network it is already known that network coding delivers better performance than routing alone (see the butterfly network example at https://en.wikipedia.org/wiki/Network_coding).
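      For anyone who hasn't seen it, here is a tiny sketch of that butterfly example: both sinks want both source bits, and XOR-coding on the shared bottleneck link delivers them in one use per link, where plain routing would have to use the bottleneck twice (topology per the Wikipedia example; the code itself is just my illustration):

```python
# Butterfly network: sources emit bits b1 and b2; each sink must receive
# both. The single bottleneck link carries b1 XOR b2 instead of picking
# one bit, and each sink recovers the missing bit locally.
b1, b2 = 1, 0

coded = b1 ^ b2            # what the bottleneck link carries

sink1 = (b1, coded ^ b1)   # hears b1 on its side link, decodes b2
sink2 = (coded ^ b2, b2)   # hears b2 on its side link, decodes b1

assert sink1 == (b1, b2) and sink2 == (b1, b2)
print(sink1, sink2)        # both sinks got both bits
```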

      • This assumption is purely academic, because in reality the networking overhead needed to make it true is significant. Reducing that overhead is, I feel, more interesting and practical. But in the end you want to eke out as much performance from everything as you can.
        • In the end, you want to eke out as much performance as possible, given the law of diminishing returns versus increasing (or decreasing, depending) the limits. That last 0.0001% increase in performance may not be worth the effort needed to achieve it.

          Theory is not the same as practice, but people often treat them the same.

  • It's not in the box... It's in the band.
  • by David_Hart ( 1184661 ) on Wednesday May 16, 2012 @11:49AM (#40017703)

    It sounds like they are studying the effect of having intelligent nodes in a network that don't just forward a packet, but also perform error correction, have some basic path intelligence, and send the packet out multiple interfaces. The end node then receives these hybrid packets from different directions, some arriving sooner, some later, and develops a map of the most efficient path.

    One could argue that this could be used, for example, in a mesh MPLS cloud, where a path through a specific hop (i.e., an office) may be more efficient, because of network conditions, than going straight to the end node. However, this would require each node to have enough bandwidth to support the added traffic over and above the normal location traffic, which means budgeting for bandwidth that is only used in certain degraded conditions.

    Basically, it's a study of the Internet and, in my opinion, would have little application in a corporate LAN. I say this because a corporate LAN is more deterministic in path selection and is limited by cost.
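    A toy sketch of the receive side of such a scheme (entirely hypothetical, just to make the idea concrete): deliver the first copy of each packet, drop slower duplicates, and remember which interface won as a crude path map.

```python
# Hypothetical receiver for the multi-path forwarding described above.
fastest_iface = {}   # seq -> interface that delivered the packet first
delivered = set()    # sequence numbers already passed up the stack

def on_packet(seq: int, iface: str) -> bool:
    """Return True if this copy is delivered (i.e., it won the race)."""
    if seq in delivered:
        return False              # slower duplicate from another path
    delivered.add(seq)
    fastest_iface[seq] = iface    # this path is currently most efficient
    return True

print(on_packet(1, "eth0"))  # True  -> first copy, delivered
print(on_packet(1, "eth1"))  # False -> duplicate, dropped
print(fastest_iface)         # {1: 'eth0'}
```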

    • >

      Basically, it's a study of the Internet and, in my opinion, would have little application in a corporate LAN. I say this because a corporate LAN is more deterministic in path selection and is limited by cost.

      Meant to say WAN, not LAN, in the last sentence...

    • Re: (Score:3, Interesting)

      by vlm ( 69642 )

      It sounds like they are studying the effect of having intelligent nodes in a network that don't just forward a packet, but also perform error correction, have some basic path intelligence, and send the packet out multiple interfaces. The end node then receives these hybrid packets from different directions, some arriving sooner, some later, and develops a map of the most efficient path.

      The eternal wheel of IT endlessly rotates old ideas into newness. Interpret that as either my mostly-new source-route-bridged SDLC mainframe network in the early '90s or my decaying, decrepit X.25 network in the late '90s. I played with some stuff like that using AX.25 as the PHY layer around 1990. We had tools and papers and equations back then to analyze it.

      Did you know you can make networks like that oscillate if you're not careful? We also collapsed a few accidentally by packet flooding beyond a certain h

  • by mordred99 ( 895063 ) on Wednesday May 16, 2012 @11:54AM (#40017753)

    and the answer is "it depends". The traffic, the routing, and the overall bandwidth (you never get 100% usage) are all factors. The easiest way is to look at your pipes (each segment is separate) and check the error rates, back pressure (QoS, Ethernet, etc.), average throughput breakdown (types of traffic), and usage percentage. Take those numbers and watch them over time, and you will get a clear picture of your network.

    You cannot answer a question like this truthfully if you take one sample and assume it is fact. Many samples make the true picture, and then you can also see trends to determine whether things are getting out of control.

  • ...how to encode information in order to maximize the capacity of a network as a whole...

    I always send my data as a series of 0s and 1s. I tried using 2s, but they took up too much bandwidth.

  • People have to learn how to write a summary.

    "MIT's working on a way to measure network capacity. Seems no one really knows how much data their network can handle. Makes you wonder about how then do you calculate expense when building out capacity? From the article: 'Recently, one of the most intriguing developments in information theory has been a different kind of coding,

    Different from what? Compared to what, and in what context? The sentences preceding that remark do not make any reference to any coding scheme whatsoever.

    called network coding, in which the question is how to encode information in order to maximize the capacity of a network as a whole. For information theorists, it was natural to ask how these two types of coding might be combined: If you want to both minimize error and maximize capacity, which kind of coding do you apply where, and when do you do the decoding?'"

    Two? Which is the other? There is only mention (in the summary) of the newly proposed coding.

    YES, I can infer, for the most part (and then confirm by reading the article), that the other coding the summary refers to is error-correcting coding. But it shouldn't be necessary to neither rely on prior knowled

    • Just remember to proof read your summaries.

      This is, after all, extremelly important.

      Seriously though, it's not "error-correcting coding"... which implies you did not comprehend the summary or the article. This is about testing capacity, and is more along the lines of implementing something to expound upon dynamic routing. Which induces a clusterfuck of brainthink along the lines of "so we're buying bandwidth to supplement bandwidth that we should have put there, but maybe here" and so on and so forth. There's not a whole lot more to read into

      • Okay, I'm just going to go ahead and apologize for not noticing the "summary refers to" bit in your paragraph.

        But I think my point still stands - the summary is sufficient for the purposes of summarizing the article.
        • Okay, I'm just going to go ahead and apologize for not noticing the "summary refers to" bit in your paragraph. But I think my point still stands - the summary is sufficient for the purposes of summarizing the article.

          How can it be sufficient when it refers to two different codes while only mentioning one? Just because you say your point stands does not magically make it so. The only way to ensure that the summary is in tandem with the article is to read the article, and that defeats the purpose of a summary. If the summary refers to two different coding schemes, but only mentions one by name, then the question follows: is the presence of another coding scheme different from network coding relevant to the model presente

            The article is really only about one of them. The summary can refer to both of them and only explain the one it is about, while still being sufficient. It really is kind of pedantic, IMO. If you want to know more about the error-correcting code (which the article is not about), you will click through and discover the one-liner dedicated to explaining why the article is not about it, regardless.

            A long, well-written statement abstractly disagreeing with mine does not make it fact.

            is the presence of another coding scheme different from network coding relevant to the model presented by the article?

            No, it really isn't.

  • Just right-size individual components, from storage to client level... check the average bandwidth used at all levels and allocate accordingly. It's like asking how many molecules of water are in a glass... when you know you only need to drink one glass of water. And wouldn't traffic per second increase or decrease with the choice of medium and communication protocol used over the same distance?
  • That ideal encoding method isn't XML.

    • Actually, XML might be the ideal method of encoding. It depends on what you mean by "ideal" ;) XML makes it much easier for a human to decode than, say, a bit stream of binary-coded data. Just saying that, without further context, your statement may not be entirely accurate; or it could be perfectly accurate.

      Which in my estimation makes it not accurate at all.

  • Microsoft's Baseline Analyzer doesn't cut it anymore? ;-)
  • OPNET: great tools, horrific price...

    Dave

  • Release Diablo III...
