Networking The Internet

Does the Internet Need a Major Capacity Upgrade? 357

wiggles writes "According to the Chicago Tribune, the recent surge of video sites such as YouTube and Google Video is pushing the limits of the Internet's bandwidth, or soon will be. Pieter Poll, chief technology officer at Qwest Communications, says that traffic volumes are growing faster than computing power, meaning that engineers can no longer count on newer, faster computers to keep ahead of their capacity demands. Further, a recent report from Deloitte Consulting raised the possibility that 2007 would see Internet demand exceed capacity. Admittedly, this seems a bit sensationalist, but are we headed for a massive slowdown of the whole internet?"
  • by dada21 ( 163177 ) * <adam.dada@gmail.com> on Friday February 23, 2007 @07:18PM (#18129426) Homepage Journal
    Since the article has a quote about it, here's specifically WHY I am against "Net Neutrality": under it, the ISP has no control over throttling particular sites or protocols that can have major negative effects on the overall user experience. I've already noticed some network slowdowns, though in the past 60 days I dumped broadband and now rely primarily on my EDGE connection from T-Mobile (200kbps). Latency isn't too shabby. When I use the T1 at the office, though, I have noticed some slowdowns.

    The solution isn't just more bandwidth. We're not talking about more users accessing the same sites; we're talking about more users accessing more sites -- significantly more. The "long tail" of the web is exploding in access: all the blogs, vlogs, MP3 downloads and videos are spread across a huge, disparate group of sites. The solution is to nix net-neutrality legislation and allow the consumer and the producer to come to terms on need versus price.

    At home, I'd be more than happy for a Port80/Port110 prioritized connection, with other ports reduced in speed or performance. Sure, videos come over Port80, but the vast majority of cable users in my neighborhood are downloading torrentz and other similar protocols. I don't see a reason why everyone should pay the same price for different service. Sure, the telecom industry is scared of Net Neutrality because they WANT to ban Skype and VoIP, but that is why the FCC needs to back off on over-regulating the opportunity for competitors to enter the market. There is a huge opportunity for more wireless providers and more people bringing FTTH or other options.

    I know, I know, you were promised 160 Mbps and you want every last speck of it. Those ads will change, I think, as more people get connected. I'd rather have lower latency than higher data rates, and I think this article forgets that latency is just as important as (if not more so than) pure bandwidth.

    The Internet doesn't really have "bandwidth" limitations, because all it takes is more ISPs and more backbones to come into being. If the pro-Net Neutrality parties have their way, though, we may see significant restrictions in investment on both those fronts. The companies who invested in offering new limbs on the internet took great risks -- and some made great rewards. We want to keep that risk/reward ratio uncluttered by excess regulation so others can offer us more options for who we can connect to.

    I'm sure if YouTube/Google had it their way, they'd get special consideration for providing more bandwidth -- State-paid consideration maybe? I sure hope not.

    When things slow down, it will give new competitors reason to enter the market. 20% more backbone speed interconnecting some Tier 2 ISPs and things will be fine, until the next slowdown brings another run of entrants into the game, or gives the old companies reason to expand their networks. Envision 2010: "Is your latency too high? Comcast Ultra offers you 50ms or less ping times across the board, guaranteed!" It may sound fishy, but who would have thought 10 years ago that we'd hear about Mbps in basic cable ads?

    The last paragraph is the most insightful part of the article:
    Any service degradation will be spotty and transient, predicted (John) Ryan (of Level 3), who said that underinvestment by some operators may "drive quality traffic to quality networks."

    EXACTLY.

    Sidenote: That damned GoogleBot sometimes hits my sites 5000 times a day -- maybe Google is doing a little more to aggravate the problem than they want to admit? Thankfully I use server-side compression and caching, so things aren't hammered too bad by the bot, but there have been times when things on my end were running slow and I had 100 "Guests" all registered at Google's IPs.
    • by ADRA ( 37398 ) on Friday February 23, 2007 @07:25PM (#18129526)
      What a well-prepared talking piece. I, however, take the other approach.

      If I'm offered 5Mbits/s from my cable provider, that is an obligation for them to fill my order. If they can't fulfill my expectations, then they shouldn't have offered the service to begin with. If telco XYZ is getting bitten for overselling their lines, that sure as hell isn't my problem as a consumer. What I do with my 5Mbits/s is my own business. I could use the internet to check my email (10kb), or surf the web a while (2MB), or download a YouTube video (200M?).

      Why should my internet operator, the guys protected up the ass by common-carrier protections, dictate my internet surfing activities?
      • Re: (Score:3, Insightful)

        by dada21 ( 163177 ) *

        If I'm offered 5Mbits/s from my cable provider, that is an obligation for them to fill my order. If they can't fulfill my expectations, then they shouldn't have offered the service to begin with. If telco XYZ is getting bitten for overselling their lines, that sure as hell isn't my problem as a consumer. What I do with my 5Mbits/s is my own business. I could use the internet to check my email (10kb), or surf the web a while (2MB), or download a YouTube video (200M?).


        You're correct -- but they weren't offering
        • I personally am against common carrier protections
          So the US Postal Service should be responsible for all of Kaczynski's [wikipedia.org] bombs? That's a great idea!
          • by JimDaGeek ( 983925 ) on Friday February 23, 2007 @08:23PM (#18130158)

            So the US Postal Service should be responsible for all of Kaczynski's bombs? That's a great idea!

            I think you are taking the GP's post to the extreme. I think he meant that "common carrier protection" should be limited -- limited in the sense that if the "common carrier" does not impose _any_ restrictions (within some _sane_ safety limit, like no explosives), then that "common carrier" _should_ be protected. However, many ISPs are now NOT acting like "common carriers". They are restricting services and bandwidth based on their perceptions of "importance" or ways to "maximize profits".

            Sorry, to me that does not qualify as a "common carrier" to be protected. If my ISP did not block any port or restrict bandwidth in any way, I would be the first one to come to their defense and state that they have truly acted as a common carrier. Sadly, that is not the case for most ISP services. They "prioritize" services based on what _they_ think deserves more bandwidth. In other words... what the ISP can gain maximum profit from for the least bandwidth.

            IMO, if an ISP wants to limit bandwidth in _any_ way, they should not have common carrier protection. Period.
        • by ScrewMaster ( 602015 ) on Friday February 23, 2007 @08:12PM (#18130050)
          I don't think most of the ISPs even have common carrier status. The telcos do, when it comes to phone service, but as data service operators they don't. I believe (and someone who knows more can correct me) that the reason they don't want to be considered common carriers is that they would be subject to additional (read: expensive) regulatory burdens.
          • by ADRA ( 37398 )
            I could've been misinformed, but I believe most if not all ISPs are considered common carriers. If they weren't, every single illegal download that the RIAA could sue for could also be pursued against the ISP, since they 'allowed' the infringement to take place, or some such.

            I don't live in the US, so don't blame me for not having a perfect understanding of your legal system =)
            • "I don't live in the US, so don't blame me for not having a perfect understanding of your legal system"

              I doubt that even very many of the people responsible for creating and interpreting the US legal system understand it perfectly. Not even blaming apathy or ignorance: it is pretty frickin complicated!
            • Re: (Score:3, Insightful)

              I could've been misinformed, but I believe most if not all ISPs are considered common carriers.

              They are not. Only the telecommunications network itself is a common carrier. The DSL services layered on top of it (as well as cable, fiber, etc) are considered information services.

              If they weren't, every single illegal download that the RIAA could sue for could also be pursued against the ISP, since they 'allowed' the infringement to take place, or some such.

              That would be true, were not other legislatio

      • is that a connection is between two points - it's only 5 Meg if it gets from A to B at that speed.
        I've got a 24Mbit connection to my ISP's DSLAM - although it does tend to connect a bit slower (I'll forgive them for this).
        Anyway, that 24Mbit is max speed - but most IPs I connect to don't give me that throughput.
        Now I could blame my ISP for not peering properly to the backbone, but that's only half the problem. There's the other leg from the backbone to the B-end.
        You connect to a server with 10M NIC, or even
      • If I'm offered 5Mbits/s from my cable provider, that is an obligation for them to fill my order. If they can't fulfill my expectations, then they shouldn't have offered the service to begin with.

        If your ISP didn't oversell at all, they might not be able to offer you even 500Kbps, much less 5Mbps. Worse than that, there would be a tremendous amount of bandwidth being wasted at any given time. Overselling to a certain ratio makes a lot more sense.
      • by rekoil ( 168689 ) on Friday February 23, 2007 @08:39PM (#18130274)
        Let me try to explain the problem from the ISP side (pardon me while I don Les Asbestos Underpantz)...

        What we're seeing is the hazard of changing oversubscription ratios. I'm sure this term is familiar to many of you, but for those who aren't: it's the concept that ISPs know that, on average, each customer will only use a certain portion of the bandwidth that's made available to them. As such, an ISP doesn't have to provision one megabit of backbone capacity for each megabit it sells to a consumer; it might only have to provision at a 1:10 or 1:50 backbone-to-access ratio (toy numbers at the end of this comment). There's no way that an ISP could sell bandwidth at a reasonable price without oversubscribing at some point. Without oversubscription your 1.5Mbit DSL line would be $500 a month, not $50. Those in the business know I'm not exaggerating here, given the cost of service provider network equipment and fiber capacity (which continues to fall, but not nearly fast enough).

        What's causing the problem is that those ratios are changing, such that (for example) the 1:10 ratio an ISP built its business model around is now 1:5, thanks to YouTube, iTunes, BitTorrent, WoW, etc., not to mention 0wned machines spewing spam and DoS traffic. That is overtaxing infrastructure and increasing costs. The ISP can't get away with raising prices, and obviously has to remain profitable, so congestion is the inevitable result.

        Some ISPs, most notably Comcast, have gotten quite aggressive about disconnecting what they perceive as "abusive" customers whose usage is higher than the norm. This is absolutely the wrong way to go about the problem, but the feeling of being between the proverbial rock and a hard place is understandable. ISPs simply can't stay in business if customers actually use all the bandwidth they're given, and if we all built our networks such that everyone could, no consumer would pay for it.

        I think it was 1996 when AOL introduced its unlimited dialup service (before that, AOL billed dialup connection time by the hour). Because the user who had been spending an average of, say, 30 minutes a day online was now spending 3 hours a day connected, and because AOL woefully misforecast those ratios, it became next to impossible to connect to AOL for quite a while until they caught up with modem provisioning (That's when I got rid of my AOL account and got my first real ISP account, yay!). Looks like everything old is new again.
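
        To put rough numbers on those ratios, here's a toy calculation in Python. The per-megabit backbone cost is invented purely for illustration (picked so the 1:1 case lands near the ~$500/month T1 figure above); real transit and longhaul prices vary a lot:

          # Toy oversubscription math -- illustrative figures only.
          SOLD_MBPS = 1.5              # advertised speed per subscriber
          SUBSCRIBERS = 10_000
          COST_PER_MBPS = 300.0        # assumed backbone $/Mbps/month (made up)

          for ratio in (1, 5, 10, 50):     # 1:1 means no oversubscription
              provisioned_mbps = SOLD_MBPS * SUBSCRIBERS / ratio
              cost_per_sub = provisioned_mbps * COST_PER_MBPS / SUBSCRIBERS
              print(f"1:{ratio:<2} -> {provisioned_mbps:8.0f} Mbps provisioned, "
                    f"~${cost_per_sub:.0f}/subscriber/month")

        At 1:1 the backbone cost alone is in T1 territory ($450 here); at 1:10 it drops to consumer pricing ($45). A drift from 1:10 toward 1:5 doubles the backbone cost per subscriber, which is exactly the squeeze described above.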
        • Re: (Score:3, Insightful)

          by nick.ian.k ( 987094 )

          The thing is that overselling is selling more than you've (well, the ISP, not *you*) got and it shouldn't be happening in the first place. Playing this game of "we'll see if we can upgrade as real live usage increases and if we don't, no big deal" is a joke. It's about as stupid as (put your reduction safety hats on, it may not map well!) floating checks: sure, it's pretty likely that check from person A is going to clear in time for the check you wrote to person B to go through alright, despite the present

          • by rekoil ( 168689 ) on Friday February 23, 2007 @09:28PM (#18130600)
            Every ISP oversubscribes at some point. It's a fact of the business. It's built into the competitive environment, and if you know anything about longhaul capacity and network hardware costs (I'm looking at you, Cisco), you'd know that moving a megabit of traffic across the country costs *much* more than delivering a megabit of capacity from an ISP's edge routers to your home. They have to play the averages, counting on only having to move a tenth or so of the sold capacity as actual traffic. As I said above, if you really want a non-oversubscribed link, be prepared to pay $500 and up a month for it. In fact, that's how much Verizon Business, AT&T, and the other "Business Class" providers charge for a T1 circuit, which is 1.5Mbits of non-oversubscribed bandwidth.
            • Re: (Score:3, Informative)

              You're both off by quite a bit. AT&T sells T1s for $350 these days. Contractually, you get FULL capacity 24/7.
        • So what do you suggest? Should the providers charge YouTube for throwing the oversubscription model out of whack? You realize that very few of the popular sites would exist today if we had always had the kind of "toll road" networks that the anti-net-neutrality lobby wants, don't you? The primary difference between the Internet and the Compuserves and AOLs back then is that a content provider does not have to negotiate with all end user providers.

          If the net needs to be upgraded, the user will pay for it eit
        • by chill ( 34294 ) on Friday February 23, 2007 @09:38PM (#18130654) Journal
          Here's a summary of your argument: "Networking is hard. We have to lie to sell it to people."

          There is an easy way out of this. Stop lying to your customers.

          Stop having big, flashing 8 MBPS INTERNET CONNECTION ads with teeny, tiny print on the bottom that says, basically "If you're the only one on, at like 3:05 a.m., if we're not working on something. Oh, and your upload bandwidth is only 384 Kbps."

          Don't fucking whine to me how hard it is. You idiots made it hard by LYING YOUR ASSES OFF about what is being sold. You made your bed, lie in it.

          [For the record, I'm a telecom engineer for a major equipment manufacturer, so I'm intimately familiar with the costs, equipment and issues. I just don't like lying ISPs *cough* Comcast *cough*.]
        • by Bananatree3 ( 872975 ) * on Friday February 23, 2007 @09:41PM (#18130680)
          The internet is essentially one giant stadium, and computers are the toilets in this stadium. When all the toilets go flush! at the same time, the sewage pipes inside the stadium walls can't handle it, and so they burst.
        • by mysticalreaper ( 93971 ) on Friday February 23, 2007 @09:45PM (#18130712)
          Thank you for the excellent explanation of how things work from the ISP side. However, I think you've undermined the ISPs' case by citing what you did when AOL started to suck:
          because AOL woefully misforecast those ratios, it became next to impossible to connect to AOL for quite a while until they caught up with modem provisioning (That's when I got rid of my AOL account and got my first real ISP account, yay!). Looks like everything old is new again.
          (emphasis mine)

          This is exactly the point. If Qwest is starting to offer shitty service, it's preposterous to blame the customer and then talk as if the internet itself is breaking because of these damnable users.

          If company A is not capable of delivering a good product, I'm sure company B will have something you'd be more interested in.

          Following this logic, you come back to the situation that many of us in Canada and the US are faced with: lack of competitive choices for an ISP, resulting, in this case, in shitty service being blamed on the customer. I hope enormously that no one in government is buying this tortured logic and making policy decisions based on it.
        • Re: (Score:3, Insightful)

          by asuffield ( 111848 )

          There's no way that an ISP could sell bandwidth at a reasonable price without oversubscribing at some point.

          I disagree. ISPs are perfectly capable of selling bandwidth at a reasonable price. The problem is that they are currently selling unreasonable packages, where the price is far too low for the advertised capacity. That's not because they've set their prices too low, but because they wanted to advertise larger capacity - so they just made the numbers bigger by lying about them. The result is an ISP that

      • by SeaFox ( 739806 )

        If I'm offered 5Mbits/s from my cable provider, that is an obligation for them to fill my order. If they can't fulfill my expectations, then they shouldn't have offered the service to begin with. If telco XYZ is getting bitten for overselling their lines, that sure as hell isn't my problem as a consumer. What I do with my 5Mbits/s is my own business. I could use the internet to check my email (10kb), or surf the web a while (2MB), or download a YouTube video (200M?).

        Maybe the problem is too many providers off

      • My approach is, my internet connection is much faster than I need for just about anything I do. So people can shut up with their money-driven claims of how we need to do blah, bleh, and blih, until I see real indicators. Real data would be usability data collected from people with service. "What connection rate do you pay for? Do you consider it to be sufficient? Are you happy with the connectivity that you have?", etc.

        It's not like we can't easily get that kind of actual data and it's not like we d

    • by AvitarX ( 172628 ) <me&brandywinehundred,org> on Friday February 23, 2007 @07:36PM (#18129666) Journal
      Since the article has a quote about it, here's specifically WHY I am against "Net Neutrality": under it, the ISP has no control over throttling particular sites or protocols that can have major negative effects on the overall user experience. I've already noticed some network slowdowns, though in the past 60 days I dumped broadband and now rely primarily on my EDGE connection from T-Mobile (200kbps). Latency isn't too shabby. When I use the T1 at the office, though, I have noticed some slowdowns.

      "net Nutrality" does not prevent throttleing ports. It would even allow bandwidth capping from video sites if the policy was #GB/site or something. It does not allow the site to get improved performance by paying money or partnering or being owned by the provider. the only way a site or protocal would get better performance would be by the user paying extra. (a lot like what you describe).
    • by bky1701 ( 979071 ) on Friday February 23, 2007 @07:38PM (#18129690) Homepage

      The solution isn't just more bandwidth. We're not talking about more users accessing the same sites; we're talking about more users accessing more sites -- significantly more. The "long tail" of the web is exploding in access: all the blogs, vlogs, MP3 downloads and videos are spread across a huge, disparate group of sites.
      Say what? Up and down connections are technically the same load to process in most cases. I don't know how you think that somehow use != bandwidth, unless you want to talk about IPs.

      The solution is to nix net-neutrality legislation and allow the consumer and the producer to come to terms on need versus price.
      Whooh, it is? As I said above, I don't think you understand the "problem"; how can you know the magic fix for it?

      At home, I'd be more than happy for a Port80/Port110 prioritized connection, with other ports reduced in speed or performance. Sure, videos come over Port80, but the vast majority of cable users in my neighborhood are downloading torrentz and other similar protocols.
      That's the most idiotic thing I've ever heard. Do you realize that if that were done, torrents would just start using ports 80/110?

      but that is why the FCC needs to back off on over-regulating the opportunity for competitors to enter the market.
      That may be a great idea for cars, pop, and beans, but not for something that is inherently a monopoly or near-monopoly. Holding such a position comes with responsibility. No matter what your pseudo-free-market ideals say, it doesn't stop being a monopoly just because of those responsibilities.

      The Internet doesn't really have "bandwidth" limitations, because all it takes is more ISPs and more backbones to come into being. If the pro-Net Neutrality parties have their way, though, we may see significant restrictions in investment on both those fronts.
      Where did you get that one, other than a dark spot?

      Sidenote: That damned GoogleBot sometimes hits my sites 5000 times a day -- maybe Google is doing a little more to aggravate the problem than they want to admit? Thankfully I use server-side compression and caching, so things aren't hammered too bad by the bot, but there have been times when things on my end were running slow and I had 100 "Guests" all registered at Google's IPs.
      Google index bots only read text, not images or video. 5000 GoogleBot hits are probably the same load as one normal view. I am not sure about image bots, but I've noticed that they normally run far behind the main index.

      If we need any major internet change, it's nationalizing it. I don't see what's wrong with it right now, other than some people crying that they will not make enough money (like all companies), people stating that somehow it is making it hard for new companies to be started, or people saying that there is a big dark technical problem looming over it, waiting to kill us all. None of this is news.
      • by rthille ( 8526 )

        You're wrong about Google index bots. Not sure if you've noticed, but you can search for images on Google now.
        They had a bug where they would constantly retrieve a PDF file I've got on my site. The only thing I can think of is that they failed processing it after they retrieved it and assumed they needed to get it again. So they would constantly hammer my server, sitting at the wrong end of a slow DSL line. I notified them and they sent me a polite email saying they'd fix it, but after awhile they (t
        • by bky1701 ( 979071 )
          I didn't think of PDF-scanning, thanks for pointing that out.

          But I DID say they don't index the images all the time. Google Images always lags much further behind in index updates than Google Search, so I would guess they send the image-gathering bots out less often. I would like to experiment sometime by logging where bots go on my website... that would be interesting to see. Then I could point people to actual percentages of bot traffic downloading images. :)
      • That's the most idiotic thing I've ever heard. Do you realize that if that were done, torrents would just start using ports 80/110?

        BitTorrent doesn't need low latency. Web traffic generally does. Prioritizing web traffic, if done properly, wouldn't change anything about use of BitTorrent except imperceptibly increased latency.

    • by Doc Ruby ( 173196 ) on Friday February 23, 2007 @07:38PM (#18129696) Homepage Journal
      Studies of actual traffic congestion mitigation techniques have consistently demonstrated that increasing capacity is a much cheaper and more reliable remedy than QoS on backbones, with the extra benefit of the raw capacity itself. The "quality traffic to quality networks" approach would require a whole extra architectural layer to route through several different Internet links on realtime route-quality decisions, rather than leveraging the full capacity of the Net to route anywhere at any time based on local congestion conditions or other overall strategies.

      These whines are in fact "special consideration" pressure for the telcos to get "Net Doublecharge". They don't need service tiers, but they can use them to demand distant endpoints pay protection money. If they can get the protection money from the government, their favorite source of subsidy and protection for over a century, they certainly will. Especially if they've already used up the capacity for private accounts (people) to pay them directly, which makes them look less competitive.
    • Sidenote: That damned GoogleBot sometimes hits my sites 5000 times a day -- maybe Google is doing a little more to aggravate the problem than they want to admit? Thankfully I use server-side compression and caching, so things aren't hammered too bad by the bot, but there have been times when things on my end were running slow and I had 100 "Guests" all registered at Google's IPs.

      Configure robots.txt or move your servers off port 80 if you don't actually want visitors to your site. If Google is thrashing
    • Re: (Score:2, Insightful)

      by hedwards ( 940851 )
      You do raise some interesting ideas, but wouldn't it make more sense just to fix the spam problem?

      Right now spam takes up an inexcusably large portion of the internet's capacity with meaningless, useless, annoying tripe. (Well, to be fair, spam taking up any portion of the capacity is appalling.)

      The main issue I have with giving up net neutrality is the question of who gets to decide what is high priority and what is low priority. If I got some say in how it was divvied up, that would be much less annoying.
      • Not only spam but also the DDoS attacks.

        The ISP knows the IP addresses on their network. There shouldn't be any reason for a forged packet to go out over their routers.

        Right now, there is no reason why ISPs cannot charge different rates for blocking/opening outbound connections on port 25. The average home user won't be running an SMTP server INTENTIONALLY and will happily take a $5 per month savings for having it blocked.

        There, two of the worst problems on the Internet are significantly reduced. THE
        • Most DDoS attacks today don't even attempt to forge IP packets - they just overwhelm with legitimate but unwanted traffic.

          The old spoofed source thing started going away in 2000/2001. It's now quite rare in most network environments.
          • If the packets aren't being forged, then it's not difficult to identify them and block them at an upstream router.

            As long as the ISP has decent routers and people capable of correctly configuring them.

            And that allows one ISP to talk to the other ISP and provide them with a list of addresses and times so that those customers can be notified or other action taken.
    • The solution is to nix net-neutrality legislation and allow the consumer and the producer to come to terms on need versus price.

      That's not what net neutrality is aiming to regulate. Net neutrality is about the structure of the business relationships, not the content as such. The current situation is that customers pay the providers to which they connect. Providers have peering agreements. Small providers pay bigger providers, providers of equal size have cost-neutral agreements. If a provider can't satisfy
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      You show an astounding lack of knowledge regarding network operations. People like you, with strong opinions one way or the other based upon an imagined understanding of "how things work," are the real problem. You should seriously consider withholding judgment about these kinds of issues until you can make an informed decision.

      I'd be more than happy for a Port80/Port110 prioritized connection

      Seriously now, what do you even think that means? Please get a clue.

      Here's one for free: There is nothing preventing

    • by troll -1 ( 956834 ) on Friday February 23, 2007 @08:49PM (#18130346)
      The solution is to nix net-neutrality legislation and allow the consumer and the producer to come to terms on need versus price.

      You're not by any chance a lobbyist for the anti-net-neutrality advocates, are you?

      Net Neutrality is not a business concept; it's based on a theory in computer science that the most efficient and cheapest networks are those built on the principle that protocol operations (i.e. TCP/IP) should occur at the end-points of the network.

      See "End-to-end arguments in system design" by Jerome H. Saltzer, David P. Reed, and David D. Clark: http://web.mit.edu/Saltzer/www/publications/endtoe nd/endtoend.pdf [mit.edu]

      This principle was used by DARPA when it worked on Internet design and it's the reason TCP/IP communications have experienced massive growth.

      It's a principle supported by almost everyone except the backbone owners. Verizon's CEO has said many times that the pipes belong to him, and that if you're going to make a profit off them then he wants a cut too (referring to Google, Yahoo, Microsoft, et al., who back Net Neut).

      Compare with mobile carriers, who don't follow the principle of network neutrality: you pay more for cell phones, which use a zero-cost medium (the airwaves), than you do for the Internet, which uses an expensive wired system. And there, every service is separately billable. Is that the network of the future you're suggesting is better for us?

      I wouldn't be so opposed to your argument if I could be convinced the telcos weren't running a gnarly scheme to make my ISP bill look like my cell phone bill.

      The net has been so successful perhaps because it was designed and developed in large part not by private companies, but by scientists and engineers in an academic environment who were mostly employed by the government. Profit was not their goal. You want to give it over to the business folks because you think they can do a better job if they're involved in how the Internet continues to evolve?

      Be careful what you wish for. I'm not necessarily disagreeing with you. But what worries me the most about non net neut is that we're going to be giving companies a large hand in determining not just how the Internet will look in a few years, but ultimately a lot of power to influence how it develops further down the road. I say we tread carefully.
      • Re: (Score:3, Insightful)

        by TubeSteak ( 669689 )

        Compare with mobile carriers, who don't follow the principle of network neutrality: you pay more for cell phones, which use a zero-cost medium (the airwaves), than you do for the Internet, which uses an expensive wired system.
        1. Cell providers have to pay for a license to use those airwaves
        2. You don't think those cell phone towers put themselves up do you?
    • Instead of the "tubes" analogy, let's compare Internet to a restaurant.

      Bandwidth is food. Applications that eat up bandwidth are customers eating the food.

      Current situation:
      - The restaurant advertises an "all-you-can-eat buffet" for $XX.XX
      - Telcos advertise a "24mbit connection" for $XX.XX

      What happened:
      - As people get more obese, more and more people are buying the "all-you-can-eat buffet" option. In fact, more people are buying it than the buffet has food for. The buffet gets empty before all custo
    • by geekoid ( 135745 )
      "I don't see a reason why everyone should pay the same price for different service."

      That's not net neutrality.

      If an ISP wanted to charge different tiers based on port or usage, they are free to do so. The market doesn't want that sort of service. The FUD makes it seem like this is what net neutrality is, when it is really an excuse to force the market in a direction it doesn't want to go.
  • First of all, I've been hearing that the Internet is going to collapse about once a year since '97. So I'm not going to believe it until I hear my DSL modem crying in agony.

    Second of all, eliminating net neutrality would make the problem worse. Why? Because it would have all these companies using complex routers to figure out how to prioritize all that data. The limitations expressed here are not bandwidth limitations, but processing-power limitations. It's about routing. Routing packets is a shitload easier when
  • The answer is... (Score:3, Insightful)

    by markov_chain ( 202465 ) on Friday February 23, 2007 @07:20PM (#18129458)
    Yes!
  • by KingSkippus ( 799657 ) * on Friday February 23, 2007 @07:20PM (#18129472) Homepage Journal

    Qwest is one of the companies speaking out [com.com] against net neutrality. The CEO even went as far as to call it "really silly [chicagotribune.com]." Could it be that the CTO's comments are politically motivated?

    I, for one, think so.

    • by killbill! ( 154539 ) on Friday February 23, 2007 @07:27PM (#18129552) Homepage
      It's pretty obvious the whole purpose of the article is to drum up support for ending the net neutrality rule.

      From the article:
      Backed by several consumer groups as well as large Internet enterprises such as Google, network neutrality legislation forbids phone companies from managing the network to favor one Internet user's content over another's.

      Notice how the article ends on the tired "it'll be good to the consumer" strawman:
      underinvestment by some operators may "drive quality traffic to quality networks."
  • Here's an idea (Score:2, Interesting)

    by killbill! ( 154539 )
    Easy fix: systematic caching of bandwidth-intensive content at the ISP level.

    Disclaimer: I'm currently working on such a project. ;)
    • by treeves ( 963993 )
      why should we believe you? You're an impostor.
    • by b0r1s ( 170449 )
      That's not really an easy fix... There are dozens of video sites with millions of videos each. Most ISPs don't have the resources to cache the number of distinct files we're talking about.

      10GE, 40GE, 100GE -- the capacity will grow when the money makes sense. Even small video sites [vobbo.com] push terabytes of traffic per month; expecting a full caching model to work is almost silly. There's a certain benefit for a small set of large, popular files, but that's not what's causing the problem - it's the sheer number of obsc
      • Google, with its dark fibre network and mega data centres, ought to be offering ISPs fast mirrors, as local and quick as possible, for its two video sites. It's another argument against NN: if the biggest sites want to attract such bandwidth-hungry users to their ads, they should make provisions for it and not clog up the network. They may do this already, of course. But it shifts the emphasis onto the company providing the popular service rather than the one just delivering it.
    • by hurfy ( 735314 )
      And I want to do YouTube meets P2P ;)

      Not that it will help as much as it should, seeing as my house is 2 blocks away and the internet goes like 3000 miles to get there.... wtf?
    • Easy fix: systematic caching of bandwidth-intensive content at ISP level.

      So, basically, just usenet in favor of torrents?

  • The main bottleneck is the link from the ISP to the user.

    -uso.
    • No it isn't.

      Cable modems and DSL modems can both provide roughly 10Mbps of bandwidth. That's a LOT.

      But how many websites can simultaneously provide EVERYONE ON THE PLANET WITH AN INTERNET CONNECTION with 10Mbps download speeds? The answer is none.

      Remember, information on the internet flows from a relatively small number of servers to a HUGE number of end-users.
    • by billstewart ( 78916 ) on Friday February 23, 2007 @08:02PM (#18129984) Journal
      The real net neutrality arguments started happening when the telcos started deploying high bandwidth to customers' homes in order to deliver television over it (especially when their execs started making boneheaded remarks, but there really are technical issues).

      The estimated bandwidth required for television is about 15 Mbps/house, to support a 9 Mbps high-def channel and a few low-def channels at the same time. The various high-speed ADSL flavors mostly get about 20-50 Mbps depending on the distance from your house to the green concentrator box, and there are similar bandwidth constraints on cable TV modem concentrators. The green box has fiber back to the telco office, and a typical telco office handles 10K-100K houses. Fiber-to-the-home systems have more bandwidth from the box to your house, but there's still typically around 25 Mbps per house between the box and the telco.

      So if everybody's watching TV at 8pm, and they're all watching different channels, the telco office needs somewhere between 150 gigabits and 1.5 terabits per second (arithmetic spelled out at the end of this comment). That's *way* more than it's getting today. After all, TV watching has much different statistics than either traditional Internet web+email content or even occasional Youtube watching - it's full bandwidth for a couple hours of primetime.


      On the other hand, if the video signals are coming in as television-style content that's multicast, an OC48 2.4 Gbps feed could handle something like 200 high-def channels and 300 low-def channels. Internet-style multicast might or might not be able to handle it - as you start getting more people subscribing to content, it's going to hit the wall and choke at some point. On the other hand, if the telco or cable modem company manages it like a cable TV company selling channels, they can make sure everybody's got access to the "500 channels and nothing's on" vast wasteland of American television, and it'll work. It's not net neutrality, it's cable TV, but it works. There are hybrid models possible (e.g. the telco makes sure there's 100 channels of basic cable subscribed to the multicast feeds and the rest is first-come-first-served, with equipment enforcing the number of channels that get carried so it all fits in the telco office's available feed), but it's not clear that the telcos know how to sell that sort of thing. On the other hand, if they do too good a job of emulating the cable TV business, everybody's going to ignore them and use satellite dishes plus Youtube and Bittorrent.


      The real trick with net neutrality is going to be getting the telcos to realize that they should sell you the non-TV part of the new bandwidth they're deploying as Internet bandwidth, with a pricing model different from "it's twice as big as your current bandwidth so we'll charge twice as much".
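
      For anyone who wants to check the arithmetic above, here it is in Python (the ~2 Mbps low-def rate is inferred from the parent's channel counts rather than stated):

        # Peak unicast demand: every house watching its own streams.
        PER_HOUSE_MBPS = 15
        for houses in (10_000, 100_000):       # houses per telco office
            tbps = houses * PER_HOUSE_MBPS / 1e6
            print(f"{houses:7d} houses x {PER_HOUSE_MBPS} Mbps = {tbps:.2f} Tbps")

        # Multicast instead: one copy of each channel, however many viewers.
        HD_MBPS, SD_MBPS = 9, 2                # SD rate inferred, not stated
        lineup_mbps = 200 * HD_MBPS + 300 * SD_MBPS
        print(f"200 HD + 300 SD channels = {lineup_mbps} Mbps (~one OC-48)")

      That's 0.15-1.5 Tbps of unicast demand versus 2.4 Gbps for the entire multicast lineup, which is the whole case for treating broadcast TV as broadcast.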

    • by g-san ( 93038 )
      I thought it was between the user and the keyboard?
  • Bandwidth can always be bought (but latency can't)
  • by Twillerror ( 536681 ) on Friday February 23, 2007 @07:40PM (#18129716) Homepage Journal
    The internet will continue to grow in capacity, and as it does, new products will come out to fill the void.

    My biggest issues with YouTube are at work in our main office. We have a large application hosted in a data center that is a major hub for internet connectivity for the region. Given that we are so close to some big vendors, we can get lots of bandwidth for relatively low prices. If my employees were sitting in that facility they could surf youtube.com all day.

    Now at home I can also do it. I pay Comcast a bit more for the extra bandwidth and I can download at over a meg a second from some sites. Verizon is going to be laying fiber directly to houses and businesses soon.

    Get into our offices and it is a different story. We have dual T1s coming in and only 60+ employees, but we are constantly saturated. Combine that with the fact that Cisco PIXes have horrible throttling support, and you end up with times when I can't even access basic websites very quickly. The issue here is that T1s and DS3s are freakin' expensive compared to a simple cable modem. We have been tempted to get Comcast business (which makes me shiver a bit) because I can get larger down pipes for general internet surfing. We only host a few services such as email here, so it isn't like we need megs of upstream bandwidth.

    Throttling would go a long way toward solving this issue. YouTube could buffer people down quite a bit; you would just have to wait for the movie to buffer a little. For shared internet connections and ISPs this could allow for better QoS.

    Distribution models will help a lot. YouTube should have replicated servers in major markets. As more players get into the video game I'm sure they will be setting up shop in several areas. Video doesn't change that much, so when one person uploads it can be replicated throughout the network. You can still host the main links from a centralized place, but then stream the video from the closest location as it becomes available. This takes all the traffic from the west coast and keeps it there, keeping people from the Midwest from saturating the big pipes that connect the regions. Fewer hops also means less latency, which is good for everyone.

    People have been saying this same thing forever. Telecom companies are just afraid of admitting that they can't charge up the ying-yang for DS3s anymore. They are also going to have to invest in their networks, which their shareholders hate. It is also the local telcos that irritate me. Although dealing with Sprint is no treat, dealing with SBC/ATT/other mama Bells is a huge pain.

    Networks are distributed by nature, so it just means you can't pipe all the data through centralized routers. You are going to have to set up an infrastructure that can do very basic routing in a spider web. You can route packets very quickly if you just look at the first octet... and forward along to another router. All 1.xxx.xxx.xxx through 5.xxx.xxx.xxx can be piped to a router that knows about those routes, and even breaks it down further (see the sketch at the end of this comment). If you think about it they don't even need to do that; they can just take the packet and load-balance to many other devices. I think it'll be a while before we can't route faster... it is not like faster switching rates are completely dead.

    If anything video is just forcing the issue of increasing the capacity, which will always need to grow. Eventually we will be streaming high end video content, and this article will be a long forgotten joke.
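
    Here is that first-octet idea as a toy sketch in Python. The next-hop names are hypothetical, and real backbone routers do longest-prefix matching on far more specific routes in hardware; this only illustrates the coarse first stage described above:

      # Hypothetical /8-granularity forwarding table: each first octet
      # maps to a next-hop router that knows the more specific routes.
      NEXT_HOP = {}
      for octet in range(1, 6):          # 1.x.x.x through 5.x.x.x
          NEXT_HOP[octet] = "router-A"
      for octet in range(6, 128):
          NEXT_HOP[octet] = "router-B"
      for octet in range(128, 256):
          NEXT_HOP[octet] = "router-C"

      def forward(ip: str) -> str:
          return NEXT_HOP.get(int(ip.split(".")[0]), "default-gateway")

      print(forward("4.2.2.1"))          # -> router-A
      print(forward("208.77.188.166"))   # -> router-C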

    • by adamruck ( 638131 ) on Friday February 23, 2007 @07:56PM (#18129916)
      I don't understand why people keep equating T1's to fast internet. Your office has the equivalent of about 50x dialup connections for about 60 people. It doesn't take a veteran sysadmin to understand why that is a problem.
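
      The parent's arithmetic, spelled out in Python:

        T1_KBPS, DIALUP_KBPS, EMPLOYEES = 1544, 56, 60
        total_kbps = 2 * T1_KBPS            # two T1s ~= 3 Mbps
        print(total_kbps / DIALUP_KBPS)     # ~55 dialup modems' worth
        print(total_kbps / EMPLOYEES)       # ~51 kbps each: dialup speed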
    • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday February 23, 2007 @08:28PM (#18130200) Homepage Journal
      Everything - from the replication of databases or file storage to the distribution of high-end video - is delivered on a point-to-point basis. This simply does not scale. It is inefficient, it is expensive, it is wasteful, it is... so mindbogglingly primitive. Point-to-point was great when you had telephone switchboard operators. In the days of scalable reliable multicast (SRM) and anycast, when the Internet backbone runs multicast protocols natively (there has been no "mbone" per se since 1996), it is so unnecessary.

      Even if you limit yourself to replicating the distribution points with a protocol such as SRM or NORM (NACK-oriented Reliable Multicast), you eliminate huge chunks of totally unnecessary Internet traffic. However, there is no reason to limit yourself like that. The latency involved over long-distance Internet connections must exceed the interval time between requests for high-demand video content, so by simply waiting a little and collecting a batch of requests, you can transmit to the whole lot in a single go. No need for a PtP connection to each.

      Then there is the fact that video is not the only information that eats bandwidth for breakfast. Static content - PDFs and other large documents - also devours any surplus capacity. So all an ISP needs to do is run a copy of Squid on each incoming line. How hard is that? It takes - what - all of 10 minutes to configure securely and fire up. You then forget about it.

      There are people who would argue that it would impact banner ad revenue. Well, banner ad revenue is typically per-click, not per-view, so that is really a weak argument. Then there is the problem of copyright, as the cache is keeping a copy of text, images, etc. Well, so is your browser. Until a major website sues a Firefox user for copyright infringement for leaving their local cache enabled, it would seem that this is more paranoia than practical concern. As writers have noted for many centuries, we need fear nothing but fear itself. It is our fear of these solutions that is creating our existing problems. It seems the height of stupidity to create real problems for the sole purpose of avoiding problems that might be entirely fictional. "Better the devil you know" is a weak excuse when the devil you know is unacceptable.

      • Mod Parent Up! Most insightful article in the entire discussion so far.
    • You appear to be blaming the Internet for the results of having 60 people on a ~3Mb pipe, when in reality the problem is that you have less than 1/20th the bandwidth per person than they do at home. Of course it's going to be slow. As usual, the problem in the U.S. is incredibly expensive client connections. If the Internet was the problem, you wouldn't be able to send your email or even get to youtube because everyone else would be clogging all the bandwidth. Throttling your tiny bandwidth probably won't h
    • Dual T1s? Is that you, gramps? Is it 1996? Get some fibre, for God's sake! OC3s are what, a few grand a month? For 60 employees doing internet-related business that's nothing!

      Either that or put in a DNS entry for YouTube of 0.0.0.0, or block it on your fancy Ciscos. Honestly, if you are at work you should have no expectation of YouTube working. I can think of no honest way that watching YouTube could be considered work.

      Get into our offices and it is a different story. We have dual T1s coming in and only 60+ employees, but we are constantly saturated. Combine that with the fact that Cisco PIXes have horrible throttling support, and you end up with times when I can't even access basic websites very quickly.

      Sounds like you need to get yourself a halfway decent QoS box and start throttling traffic on your end. Throttling traffic at the ISP level isn't going to do a damn thing for you.

      Better throw in a content cache in there, t

  • This is kinda silly. On the fiber side, a pair of fibers is rarely used to transmit more than about 40 Gbps, fiber has been proven to handle speeds closer to a terabit, and it's trivial to run multiple fibers in parallel. We won't run out of fiber capacity on the trunks this century, let alone this year.

    The equipment side is a little harder, but only a little. It turns out it's relatively hard to switch more than 10 Gbps. Doable, but hard. So what? If A connects to B, B connects to C and B is overwhelmed with too m
  • Backbone ISPs that can't keep up have to either upgrade or lose business. Same with local ISPs. This is called a "free market".

    What needs to happen is a two tiered bandwidth scheme, sort of similar to the local-vs-long-distance telephone issues.

    1) incredibly fast access from the ISP to their customers (similar to local phone service).
    2) slower access to other ISPs.

    It is insanity that I pay one price for relatively slow DSL that works the same whether I am connecting next door or to Japan. We shou
    • I've often wondered why the ISPs don't allow uncapped speeds if you don't go out of their network. It wouldn't cost them anything extra and I'm sure marketing could manipulate it to bring in more customers.
  • Does anyone else remember "way back when" in the mid-90s the internet would start to drag around lunch time and again around dinner time? Somehow, I don't think we'll swing back to that point, but the whining in the article sure seems fearful.

    "2007 may be the year of the tipping point where growth in capacity cannot cope with use," Tansley said.

    OH NOES!!!
    • Does anyone else remember "way back when" in the mid-90s the internet would start to drag around lunch time and again around dinner time? Somehow, I don't think we'll swing back to that point, but the whining in the article sure seems fearful.

      Clearly the entire Internet, the world wide communications network, was sensitive to your local time zones. Did it happen to respect Daylight Savings Time changes in its daily slowdown as well? You don't suppose that might have been localized or anything?
  • by zorkmid ( 115464 ) on Friday February 23, 2007 @07:49PM (#18129816)
    Raise prices until the underclass can't afford it. Then they'll drop off and stop clogging my intraweb tubes.
  • Consolidation of ISPs and centralization by telecoms have crippled the Internet. In the past, redundant routes and competing ISPs were commonplace. Today, everybody wants to play gatekeeper and charge for traffic. Fewer routes & pipes = less bandwidth & redundancy.

    For the record: ISPs are making plenty of money on YouTube traffic, they simply want to ALSO charge the consumer. How many $millions per month does YouTube pay? How many $millions per month do broadband subscribers pay? That, we're led to b
  • by strangedays ( 129383 ) on Friday February 23, 2007 @08:00PM (#18129966)
    This appears to be yet another astroturfing attempt.
    See Slashdot post: "How Would You Deal With A Global Bandwidth Crisis?" Posted by Zonk on Thursday February 15, @06:19PM
    http://ask.slashdot.org/article.pl?sid=07/02/15/1825230/ [slashdot.org]
    Clearly we are going to be treated to this bogus bandwidth-crisis bullshit approximately once a week, probably to collect some supportive comments about the need for more control/cost/etc.
    Please don't feed the trolls, or help them lay more Astroturf against Net Neutrality.
  • by mr_mischief ( 456295 ) on Friday February 23, 2007 @08:05PM (#18130000) Journal
    When he's concerned about bandwidth demand outstripping computing power, that's not a fiber count problem. That's a router problem. He's saying the routers aren't gaining capacity to route packets as quickly as the number of packets to route is rising.

    No amount of extra fiber will help if the routers can't keep up. Setting up more routers in the same interconnect centers will bring either bigger routing tables or higher latencies depending on how they're connected to one another. Setting up more interconnects which are more geographically dispersed and which route more directly between the endpoints will help, but that's a very expensive option. New buildings in new areas with new fiber running to them and new employees to man them simply cannot be made into a small investment.

    Mesh networks, P2P content distribution, caching at the local level, multicasting, and some other technical measures can all theoretically help, too. So can spreading out the data centers of the big media providers and routing their traffic more directly that way, but again centralization of data centers saves a lot of money.

    If demand is really growing too fast to handle (I have my doubts about the sky actually falling) one of the best ways to assure that bandwidth demands are met is to slow the increase in demand. The quickest and easiest way to slow increase in demand for something is to raise its price. That's an ugly thought for all of us on the broadband price war gravy train, but it's basic economics. Let's hope for a technological solution (or a group of them) instead, if it's really a problem and not just hype to hit our wallets in the first place.
    • by Detritus ( 11846 ) on Friday February 23, 2007 @08:39PM (#18130272) Homepage
      Computing power isn't really the issue either. Routers do not have to be designed around general-purpose computers. I've written software for systems based upon 1970s technology that could process multi-megabit data streams. The key was clever design and architecture, with a dose of custom hardware for things that were impractical to do with software.
  • by deckert_za ( 837816 ) on Friday February 23, 2007 @08:08PM (#18130028)
    I live in South Africa. Like Australia we're geographically far away from most of the "internet content", but unlike Australia our bandwidth costs are astronomical (mainly due to a telecoms monopoly) on the thin fibre links that we have.

    But because of the bandwidth situation, most SA ISPs have invested in massive cascaded caching infrastructure all over the country and at the so-called logical borders where the links exit to the US, Europe and Far East. I continually monitor HTTP headers to check the cached status (a sketch of the check appears below), and easily 70% of the regular content I surf comes from one of the local caches.

    Even websites within South Africa are reverse-cached, i.e. the ISPs put caches in at the foreign landing points to speed up access (and lower return bandwidth costs) to foreign surfers.

    I sometimes think that the rest of the world has forgotten about caching due to the apparent abundance of bandwidth available in those countries. Maybe we'll see caching return to popularity?
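
    A minimal sketch of that header check in Python, using the third-party requests library. Which headers a given cache sets varies by product (X-Cache and Age are just common ones), and the URL is a placeholder:

      import requests

      resp = requests.get("http://example.com/")   # any frequently fetched page
      for header in ("X-Cache", "Age", "Via"):     # typical cache fingerprints
          print(header, "=", resp.headers.get(header, "<absent>"))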

    --deckert

  • The front page of the Business section of the Chicago Tribune has a graphic showing the burgeoning use of the internet over the last decade -- the trouble is I think it is off by several orders of magnitude. The graphic is labeled something to the effect of "Gigabytes over major internet backbones per month" then lists 2006 as 700. 700 Gigabytes per month? With some people downloading HD content there are a significant number of users downloading 700 Gigabytes all by themselves per month. Maybe it was
  • In the UK, broadband pricing has fallen to the same level dialup used to be at. At the same time, very harsh traffic limits have been imposed. So in the UK demand won't be a problem, as ISPs will simply disconnect their users or ask them to pay more.

    The trouble is, such cheap broadband starves the ISPs of money to develop their networks; broadband should cost more than dialup used to.
    • A lot of this is down to BT's introduction of capacity based charging (i.e. metering) of the central pipes between their network and the ISPs. Hopefully if the LLU ISPs get their act together and start supplying a decent service we can bypass the BT tax and start selling high speed unmetered connections again.
  • generated by companies that do not want net neutrality.

    Make people feel like their bandwidth is in danger,
    blame it on those kids who don't have a life and download videos all day,
    regulate priority.

  • by Anonymous Coward
    A week ago many Pipex ADSL users received letters telling them their bandwidth usage is too high, citing the Acceptable Use Policy (AUP) and Fair Use Policy (FUP) documents on the company's website, which they want the letter recipients to sign & return to say they understand and will comply with the AUP and FUP.

    But there are a couple of problems there. The users are on Pipex ADSL connections which are stated as being unlimited, and the FUP says it doesn't want its users to download during peak times (weekday 6pm-1
  • by IBitOBear ( 410965 ) on Friday February 23, 2007 @08:37PM (#18130260) Homepage Journal
    If the net needs anything it needs Quality of Service routing at the customer access point.

    NO, I am _NOT_ talking about a non-neutral net. I think net neutrality is mandatory.

    What I am talking about is an end to TCP Retransmits in our lifetime. (Ok, that is overstating it a little 8-).

    At my home I put together a QOS gateway that throttled my _outgoing_ packets to the speed of my cable modem _and_ made sure that if it had to drop something it would _not_ drop outgoing "mostly ACK" packets. (e.g. outgoing TCP packets with little or no data payload get best delivery.)
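
    (A minimal sketch of that kind of setup on Linux follows; it uses the htb qdisc and the small-ACK u32 match popularized by the classic wondershaper script. The interface name and rates are assumptions to adjust for your own uplink, and Python is used only as a thin wrapper around the tc commands.)

        import subprocess

        DEV = "eth0"        # WAN-facing interface (assumption)
        UPLINK_KBIT = 200   # set a bit below the modem's true upstream rate (assumption)

        def tc(*args):
            subprocess.run(["tc", *args], check=True)

        # Root HTB qdisc; unclassified traffic lands in bulk class 1:20.
        tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "20")
        # Cap the whole uplink just under the modem's rate so queuing (and
        # dropping) happens here, where we control it, not in the modem.
        tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
           "htb", "rate", f"{UPLINK_KBIT}kbit", "ceil", f"{UPLINK_KBIT}kbit")
        # Priority class for small ACK-ish packets, bulk class for the rest;
        # each may borrow the full uplink when the other is idle.
        tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10", "htb",
           "rate", f"{UPLINK_KBIT // 2}kbit", "ceil", f"{UPLINK_KBIT}kbit", "prio", "1")
        tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20", "htb",
           "rate", f"{UPLINK_KBIT // 2}kbit", "ceil", f"{UPLINK_KBIT}kbit", "prio", "2")
        # u32 match: TCP, 20-byte IP header, total length < 64 bytes, ACK bit
        # set -> send to the priority class.
        tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip",
           "prio", "10", "u32",
           "match", "ip", "protocol", "6", "0xff",
           "match", "u8", "0x05", "0x0f", "at", "0",
           "match", "u16", "0x0000", "0xffc0", "at", "2",
           "match", "u8", "0x10", "0xff", "at", "33",
           "flowid", "1:10")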

    This action lowered my incoming packet count and got my effective download speed to closely approach the bandwidth tier I am paying for. This was a 3x to 4x improvement in throughput. This, combined with the lower packet count, implies that previously I was wasting 2 out of every 3 packets on unnecessary "recovery" (read: useless retransmits).

    That cost must, then, have been paid at every hop along every trip, etc.

    Then I turned on HTTP pipelining (the network.http.pipelining setting in about:config) on all the Firefox browsers in my house (etc).

    I suspect that if we could do something about the waste ratio, and generally speed up each transaction by squelching the noise and getting better effective throughput, "the intertubes" would be a lot clearer and the capacity wouldn't fall apart so readily.

    [aside]
    If we could (pie in the sky) get the porn and ewe-tube traffic onto the mbone with smart caching near the client so that each person didn't have to get each part "the whole way" from the provider even though everybody else is watching the same top-ten clips of the day, we could make more progress. This falls apart because it messes up the charging model for porn and advertising, and ewe-tube gawkers couldn't possibly stand waiting 2 to 6 seconds to join a synchronized swarm...
    [/aside]

    This is very much like a guy with half-flat tires standing around complaining about his gas mileage.

    Collision-detect-style arbitration falls apart when you saturate the link, and cable providers screwed themselves with the way most cable modems fail to buffer outgoing traffic. Penny-wise and pound-foolish of them to make the default devices so cheap. Iterate as necessary for businesses and ISPs with their underpowered gateway machines terminating their PPPoE (etc.).

    As for the part where that failure to schedule packets at the most basic level will be turned into "demonstrable evidence" of the "need" for non-neutral networks... that will be the "WMDs" of the net neutrality war.
  • Funny, I coulda sworn that email was gonna bring the 'Net to a grinding halt. And then IM was gonna. And then MP3 downloads were gonna. And then file sharing was gonna.

    But hey, far be it from me to question the wisdom of our corporate overlords... if video sites are gonna destroy the 'Net, then We Must Pass Laws!!!1!
  • by Anonymous Coward
    > meaning that engineers can no longer count on newer, faster computers to
    > keep ahead of their capacity demands.

    A wise man once said that the Internet was not a dump truck, but that it was in fact a series of tubes.

    Just pour some drano in there. Start with your local phone or cable box by your house or apartment. The sparks and smoke are just the internet cleaning itself. Presto: more bandwidth.

    (Disclaimer: do not do what is suggested above. The person referenced also isn't a wise man, fortunately
  • Backbone? Pfftt! (Score:5, Informative)

    by JoelG ( 5822 ) on Friday February 23, 2007 @09:10PM (#18130484)
    Having worked in the ISP market for the past 7 years, I have seen the access portion of Internet service go through multiple phases: from a single T1, to four T1s, to a 10Mbps feed, to 100Mbps, and now to gigabit and even multiple-gigabit connections to multiple peers.

    For a standard ISP it's a given: your bandwidth needs double, if not triple, every 2 years (3 if you're lucky). New technologies come along at a regular pace; it's part of the industry. Whether it's graphical websites, streaming audio, peer-to-peer networks, or streaming video, new technology creating more demand for bandwidth requires you to upgrade your network access over time.

    Having said this, working for a company with over 10,000 highspeed and dialup internet subscribers, I have found some interesting trends. It's not YouTube, VoIP, peer-to-peer, etc. eating up our resources... it's spyware-infected machines, spam attacks and hacked servers that eat up the bandwidth. When I take a look at my network utilization and see a spike, I don't say to myself "Oh no! There must be a hot new movie on YouTube that everyone is watching." Far from it! I say to myself "Stink! What spyware program is it this time?" or "Is my web server under attack again?"
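
    (Spotting those spikes doesn't take fancy tooling. Here's a crude sketch, assuming a Linux box in the traffic path, that samples the kernel's byte counters in /proc/net/dev once a second and flags rates well above the running average; the interface name and 5x threshold are placeholder assumptions.)

        import time

        DEV = "eth0"
        THRESHOLD = 5.0  # alert when the rate exceeds 5x the running average

        def rx_bytes(dev):
            # First field after "ethX:" in /proc/net/dev is received bytes.
            with open("/proc/net/dev") as f:
                for line in f:
                    if line.strip().startswith(dev + ":"):
                        return int(line.split(":")[1].split()[0])
            raise ValueError("no such interface: " + dev)

        avg = None
        last = rx_bytes(DEV)
        while True:
            time.sleep(1)
            now = rx_bytes(DEV)
            rate = now - last  # bytes/sec over the last sample
            last = now
            avg = rate if avg is None else 0.9 * avg + 0.1 * rate  # EWMA
            if avg > 0 and rate > THRESHOLD * avg:
                print("spike: %d B/s vs average %.0f B/s" % (rate, avg))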

    In addition, as access rates increase, I've noticed that performance is less affected by the speed of the provider network I'm connected to and more by the remote site's access speed. You'd be surprised by the puny amount of bandwidth the majority of websites on the internet run on. High-traffic sites running on fractional T1s; it's just crazy! Entire computer networks run on the cheapest network equipment known to man.

    Has anyone taken a good look at the WorldComs, Level 3s and Bells of the world? They are already at 10Gbps with MPLS at their core, and multi-gig connections to their customers. The internet backbone looks better than it ever has! There are far more problems with the endpoints of the internet than with the backbone, and it's about time people took more responsibility for making sure their network elements are backed by the appropriate amount of bandwidth and secured against basic security threats, instead of complaining about their backbone providers.
  • Enable multicast.

    That way, all the pr0n and digital radio and multiple-broadcast stuff like that won't be choking up the relays with redundant packets. You'll see a 1000% improvement the day they mandate multicast, I'll betcha.
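
    (For anyone who hasn't played with it: subscribing to a multicast group is only a few lines of socket code. A minimal sketch follows; the group address and port are arbitrary assumptions from the 239/8 administratively scoped range.)

        import socket
        import struct

        GROUP, PORT = "239.1.2.3", 5007  # assumptions for illustration

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the group: the kernel sends an IGMP report so upstream routers
        # forward one copy of the stream instead of N unicast duplicates.
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:
            data, addr = sock.recvfrom(65535)
            print(len(data), "bytes from", addr)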

  • Lobbying (Score:3, Informative)

    by kahrytan ( 913147 ) on Saturday February 24, 2007 @01:29AM (#18131890)
    This article is a lobbying article, trying to get people to write letters to their Representatives in Congress. In other words, Slashdot got duped into political lobbying.

    And Net Neutrality should exist. Where there is a problem, there is a solution. Qwest and others just want to take the easy way out instead of solving the network congestion.

    Solution: serve up the data faster so clients can disconnect, letting others connect, download, and disconnect in turn. Since servers use T3 or OC lines, it's not the servers that are in question; it's the clients' inability to receive the data at faster rates. Imagine if every computer in the world had an OC-48 connection. Verizon has the right idea too: they are connecting optical fiber to residential customers and hooking up Gigabit Ethernet in homes. It is a start toward resolving the problem.
