
Fixing the Unfairness of TCP Congestion Control

CmdrTaco posted more than 6 years ago | from the why-isn't-the-internet-a-democracy dept.

Networking 238

duncan99 writes "George Ou, Technical Director of ZDNet, has an analysis today of an engineering proposal to address congestion issues on the internet. It's an interesting read, with sections such as "The politicization of an engineering problem" and "Dismantling the dogma of flow rate fairness". Short and long term answers are suggested, along with some examples of what incentives it might take to get this to work. Whichever side of the neutrality debate you're on, this is worth consideration."


Not all sessions experience the same congestion (5, Interesting)

thehickcoder (620326) | more than 6 years ago | (#22844896)

The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as the other sessions. In most cases (those where the congestion is not on the first hop), it doesn't make sense to throttle all connections when one is effected by congestion.

Goatse (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22845012)

Goatse. [goatse.ch] [twofo.co.uk]

You nerds love it.

In other news, Zeus still sucks cock.

Re:Not all sessions experience the same congestion (2, Informative)

Kjella (173770) | more than 6 years ago | (#22845082)

Well, I don't know about your Internet connection, but the only place I notice congestion is on the first few hops (and possibly the last few hops if we're talking about a single host and not P2P). Beyond that, on the big backbone lines, I at least don't notice it, though I suppose it could be different for the computer.

Re:Not all sessions experience the same congestion (4, Interesting)

smallfries (601545) | more than 6 years ago | (#22845282)

Even if that is true, the congestion won't be correlated between your streams if it occurred on the final hops (and hence different final networks). There is a more basic problem than the lack of correlation between congestion on separate streams: the ZDNet editor and the author of the proposal have no grasp of reality.

Here's an alternative (but equally effective) way of reducing congestion - ask p2p users to download less. Because that is what this proposal amounts to. A voluntary measure to hammer your own bandwidth for the greater good of the network will not succeed. The idea that applications should have "fair" slices of the available bandwidth is ludicrous. What is fair about squeezing email and p2p into the same bandwidth profile?

This seems to be a highly political issue in the US. Every ISP that I've used in the UK has used the same approach - traffic shaping using QoS on the routers. Web, Email, VoIP and almost everything else are "high priority". p2p is low priority. This doesn't break p2p connections, or reset them in the way that Verizon has done. But it means that streams belonging to p2p traffic will back off more because there is a higher rate of failure. It "solves" the problem without a crappy user-applied bandaid.

It doesn't stop the problem that people will use as much bandwidth for p2p apps as they can get away with. This is not a technological problem and there will never be a technological solution. The article has an implicit bias when it talks about users "exploiting congestion control" and "hogging network resources". Well duh! That's why they have network connections in the first place. Why is the assumption that a good network is an empty network?

All ISPs should be forced to sell their connections based on target utilisations. I.e., here is a 10Mb/s connection at 100:1 contention: we expect you to use 0.1Mb/s on average, or 240GB a month. If you are below that, fine; if you go above it, you get hit with per-GB charges. The final point is the numbers: 10Mb/s is slow for the next-gen connections now being sold (24Mb/s in the UK in some areas), and 100:1 is a large contention ratio. So why shouldn't someone use 240GB of traffic on that connection every month?
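(A quick back-of-the-envelope check of those numbers, as an illustrative Python sketch rather than anything from the article: 0.1 Mb/s sustained for a month works out to roughly 32 GB, so the 240GB figure appears to assume something closer to 0.1 megabytes per second. The rate, ratio and month length below are just the example values from the comment above.)

```python
# Monthly quota implied by a line rate and contention ratio (illustrative only).

def monthly_quota_gb(line_rate_mbps, contention_ratio, days=30):
    """Average allowed rate (line rate / contention) sustained for a month, in GB."""
    avg_mbps = line_rate_mbps / contention_ratio          # 10 Mb/s / 100 = 0.1 Mb/s
    seconds = days * 24 * 3600
    total_megabits = avg_mbps * seconds
    return total_megabits / 8 / 1000                      # Mb -> MB -> GB

print(monthly_quota_gb(10, 100))      # ~32.4 GB at a sustained 0.1 Mb/s
print(monthly_quota_gb(10, 100) * 8)  # ~259 GB if the 0.1 is read as megabytes/s
```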

Re:Not all sessions experience the same congestion (1)

Shakrai (717556) | more than 6 years ago | (#22845938)

the ZDNet editor and the author of the proposal have no grasp of reality.

Indeed. Here was my favorite bit: "They tell us that reining in bandwidth hogs is actually the ISP's way of killing the video distribution competition"

And it's not? Recall the recent news over Time Warner's announcement -- 40GB as the highest tier they plan on offering. How could a tier so low have any other purpose besides killing online video distribution? 40GB in one month is almost achievable with ISDN -- technology that's 20 years old. Can we really not do any better than that in 2008?

I.e., here is a 10Mb/s connection at 100:1 contention: we expect you to use 0.1Mb/s on average, or 240GB a month. If you are below that, fine; if you go above it, you get hit with per-GB charges

Shouldn't the customer get a bill credit if they use less than the 240GB? Why do overages only cut one way? If they want us to believe that bandwidth has a fixed cost, then Grandma should probably be paying a lot less than $30 for her broadband connection.

Re:Not all sessions experience the same congestion (1)

electrictroy (912290) | more than 6 years ago | (#22846750)

I have no objections to overusage fees (the same way cellphone plans work). After all, if you insist upon downloading 15,000 megabyte HD DVD or Blu-ray movies, why shouldn't you pay more than what I pay (I'm the type who prefers 250 meg xvid rips)?

15,000 versus 250 megabytes.
You SHOULD pay more.
Just like a cellphone.

Some fools pay $90 a month in overage charges. I pay $5 a month and use my minutes sparingly. People who use more minutes/gigabytes should pay more than the rest of us pay. That's entirely fair and entirely reasonable (and not the least bit evil). I'd rather see overage charges than blocking.

Re:Not all sessions experience the same congestion (2, Insightful)

Alarindris (1253418) | more than 6 years ago | (#22846896)

I would like to interject that just because cellphones have ridiculous plans doesn't mean the internet should. On my land line I get unlimited usage plus unlimited long distance for a flat rate every month.

It's just not a good comparison.

Re:Not all sessions experience the same congestion (4, Insightful)

electrictroy (912290) | more than 6 years ago | (#22847038)

And I bet you pay a lot more for those "unlimited minutes" than the $5 a month I pay.

Which is my point in a nutshell: people who want unlimited gigabytes should be paying a lot more than what I'm paying for my limited service ($15 a month). That's entirely and completely fair. Take more; pay more.

Just like electrical service, cell service, water service, et cetera, et cetera.

Re:Not all sessions experience the same congestion (4, Funny)

Alsee (515537) | more than 6 years ago | (#22845974)

All ISPs should be forced to sell their connections based on target utilisations. I.e., here is a 10Mb/s connection at 100:1 contention: we expect you to use 0.1Mb/s on average, or 240GB a month. If you are below that, fine; if you go above it, you get hit with per-GB charges.

The author of the article, George Ou, explains why he thinks you are stupid and evil for suggesting such a thing. [zdnet.com] Well, he doesn't actually use the word "stupid" and I don't think he actually uses the word "evil", but yeah, that is pretty much what he says.

You see in Australia they have a variety of internet plans like that. And the one thing that all of the plans have in common is that they are crazy expensive. Obscenely expensive.

So George Ou is right and you are wrong and stupid and evil, and the EFF is wrong and stupid and evil, and all network neutrality advocates are wrong and stupid and evil; you are all going to screw everyone over and force everyone to pay obscene ISP bills. If people don't side with George Ou, the enemy is going to make you get hit with a huge ISP bill.

Ahhhh... except the reason Australian ISP bills are obscene might have something to do with the fact that there are a fairly small number of Australians spread out across an entire continent on the bumfuck other side of the planet from everyone else.

Which might, just possibly MIGHT, mean that the crazy high Australian ISP rates kinda sorta have absolutely no valid connection to those sorts of usage-relevant ISP offerings.

So that is why George Ou is right and why you are wrong and stupid and evil and why no one should listen to your stupid evil alternative. Listen to George Ou and vote No on network neutrality or else the Network Neutrality Nazis are gonna make you pay crazy high for internet access.


Re:Not all sessions experience the same congestion (1)

smallfries (601545) | more than 6 years ago | (#22846152)

Nice, you made me smile. I saw his earlier article on ZDNet when I clicked through his history to establish how fucked in the head he was. He came high on the scale. The basic problem with his rant about metered access is that it's complete bollocks. A metered plan doesn't mean that you have an allowance of 0 bytes a month with a per-byte cost; instead it can be a basic allowance with a price to exceed it. This is how all mobile phone contracts in the UK price the access resource. We also have the metered-without-an-allowance kind, which are sold as pay-as-you-go.

Ass end (1)

microbox (704317) | more than 6 years ago | (#22846840)

on the bumfuck other side of the planet from everyone else

I believe that the prime-ministerial term is "ass-end of the world". A proud moment for all Australians =), although Colin Carpenter made us prouder when he said that Melbourne was the Paris end of the ass-end of the world.

Re:Not all sessions experience the same congestion (1)

Ed Avis (5917) | more than 6 years ago | (#22846768)

Indeed, on the face of it the proposal reminds me of range voting. If only each user would voluntarily agree to take less bandwidth than they're able to get, then the net would run more smoothly. But no P2Per would install the upgrade from Microsoft or from his Linux distribution to replace the fast, greedy TCP stack with a more ethical, caring-sharing one that makes his downloads slower.

What I don't understand is why this concerns TCP at all. An ISP's job is surely to send and receive IP datagrams on behalf of its customers. What you do in the higher levels of the protocol stack is up to you; some apps may be using UDP, others SCTP or other TCP replacements. The backoff and error correction of TCP is intended to ensure reliable connections between one host and another, not to somehow manage congestion for the whole Internet. Back in the days when the net was only thirty thousand hosts big and everyone had a competent, public-spirited sysadmin it used to work to insist that everyone change their client software for the greater good. Not now.

If any congestion filtering needs to be applied it should be done at the level of IP datagrams, no matter what they contain. If clients feel that high-priority applications like web browsing or VoIP shouldn't be blocked by P2P traffic, then they can set the priority field accordingly. Of course, this would instantly be defeated if ISPs honour that field too naively; everyone would just set all traffic to the highest priority level. An ISP would have to weight the priorities of a given subscriber based on their historical average, so that someone who sends every packet marked TOP URGENT gets no benefit.
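(A rough Python sketch of that last idea; the class, the field names and the "fraction of recent traffic already marked high priority" weighting are my own illustration of the comment's suggestion, not anything from the article.)

```python
# Honour a subscriber's priority markings, but scale their value down by how much
# of that subscriber's recent traffic was already marked high priority, so that
# marking every packet TOP URGENT gains nothing.
from collections import defaultdict

class SubscriberWeights:
    def __init__(self):
        self.total_bytes = defaultdict(int)
        self.priority_bytes = defaultdict(int)

    def record(self, subscriber, size, marked_priority):
        self.total_bytes[subscriber] += size
        if marked_priority:
            self.priority_bytes[subscriber] += size

    def effective_priority(self, subscriber, marked_priority):
        if not marked_priority:
            return 0.0
        total = self.total_bytes[subscriber]
        if total == 0:
            return 1.0
        marked_fraction = self.priority_bytes[subscriber] / total
        return 1.0 - marked_fraction   # the more you mark, the less each mark is worth

w = SubscriberWeights()
w.record("alice", 1000, marked_priority=True)   # marks everything
w.record("bob", 900, marked_priority=False)
w.record("bob", 100, marked_priority=True)      # marks only latency-sensitive traffic
print(w.effective_priority("alice", True))      # 0.0: blanket marking buys nothing
print(w.effective_priority("bob", True))        # 0.9: selective marking still counts
```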

Congestion notification is surely useful, as an additional mechanism to help higher levels of the network stack respond to packets being dropped or delayed lower down. But you can't rely upon users to always honour these voluntary systems. Any mechanism that is voluntary and isn't linked to some kind of metering or charging will be subverted, and probably sooner than you think. So the EFF and others are entirely right to suggest ISPs use metering. As long as the terms of service are clearly defined and you get what you pay for (so no bogus 'unlimited' promises), I don't see anything wrong in that.

A single slow connection changes your TCP window (3, Informative)

Gazzonyx (982402) | more than 6 years ago | (#22846238)

If you're using Linux, which TCP congestion algorithm are you using? Reno isn't very fair; if a single connection is congested beyond the first hop, you'll slow down the rest of your connections when the window slides to smaller units. Have you tried BIC, CUBIC, Veno, or any of the other 9 or 10 congestion algorithms?

You can change them on the fly by echoing the name into procfs, IIRC. Also, if you have the stomach for it, and two connections to the internet, you can load balance and/or stripe them using Linux Advanced Routing & Traffic Control [lartc.org] (mostly the ip(1) command). Very cool stuff if you want to route around a slow node or two (check out the multiple path stuff) at your ISP(s).
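(For reference, a minimal Python sketch of those procfs knobs, assuming the standard Linux paths; writing the setting requires root, and it is equivalent to setting net.ipv4.tcp_congestion_control with sysctl.)

```python
# Read and set the system-wide TCP congestion control algorithm via procfs (Linux).

CC_PATH = "/proc/sys/net/ipv4/tcp_congestion_control"
AVAILABLE_PATH = "/proc/sys/net/ipv4/tcp_available_congestion_control"

def current_algorithm():
    with open(CC_PATH) as f:
        return f.read().strip()

def available_algorithms():
    with open(AVAILABLE_PATH) as f:
        return f.read().split()

def set_algorithm(name):
    if name not in available_algorithms():
        raise ValueError(f"{name} is not loaded; modprobe its module first")
    with open(CC_PATH, "w") as f:    # needs root
        f.write(name)

print(current_algorithm())       # e.g. "cubic"
print(available_algorithms())    # e.g. ['reno', 'cubic', 'veno']
```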

Re:Not all sessions experience the same congestion (0)

Anonymous Coward | more than 6 years ago | (#22845106)

It doesn't make sense to throttle all connections when one is effected by congestion.

"Affected". The word is "affected". You should be upset with your English teacher -- he or she has clearly failed you.

Re:Not all sessions experience the same congestion (0)

Anonymous Coward | more than 6 years ago | (#22845178)

It doesn't make sense to throttle all connections when the throttle is caused to come about by congestion.
Although... if you use the verb form of "effected" it kind of makes sense. Haha. I kinda doubt it, though. Either poorly worded or a misused non-word.

Re:Not all sessions experience the same congestion (1)

b96miata (620163) | more than 6 years ago | (#22845540)

He also ignores the fact that a throttling mechanism is already built into every DSL/Cable modem out there - the speed it's provisioned at. (incidentally, also the only place to implement any sort of effective dynamic throttling controls - anywhere else and users will find a way around it.)

If ISPs would just build their networks to handle the speeds they sell, instead of running around with their hands in the air over the fact that the 'net has finally evolved to the point where there are reasons for an individual subscriber to actually send data at something over the previous benchmark of orders of magnitude less than they receive, this might not be as much of a problem. Currently they come off sounding like a pissed-off buffet owner when a NAAFA convention comes to town.

Also, calling net neutrality a "religion" is getting really, really old. Make your damn argument without resorting to silly name calling.

Re:Not all sessions experience the same congestion (1)

electrictroy (912290) | more than 6 years ago | (#22846838)

>>>"If ISP's would just build their networks to handle the speeds they sell"

Or better yet, advertise the connection realistically. I.e., if your network can't handle half your users doing 10 megabit video downloads, then sell them as 1 megabit lines instead. Downsize the marketing to reflect actual performance capability.

Re:Not all sessions experience the same congestion (0)

Anonymous Coward | more than 6 years ago | (#22845748)

It's not really surprising. Before he was a tech reporter, he received a solid background in tech issues as, erm, a ballet dancer [archive.org].

Re:Not all sessions experience the same congestion (3, Informative)

Mike McTernan (260224) | more than 6 years ago | (#22845810)

Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.

Obviously AIMD isn't going to fix this situation - it's not designed to. Similarly, expecting all computers to be updated in any reasonable timeframe won't happen (especially as a P2P user may have little motivation to 'upgrade' to receive slower downloads). Still, since we're assuming the bottleneck is in the first hops, it follows that the congestion is in the ISP's managed network. I don't see why the ISP can't therefore tag and shape traffic so that their routers divide the available bandwidth equally between users, not TCP streams. In fact, most ISPs will give each home subscriber only 1 IP address at any point in time, so it should be easy to relate a TCP stream (or an IP packet type) to a subscriber. While elements of the physical network are always shared [plus.net], each user can still be given a logical connection with guaranteed bandwidth dimensions. This isn't a new concept either; it's just multiplexing using a suitable scheduler, such as rate-monotonic (you get some predefined amount) or round-robin (you get some fraction of the available amount).

Such 'technology' could be rolled out by ISPs according to their roadmaps (although here in the UK it may require convincing BT Wholesale to update some of their infrastructure) and without requiring all users to upgrade their software or make any changes. However, I suspect the "politicization of an engineering problem" occurs here because ISPs would rather do anything but admit they made a mistake in previous marketing of their services, raise subscriber prices, or make the investment to correctly prioritise traffic on a per-user basis, basically knocking contention rates right down to 1:1. It's much easier to simply ban or throttle P2P applications wholesale and blame high-bandwidth applications.

I have little sympathy for ISPs right now; the solution should be within their grasp.
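(A toy Python sketch of the per-subscriber scheduling described above: one queue per user IP, served round-robin, so opening more TCP streams buys no extra share. The class and the plain round-robin policy are my own illustration; real equipment would use something like deficit round robin or weighted fair queuing.)

```python
# Divide the link per user rather than per TCP stream: each subscriber gets one
# queue, and the link serves the queues round-robin.
from collections import deque, OrderedDict

class PerUserRoundRobin:
    def __init__(self):
        self.queues = OrderedDict()              # user_ip -> deque of packets

    def enqueue(self, user_ip, packet):
        self.queues.setdefault(user_ip, deque()).append(packet)

    def dequeue(self):
        for user_ip in list(self.queues):
            q = self.queues[user_ip]
            if q:
                packet = q.popleft()
                self.queues.move_to_end(user_ip)  # rotate the served user to the back
                return user_ip, packet
        return None

sched = PerUserRoundRobin()
for i in range(10):                              # heavy user with 10 "streams"
    sched.enqueue("10.0.0.1", f"p2p-{i}")
sched.enqueue("10.0.0.2", "web-1")               # light user with 1 stream
print([sched.dequeue() for _ in range(3)])       # users alternate regardless of stream count
```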

This is a good proposal (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22844950)

The point isn't to kill p2p. It's simply to make sure that everyone plays by the same rules... no more exploitive cheating and bandwidth hogging by the few. When there really is leftover bandwidth, p2p filesharers can use as much as they like. But it's ridiculous that when I'm spending 30 seconds downloading CNN.com during a high-demand period, some asshat is using twenty times my bandwidth downloading some file that could just as easily be sent at any time of day.

It's like taking a sofa on to the subway... if you're going to do it, pick a time when everyone else isn't trying to get to work.

Re:This is a good proposal (4, Insightful)

vertinox (846076) | more than 6 years ago | (#22845368)

But it's ridiculous that when I'm spending 30 seconds downloading CNN.com during a high-demand period, some asshat is using twenty times my bandwidth downloading some file that could just as easily be sent at any time of day.

1. Could that possibly be due to the processor demand on the CNN servers at peak times?
2. Don't certain companies like Blizzard force P2P patches onto their customers?
3. Is your 30-second video file just as important as a technician using torrents to download a Linux distro to put on a server used for business that they need up and running ASAP?
4. And lastly... someone using a torrent shouldn't soak up an ISP's entire bandwidth... unless someone at CNN is using the web server to host torrents, but that's nothing you or your ISP can control.

Re:This is a good proposal (1)

pacman on prozac (448607) | more than 6 years ago | (#22845576)

This is accomplished by the single-stream application tagging its TCP stream at a higher weight than a multi-stream application

The proposal seems to be relying on the clients to mark their traffic appropriately.

So p2p apps will just start marking their own traffic as high weight and we're back to square one.

I don't think any proposal that involves trusting the end clients is going to work on the internet. There are just too many untrustworthy people around ;)

Re:This is a good proposal (2, Interesting)

TheRaven64 (641858) | more than 6 years ago | (#22845764)

Depends on how the ISP charges you. IP packets have 3 relevant flags: low delay, high throughput and high reliability. I'm not really sure what high reliability means, since protocols that need reliability tend to implement retransmission higher up the protocol stack, so it can probably be ignored. There are very few things that need both high throughput and low latency, so an ISP could place quite a low cap on the amount of data with these flags set that you were allowed to send. If you exceeded this, then one or both of the flags would be cleared.

This then lets the user put each packet into one of three buckets:

  • Low delay.
  • High throughput.
  • Don't care.
A packet with the low delay flag set would go into a high priority queue, but only a limited fraction of the customer's allotted bandwidth could be used for these. Any more would either be dropped or have the low delay flag cleared. These would be suitable for VoIP use and would have low latency and (ideally) low jitter.

Those with the high throughput flag set would have no guaranteed minimum latency. They would go into a low-priority, very wide queue. If you're doing a big download, you set this flag - you'll get the whole file faster, but you might get a lot of jitter and latency.

Perhaps the high reliability flag could be used to indicate which packets should not have the flags cleared if the quota was exceeded (and other packets without the high reliability flag set were available for demotion).

Of course, Microsoft's TCP/IP stack sets all of these flags by default, so most traffic would simply be placed into the default queue until they fixed it.
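(On Linux, Python's socket module exposes these legacy TOS bits, so a small sketch of tagging sockets into the three buckets described above looks like the following. Most modern networks reinterpret the TOS byte as DSCP, so treat this purely as an illustration of the flags being discussed.)

```python
# Tag sockets with the IPv4 TOS flags discussed above (Linux; the constants come
# from Python's socket module).
import socket

def open_udp_socket(tos_flags):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_flags)
    return s

voip_sock = open_udp_socket(socket.IPTOS_LOWDELAY)     # "low delay" bucket
bulk_sock = open_udp_socket(socket.IPTOS_THROUGHPUT)   # "high throughput" bucket
other_sock = open_udp_socket(0)                        # "don't care" bucket
```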

Re:This is a good proposal (1)

yabos (719499) | more than 6 years ago | (#22846234)

If you let the application specify the priority, then every application will set its priority to high and the whole thing ends up the same as it is now.

Re:This is a good proposal (1)

TheRaven64 (641858) | more than 6 years ago | (#22846662)

That's my point. You shouldn't be specifying priority on a low/high scale, you should be specifying priority as low latency, high throughput or don't care. Background stuff would be don't care. Big downloads would be high throughput. VoIP would be low latency. Unless you make it an either/or thing, apps will just pick the best one. If you start doing traffic shaping based on categories, then apps will start picking the category that most applies to their traffic type or risk being in the wrong usage class.

Re:This is a good proposal (0)

Anonymous Coward | more than 6 years ago | (#22845772)

But it's ridiculous that when I'm spending 30 seconds downloading CNN.com during a high-demand period, some asshat is using twenty times my bandwidth downloading some file that could just as easily be sent at any time of day.
It's ridiculous that I'm trying to download some of my files at a time that's convenient for me and some idiot is trying to download CNN.com -- hello, they have these things called televisions! Does he think that the internet was designed for him and everyone else is a second-class tuber?

Re:This is a good proposal (1)

Hal_Porter (817932) | more than 6 years ago | (#22845984)

The Briscoe proposal reminds me a bit of USB, actually. USB allocates bandwidth for control transfers (analogous to downloading a web page) and isochronous transfers (analogous to Skype, VoIP and streaming video) first. Anything left goes to bulk transfers (analogous to P2P). So if you're loading CNN or watching a video you get prioritised and things get better. But P2P isn't hurt by much, since web pages and streaming actually only use a small percentage of total bandwidth, especially at peak times compared to P2P [zdnet.com]. Outside peak times, P2P could actually be allowed to use more bandwidth.

Re:This is a good proposal (1)

yabos (719499) | more than 6 years ago | (#22846048)

I've been suggesting this for a while. As someone downloading torrents, I don't care if, when the network is congested, my traffic is given lower priority than HTTP or IMAP or whatever. The ISPs already identify P2P and throttle it even when there's no congestion. If they just switched to QoS on their own network, and didn't trust the client to provide the priority, then that could work just fine. The problem is that ISPs are throttling P2P when it's not even a peak time on their network. If they can get QoS to work properly then everything should just work, as long as they can identify the P2P streams, which they can do pretty well these days.

Re:This is a good proposal (1)

Thundersnatch (671481) | more than 6 years ago | (#22846606)

You ignore the fact that ISPs pay for transit of your traffic to other networks in some fashion. Even if your ISP is a Tier-1, they pay to maintain peering arrangements with other Tier-1 ISPs. Those arrangements are based on traffic balance. So even if you're running torrents in the middle of the night when the peering links are mostly idle, you're still potentially costing your ISP a lot of money.

The problem is economic, not technological. Nobody has figured out a way to fairly distribute the infrastructure costs of P2P applications like BitTorrent. For the same reason - economics - IP multicast has never been widely deployed on the Internet at large. (Heck, BitTorrent is just a hack for getting multicast-like behavior in the absence of Internet multicast support.)

So who pays for a packet? The sending network? The receiver? Both? "Neither" is not a sustainable option, despite the current Slashdot group-think opinion. If you figure it out, let us all know. I'm sure you'll get a PhD and a boatload of research dollars.

Weighted TCP solution (4, Interesting)

esocid (946821) | more than 6 years ago | (#22844956)

Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens... Background P2P applications like BitTorrent will experience a more drastic but shorter-duration cut in throughput but the overall time it takes to complete the transfer is unchanged.
I am all for a change in the protocols as long as it helps everybody. The ISPs win, and so do the customers. As long as the ISPs don't continue to complain and forge packets to BT users, I would see an upgrade to the TCP protocol as a solution to what is going on with the neutrality issues, along with an upgrade to fiber optic networks so the US is on par with everyone else.
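(A tiny numeric illustration of the difference being quoted there, using made-up numbers rather than figures from the article: per-flow fairness hands a 10-stream user almost the whole link, while a per-user weighting splits it evenly.)

```python
# Per-flow vs per-user share of a shared link (illustrative numbers only).

link_capacity = 10.0                        # Mb/s shared by two users
streams = {"p2p_user": 10, "web_user": 1}   # TCP connections each user has open

# Classic per-flow fairness: every TCP stream gets an equal slice.
total_streams = sum(streams.values())
per_flow = {u: link_capacity * n / total_streams for u, n in streams.items()}
print(per_flow)    # {'p2p_user': 9.09..., 'web_user': 0.90...}

# Weighted / per-user fairness: each user gets an equal slice, however many streams.
per_user = {u: link_capacity / len(streams) for u in streams}
print(per_user)    # {'p2p_user': 5.0, 'web_user': 5.0}
```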

Re:Weighted TCP solution (2, Interesting)

cromar (1103585) | more than 6 years ago | (#22845274)

I have to agree with you. There is ever more traffic on the internet and we are going to have to look for ways to let everyone have a fair share of the bandwidth (and get a hella lot more of the stuff). Also, this sort of approach to bandwidth control would probably make it more feasible to get really good speeds at off-peak times. If the ISPs would do this, they could conceivably raise the overall amount of bandwidth and not worry about one user hogging it all if others need it.

On the internet as a democracy: would ISPs get more votes because they own more addresses? The users could band together as a union and use our votes to decide the fate of the net. Haha, but I'm rambling.

Sadly, no, upgrading doesn't help... (3, Informative)

nweaver (113078) | more than 6 years ago | (#22845328)

There have been plenty of lessons, Japan most recently, showing that upping the available capacity simply ups the amount of bulk-data P2P, without helping the other flows nearly as much.

Proof? (0)

Anonymous Coward | more than 6 years ago | (#22846330)

Where is it?

Re:Weighted TCP solution (1)

Sancho (17056) | more than 6 years ago | (#22845416)

Don't ISPs tend to pay by the amount of traffic (rather than just a connection fee, as most of their users pay)? This solution seems to be looking at the problem from the perspective that p2p users are harming bandwidth for casual users, instead of simply costing the ISPs more money due to the increased amount of data they're pushing through their pipes.

I agree,but it's hard. (2, Interesting)

jd (1658) | more than 6 years ago | (#22846860)

Fortunately, there are plenty of software mechanisms already around to solve part of the problem. Unfortunately, very few have been tested outside of small labs or notebooks. We have no practical means of knowing what the different QoS strategies would mean in a real-world network. The sooner Linux and the *BSDs can include those not already provided, the better. We can then - and only then - get an idea of what it is that needs fixing. (Linux has a multitude of TCP congestion control algorithms, plus WEB100 for automatic tuning, so it follows that if there's a real problem, it's not really there.)

I know that only a handful of these have been implemented for Linux or *BSD, even fewer for both. Instead of Summer of Code producing stuff nobody ever sees, how about one of the big players invest in students producing some of these meaty chunks of code?

Schemes for reducing packet loss by active queue management: REM, RED, GRED, WRED, SRED, Adaptive RED, RED-Worcester, Self-Configuring RED, Exponential RED, BLUE, SFB, GREEN, BLACK, PURPLE, WHITE

Schemes for adjusting packet queues: CBQ, Enhanced CBQ, HFSC, CSFQ, CSPFQ, PrFQ, Local Flow Separation

Schemes for scheduling traffic: Gaussian, Least Attained Service, ABE, CSDPS

Schemes for shaping traffic flows: DSS, Constant Bit Rate

Schemes for bandwidth allocation: RSVP, YESSIR, M-YESSIR

Schemes for active flow control: ECN, Mark Front ECN

Schemes for managing queues: Adaptive Virtual Queue, PRIO
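(For a flavour of what the active queue management schemes in the first list above do, here is a minimal Python sketch of classic RED: the drop probability grows linearly with an EWMA of the queue length between two thresholds. The parameters are illustrative rather than tuned values, and the count-based probability correction from the original RED paper is omitted.)

```python
# Simplified Random Early Detection (RED) queue.
import random

class REDQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0          # EWMA of the instantaneous queue length
        self.queue = []

    def enqueue(self, packet):
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False
        elif self.avg >= self.max_th:
            drop = True
        else:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(packet)
        return not drop          # False means the packet was dropped early
```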

A New Way to Look at Networking (5, Informative)

StCredZero (169093) | more than 6 years ago | (#22844968)

A New Way to Look at Networking [google.com] is a Google Tech Talk [google.com]. It's about an hour long, but there's a lot of very good and fascinating historical information, which sets the groundwork for this guy's proposal. Van Jacobson was around in the early days when TCP/IP was being invented. He's proposing a new protocol layered on top of TCP/IP that can turn the Internet into a true broadcast medium -- one which is even more proof against censorship than the current one!

Neutrality debate? (2, Insightful)

Anonymous Coward | more than 6 years ago | (#22844978)

Whichever side of the neutrality debate you're on, this is worth consideration.

There is a debate? I thought it was more like a few monied interests decided "there is a recognized correct way to handle this issue; I just make more profit and have more control if I ignore that." That's not the same thing as a debate.

Simple (-1, Offtopic)

truthsearch (249536) | more than 6 years ago | (#22844996)

Shove a dump truck down the tubes. That should unclog the poker chips and horses.

Good luck with that.. (4, Insightful)

spydum (828400) | more than 6 years ago | (#22845018)

For what it's worth, Net Neutrality IS a political fight; p2p is not the cause, just the straw that broke the camel's back. Fixing the fairness problem of TCP flow control will not make Net Neutrality go away. Nice fix, though; too bad getting people to adopt it would be a nightmare. Where was this suggestion 15 years ago?

So right, yet so wrong (5, Insightful)

Chris Snook (872473) | more than 6 years ago | (#22845024)

Weighted TCP is a great idea. That doesn't change the fact that net neutrality is a good thing, or that traffic shaping is a better fix for network congestion than forging RST packets.

The author of this article is clearly exploiting the novelty of a technological idea to promote his slightly related political agenda, and that's deplorable.

Re:So right, yet so wrong (2, Informative)

Sancho (17056) | more than 6 years ago | (#22845482)

The problem with traffic shaping is that eventually, once everyone starts encrypting their data and using recognized ports (like 443) to pass non-standard traffic, you've got to start shaping just about everything. Shaping only works as long as you can recognize and classify the data.

Most people should be encrypting a large chunk of what goes across the Internet. Anything which sends a password or a session cookie should be encrypted. That's going to be fairly hard on traffic shapers.

Re:So right, yet so wrong (3, Interesting)

irc.goatse.cx troll (593289) | more than 6 years ago | (#22846024)

Shaping only works as long as you can recognize and classify the data.


Not entirely true. It works better the more you know about your data, but even knowing nothing you can get good results with a simple rule of prioritizing small packets.

My original QoS setup was just a simple rule that anything small gets priority over anything large. This is enough to make (most) VoIP, games, SSH, and anything else that is lots of small real-time packets get through ahead of lots of full queued packets (transfers).

Admittedly BitTorrent was what hurt my original setup, as you end up with a lot of slow peers each trickling transfers in slowly. You could get around this with a hard limit on overall packet rate, or with connection tracking and limiting the number of IPs you hold a connection with per second (and then blocking things like UDP and ICMP).

Yeah, it's an ugly solution, but we're all the ISP's bitch anyway, so they can do what they want.
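(A toy Python version of that "small packets first" rule, written as a size-keyed priority queue; purely illustrative. A real shaper would classify per traffic class rather than per packet.)

```python
# Smaller packets dequeue first, so VoIP/game/SSH-sized packets jump ahead of
# full-MTU bulk transfer segments.
import heapq
import itertools

class SmallPacketFirstQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()      # tie-breaker keeps FIFO order per size

    def enqueue(self, packet: bytes):
        heapq.heappush(self._heap, (len(packet), next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = SmallPacketFirstQueue()
q.enqueue(b"x" * 1500)     # bulk transfer segment
q.enqueue(b"x" * 80)       # VoIP-sized packet
print(len(q.dequeue()))    # 80: the small packet goes out first
```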

Re:So right, yet so wrong (1)

Sancho (17056) | more than 6 years ago | (#22846286)

That's a fair point. Of course, the small-packet example is not likely to help in the case of ISPs trying to reduce P2P, but there could be other solutions. In these cases, though, you run a high risk of unintended consequences.

GP is not a troll (1)

poetmatt (793785) | more than 6 years ago | (#22845570)

Poster above is not a troll on this matter. The issue is that the article points at TCP being "exploited" by BitTorrent, and people have failed to look at how biased and full of false information the graphs are.

There's a graph that shows a BitTorrent user as the highest bandwidth user over a day, and then puts a YouTube surfer and a websurfer on the same bandwidth level as an Xbox gamer, and things of that nature. That is so far off that it is despicable.

Every one of the ones I mentioned in the previous paragraph is listed as using ".1 kbps" of upstream. There is no way in the world even websurfing alone can use only .1 kbps of upstream, as that would be 40 times slower than dialup. Meanwhile, to exaggerate even further, they suggest that average upstream usage is .05kb/s. That would be what, 400 times slower than dialup, if not more? This is absurd. They even take it a step further and suggest that less than 500 kb is sent per DAY over all the aforementioned methods. I think one or two websites can breach that amount, even on dialup.

Anyone who takes a biased study with a clear and apparent lack of research behind it should look further at the details here. It's embarrassing that the blog says it submitted this graph to the White House. I don't think they could have tried any harder to vindicate BitTorrent than they did with it.

Yes, BitTorrent uses more speed through efficiency, not exploitation. No, BitTorrent is not the only thing that opens up multiple TCP streams at the same time. Nobody uses a single stream of TCP only anymore. More than one opens the minute you go to any website, so this whole premise is flawed.

Re:So right, yet so wrong (1)

Hatta (162192) | more than 6 years ago | (#22846416)

Parent is no troll; the author's political motivation is obvious from statements like:

While the network isn't completely melting down, it's completely unfair because fewer than 10% of all Internet users using P2P hogs roughly 75% of all network traffic at the expense of all other Internet users.

Duh, higher bandwidth applications take more bandwidth. Expecting parity between low bandwidth and high bandwidth applications is fundamentally biased against high bandwidth applications. If I'm an IRC user, and you download HD video, and we share the same pipe, does it make sense for us to have the same amount of bandwidth?

The problem has gotten so severe in Japan that the nations ISPs in conjunction with their Government have agreed to ban P2P users who are trafficking copyrighted content.

Which has fuck all to do with network congestion, and everything to do with copyright violations.

Not all protocols should be supported equally (0, Insightful)

Anonymous Coward | more than 6 years ago | (#22845032)

Just because someone comes up with a high-bandwidth protocol or service does not mean that it can or should be supported with our current network capacity - especially at the expense of other protocols. Nor does it mean network providers (and ultimately users) should bear the expense of every new protocol someone on the network edge dreams up. Throttling disruptive protocols may be the least reactive solution. Blocking such protocols may be equally valid. I don't see this as a fairness issue.

Re:Not all protocols should be supported equally (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22845130)

Throttling disruptive protocols may be the least reactive solution. Blocking such protocols may be equally valid. I don't see this as a fairness issue.

There is no such thing as a "disruptive protocol"; it's not like there's some kid with Tourette's screaming obscenities at the back of the classroom keeping other people from learning. No protocol goes out and kneecaps other packets on its own. If the ISP wants to sell a megabit of bandwidth, it has plenty of tools available to make sure I don't take more than my megabit, tools that don't rely on specifically targeting protocols like VoIP or iTMS that compete with products they sell.

Re:Not all protocols should be supported equally (0)

Anonymous Coward | more than 6 years ago | (#22845300)

This all depends on whether you view your subscription to your ISP as providing service, or providing the advertised service. While most ISPs now use weasel words like "may receive up to 5MBps" (the "may receive up to" being in 4-point font and the 5MBps in 25-point font), this was not always the case. Many ISPs used to just announce the amount of speed your pipe could handle as the service you were buying.

Throttling disruptive protocols will erode new technologies, simply because someone who didn't like your new protocol could call it disruptive and get it shut off. Think China and encrypted IM packets or something... let's not go that way. Instead, let's allocate bandwidth fairly amongst all users and let the customers change providers when the bandwidth available to them is not what they wanted.

Re:Not all protocols should be supported equally (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22845618)

Fuck no! Nothing beyond the IP header should ever matter in a routing decision. TCP is just a payload (and invisible in IPSec!)

Fairness is when all IP packets are treated equally and any congestion is resolved purely by throwing virtual coins. It is up to the communication endpoints to negotiate stream bandwidth and throttle their output accordingly. If your network is congested to the point that it becomes unusable while all your customers are within their contractually acceptable usage patterns, you have to upgrade your network or lose customers.

Re:Not all protocols should be supported equally (1)

eldepeche (854916) | more than 6 years ago | (#22846470)

Why is the status quo inherently fair? Certain applications are bandwidth-intensive (bittorrent), some are time-sensitive (e-mail, web surfing), and some are both (streaming video). If you're downloading something on bittorrent, is it so unfair for your connection to go from 300 KB/s to 250 KB/s for 3 seconds while someone checks his e-mail? Or if an ISP charges for low-latency, high-bandwidth TCP connections?

from the why-isn't-the-internet-a-democracy dept. (0)

Anonymous Coward | more than 6 years ago | (#22845034)

from the why-isn't-the-internet-a-democracy dept.
I know this is sort of off topic but I think making the internet a democracy would be a horrible idea. This would be asking for special interest groups to sway decisions (think about that one "family" group who floods the FCC with almost all the indecency complaints).

The internet should stay as free and open as possible, and if it's to fall under any political philosophy it should be libertarianism.

Re:from the why-isn't-the-internet-a-democracy dep (1)

Jax Omen (1248086) | more than 6 years ago | (#22845138)

I'd say the internet is more "organized anarchy" than anything. And yes, I do realize that's an oxymoron.

Re:from the why-isn't-the-internet-a-democracy dep (1)

cromar (1103585) | more than 6 years ago | (#22845332)

That's a very good point. On the other hand, it may be the only option to fight against massive corporate internet warfare and overlordship and censorship.

Does he explain why a persistent stream gets more? (0)

Anonymous Coward | more than 6 years ago | (#22845048)

I read TFA, but he jumped from "10x = 10x the bandwidth" to "persistent 10x = 100x the bandwidth". Can someone explain? Since he obviously didn't. And I'd like to know whether it's a complete load of bollocks or not. The way he explained it on the first page would mean that this *wouldn't* be the case. Is he twisting the truth, or just failing to explain himself adequately?

Wag their fingers? (2, Insightful)

rastilin (752802) | more than 6 years ago | (#22845074)

How do they get off saying THAT line? By their own admission, the P2P apps simply TRANSFER MORE DATA; it's not an issue of congestion control if one guy uploads 500KB/day and another uses 500MB in the same period. Hell, you could up-prioritize all HTTP uploads and most P2P uploaders wouldn't care or notice. The issue with Comcast is that instead of prioritizing HTTP, they're dropping BitTorrent. There's a big difference between taking a small speed bump on non-critical protocols for the benefit of the network and having those protocols disabled entirely.

Between the data transfer amount and THAT line, this reads like a puff piece. It's not as if the P2P applications were the first to come up with multiple connections either; I'm pretty sure download managers like "GetRight" did it first.

Re:Wag their fingers? (1)

arpunk (1017196) | more than 6 years ago | (#22845430)

That's exactly what the problem is: we shouldn't have to prioritize traffic ourselves; that should be the responsibility of TCP/IP.

Re:Wag their fingers? (1)

rastilin (752802) | more than 6 years ago | (#22846576)

That line of thinking is disturbingly like saying "We don't need a police force, the citizens should police themselves." While completely true, it misses the point that they won't. The idea of bursting seems pretty good, notwithstanding that there are traffic shaping scripts out there that have done this since 2001, and that updating the protocols for people who still run Windows 98 will be interesting if nothing else.

Ahh, here they go: "I could imagine a fairly simple solution where an ISP would cut the broadband connection rate eight times for any P2P user using the older TCP stack to exploit the multi-stream or persistence loophole. It would be fairly simple to verify whether someone is cheating with an older stack and they would be dropped to much slower connection speeds." If they don't update, cut their speed by 8 times. Genius. I can see this going down so well for people who don't want to deal with the insecurities of disabling a working system, for whatever reason, but still want networking. So how will they distinguish between P2P users? Will they do packet inspection or just nuke everyone above an arbitrary limit?

There are some really good congestion control algorithms out there, but my problem is with the article; it's a puff piece. Even right at the end, they completely ignore the fact that the guy with 11 connections is transferring MORE DATA than the user with 1 connection. Not to mention the user with one connection might be on 256/64 and the 11-connection guy on FIOS. Therefore, even with this, the end benefit to these companies will be almost... nothing. Bandwidth usage will stay exactly the same; there might be quality improvements, but if Comcast is expecting this to fix their bandwidth problems, they'll be surprised. Of course it's a straw man; he brought it up with THAT sentence. So yeah, it might provide QoS benefits to some people but it'll probably do squat for users or networks. I mean, if you transfer 500KB/day like their example, would you really notice a difference on anything remotely fast enough to benefit?

ATTN CmdrTaco: it's not a democracy because ... (2, Insightful)

darkuncle (4925) | more than 6 years ago | (#22845096)

because the Internet is a group of autonomous systems (hence the identifier "ASN") agreeing to exchange traffic for as long as it makes sense for them to do so. There is no central Internet "authority" (despite what Dept of Commerce, NetSol, Congress and others keep trying to assert) - your rules end at the edge of my network. Your choices are to exchange traffic with me, or not, but you don't get to tell me how to run things (modulo the usual civil and criminal codes regarding the four horsemen of the information apocalypse). Advocates of network neutrality legislation would clearly like to have some add'l regulatory framework in place to provide a stronger encouragement to "good behavior" (as set out in the RFCs and in the early history of internetworks and the hacking community) than the market provides in some cases. It remains to be seen whether the benefits provided by that framework would at all outweigh the inevitable loopholes, unintended consequences and general heavy-handed cluelessness that's been the hallmark of any federal technology legislation.

Those networks that show consistently boorish behavior to other networks eventually find themselves isolated or losing customers (e.g. Cogent, although somehow they still manage to retain some business - doubtless due to the fact that they're the cheapest transit you can scrape by with in most cases, although anybody who relies on them is inevitably sorry).

The Internet will be a democracy when every part of the network is funded, built and maintained by the general public. Until then, it's a loose confederation of independent networks who cooperate when it makes sense to do so. Fortunately, the exceedingly wise folks that wrote the protocols that made these networks possible did so in a manner that encourages interconnectivity (and most large networks tend to be operated by folks with similar clue - when they're not, see the previous paragraph).

Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.

Re:ATTN CmdrTaco: it's not a democracy because ... (1)

bennomatic (691188) | more than 6 years ago | (#22845664)

Not everything can be (or even should be) a democracy. Now get off my lawn, you damn hippies.


Dad? Is that you?

Re:ATTN CmdrTaco: it's not a democracy because ... (1)

Sancho (17056) | more than 6 years ago | (#22845826)

All of that is perfectly reasonable as long as there are alternatives that customers can choose. When it's a content provider, it's not hard to switch to a new ISP. When it's an end-user, it can be quite hard to switch to a new ISP (in some cases, there just aren't other choices--there are plenty of areas where there is a monopoly on broadband.)

The government's own actions to help secure that monopoly are part of the problem. Cable providers don't have to share their lines with competitors, despite having effectively secured a monopoly on cable lines from the government (through regulations on who can lay the lines, as well as financial incentives from the government for laying them in the first place.)

Then there are the anticompetitive concerns. Cable providers want people to consume entertainment that they provide. When they artificially restrict access to entertainment sites (such as Youtube), they increase the value of their own offerings to their own customers.

Then there's the issue of advertising. They advertise an internet connection. That's what I should get. Instead, I get a crippled Internet connection.

There are lots of things to consider when talking about Net Neutrality that go beyond, "It's my network, and I'll do what I want with it."

Re:ATTN CmdrTaco: it's not a democracy because ... (1)

darkuncle (4925) | more than 6 years ago | (#22846342)

for the record, I completely agree with the points you made:

* gov't regulation (or lack thereof), combined with a woeful lack of due diligence in ensuring taxpayer investment sees a decent return (the POTS system was almost entirely subsidized by taxpayer dollars, and we're still paying for that initial investment in the form of surcharges and taxes on copper laid a hundred years ago in some cases, with further technological deployments (e.g. FTTP) coming late or not at all, and always with grudging complaint on the part of telcos and hints that more subsidies and monopoly support are required to "encourage" further development).

* Many, if not most, ISPs are blatantly lying in terms of what's advertised versus what they can actually deliver (and the fine print end users have to agree to in order to get a connection allows the ISP to deliver any class of service it likes, with no recourse for the customer aside from switching providers, which often isn't an option). I firmly believe that an ISP that offers unlimited Internet access should have to deliver the Webster's definition of "unlimited", or else call the offering something else.

* the vast majority of subscribers face a duopoly, at best (and one that's entrenched and is focused primarily on maximizing return on existing investment, rather than deploying new technology to stay ahead of competition - when there isn't any competition, you don't have to offer much to stay profitable).

That said, the point I was trying to make was in response to the perhaps unintentional editorializing from Rob's tagline on the story. It's not a democracy because
1) it wasn't designed as any such thing - it's a loose confederation of autonomous systems;
2) most systems were not (and are not) directly or indirectly established and maintained by the general public of any single country or locale - why should the residents of $random_{city,county,state,country} be able to dictate operational policy for $random_privately_held_ISP? (but see my points above re: artificial monopoly and public subsidy; I'm in favor of a good return on public investment)

I think the feds have been entirely too chummy with Ma Bell (and the cablecos, and BigCorp in general) for the last several decades. However, I'm very skeptical that the answer to poor federal legislation is additional federal legislation. Our Congress has repeatedly demonstrated an uncanny ability to confuse and pervert even the clearest and most uncomplicated issues into a tangled morass of legislation that benefits only lawyers, legislators and those with the money to get the latter re-elected.

I suspect that the proposed cure for network neutrality woes would be worse than the disease in the long run.

Right, but... (1)

nweaver (113078) | more than 6 years ago | (#22845100)

TCP's fairness attempt (it's not perfect, even so) is fairness among flows. But what people desire is fairness among users.

The problem, however, is that the fairness is an externality. You COULD build a BitTorrent-type client which monitors congestion and does AIMD-style fairness common to all flows when it is clear that there is congestion in common on the streams rather than on the other side.

But there is no incentive to do so! Unless everyone else did, your "fair" P2P protocol gets stomped on like any other single-flow protocol. Fairness is an externality: you don't have a reason to be fair unless everyone else is, and the only reason the Internet IS even close is that TCP congestion control was done when there were a few thousand cooperating hosts, and any uncooperative entities could be squished.
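(A bare-bones Python sketch of the AIMD behaviour being discussed, with illustrative numbers: under per-flow fairness a 10-flow user only halves one tenth of their aggregate window on a single loss, whereas the "fair" client described above would back off every flow sharing the congested path.)

```python
# TCP-style additive-increase / multiplicative-decrease, per flow (window in segments).

class AIMDWindow:
    def __init__(self, cwnd=10.0):
        self.cwnd = cwnd

    def on_ack(self):
        self.cwnd += 1.0 / self.cwnd          # additive increase: about +1 segment per RTT

    def on_loss(self):
        self.cwnd = max(1.0, self.cwnd / 2)   # multiplicative decrease: halve the window

flows = [AIMDWindow() for _ in range(10)]     # one user, ten parallel flows
flows[0].on_loss()                            # a single loss hits only one flow
print(sum(f.cwnd for f in flows))             # 95.0 aggregate, versus 50.0 if all backed off
```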

Today, replacing the current congestion control with a user-weighted congestion control would be lovely, but it's notgonnahappen.com, because even if you could get Microsoft on board to push a new TCP stack to 90% of the world, the P2P programs will STILL play games to increase their allocation: Vuze, in its FCC filing, actually calls it a feature that using multiple flows increases its performance.

We are going away from a world where we can trust the endpoints to "play nice" in the network. I'm afraid user-fairness traffic shaping is going to be a necessity and will be widely deployed.

Additionally, you want such traffic shaping to be protocol-aware. Not just to degrade P2P, but to enhance VoIP, so even if the user is exceeding his allocation, you make sure the VoIP gets through first.

Leaving it up to applications? (1)

PolyDwarf (156355) | more than 6 years ago | (#22845132)

When you get to his actual proposal, he says that it's up to the application to send a message to the new TCP stack that says "Hey, I'm a good app, gimme bandwidth"? At least, that's how I read it.

I don't think I could walk to the kitchen and get a beer faster than it would take P2P authors to exploit that.

and for UDP ? (0)

Anonymous Coward | more than 6 years ago | (#22845188)

A lot of p2p networks use UDP; how does this version of TCP solve that?

Protocol filtering != Source/Destination filtering (3, Insightful)

Sir.Cracked (140212) | more than 6 years ago | (#22845200)

This article is all well and good, but it fails to recognize that there are two types of packet discrimination being kicked around: protocol filtering/prioritization, and source/destination filtering/prioritization. There are certainly good and bad ways of doing the former, and some of the bad ways are really bad (for a "for instance", see Comcast). However, the basic concept, that network bandwidth is finite over a set period of time and that this finite resource must be utilized efficiently, is not one most geek types will disagree with you on. Smart treatment of packets is something few object to.

What brings a large objection is the source/destination filtering: I'm going to downgrade service on packets coming from Google Video, because they haven't paid our "upgrade" tax, and coincidentally, we're invested in Youtube. This is an entirely different issue, and is not an engineering issue at all. It is entirely political. We know it is technologically possible. People block sites today, for parental censorship reasons, among others. It would be little challenge, as an engineer, to set an arbitrary set of packets from a source address to a VERY low priority. This however violates what the internet is for, and in the end, if my ISP is doing this, am I really connected to the "Internet", or just a dispersed corporate net, similar to the old AOL?

This is, and will be, a political question, and if it goes the wrong way, it will destroy what is fundamentally interesting about the net: the ability, with one connection, to talk to anyone else, anywhere in the world, no different than if they were in the next town over.

Re:Protocol filtering != Source/Destination filter (1)

smallfries (601545) | more than 6 years ago | (#22846072)

Well said. The basic dishonesty in the argument is that p2p is used as a boogie-man to allow filtering of traffic (which should be protocol filtering) by those who actually want to differentiate pricing by source/destination. It needs to be said loudly and repeatedly that the two are separate issues.

Re:Protocol filtering != Source/Destination filter (1)

eldepeche (854916) | more than 6 years ago | (#22846592)

Google bought YouTube, but you're right about the other stuff.

Why can't a network-level solution work? (0)

Anonymous Coward | more than 6 years ago | (#22845206)

I don't quite understand this analysis. Of course, switching from per stream fairness to per user fairness (or really, per host) is fine. That part makes sense. The article doesn't really do such a great job of explaining why TCP is per-stream fair, however.

Switches and routers don't work on the stream level, they work on the packet level. TCP handles this by using dropped packets as a congestion control signal. If a single host's TCP stream uses up 50% of the available bandwidth, somebody else's 50 TCP streams will experience packet drops if the aggregate tries to go over the remaining 50%.
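A quick back-of-the-envelope illustration of that per-stream arithmetic (idealized; it assumes every competing stream ends up with an equal share of the congested link):

    link_mbps = 100.0
    streams = {"host_a": 1, "host_b": 50}           # one stream vs. fifty
    total = sum(streams.values())
    for host, n in streams.items():
        print(host, round(link_mbps * n / total, 1), "Mbit/s")
    # host_a gets ~2.0 Mbit/s, host_b gets ~98.0 Mbit/s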

In order for this analysis to be true, the congestion control algorithm would have to conspire to allow multiple streams from the same host to be subject to less congestion control than one stream from a single host. I assume this is the case, since I assume the proposer of weighted TCP isn't an idiot.

Now, why does this require a change in the TCP congestion control algorithm, or even to the protocol itself? If the point is to provide fairness to individual hosts/subnets connected to the network, why not just equalize bandwidth per IP/subnet, instead of randomly dropping packets so the one who sends the most packets wins? (Traffic shaping, in other words.)
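A rough sketch of that per-IP shaping idea (an illustration, not any existing router feature): keep one queue per source address and serve the queues round-robin, so extra streams from one host only compete with each other.

    from collections import defaultdict, deque

    per_ip = defaultdict(deque)

    def enqueue(pkt):
        per_ip[pkt["src_ip"]].append(pkt)

    def drain(n_packets):
        # One packet per host per round; a host with 50 streams still gets one slot.
        sent = []
        while len(sent) < n_packets and any(per_ip.values()):
            for ip in list(per_ip):
                if per_ip[ip]:
                    sent.append(per_ip[ip].popleft())
                    if len(sent) == n_packets:
                        break
        return sent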

While I'm all in favor of technical improvements to make the basic protocols better (for example, I have multiple users on my home network, a more responsive Web browsing experience while another host is running BitTorrent would be great), I also think it's a bit pie-in-the-sky to expect a client-end solution to this. Upgrading clients en masse is always a difficult proposition, so it's best when doing so will provide immediate benefits. If you don't get a benefit until someone else upgrades, you'll be waiting a long time for users to upgrade, especially if avid P2P users start to advise each other to the effect of "don't upgrade, Internets in Vista SP2/OS X 10.5.4/Linux 2.6.31 blows chunks".

Not in This World (1)

warrior_s (881715) | more than 6 years ago | (#22845232)

The unfairness problem of transport protocols cannot be fixed in today's internet, because it requires cooperation from other nodes that forward your data, and these nodes could be anywhere in the world.
Specifically for TCP, one can just hack the OS kernel and force TCP to ignore all the congestion notifications, thus hogging all the bandwidth (it's not that difficult).

Congestion shaping at client end? WTF? (2, Interesting)

Ancient_Hacker (751168) | more than 6 years ago | (#22845244)

Lots of WTF's in TFA:
  • Expecting the client end to back off is a losing strategy. I can write over NETSOCK.DLL, you know.
  • The results of a straw poll at an IETF confab are not particularly convincing.
  • Expecting ISP's to do anything rational is a bit optimistic.
  • It's not a technical nor a political problem, it's an economic one. If users paid per packet the problem would go away overnight.

QoS is not Net neutrality (1)

serviscope_minor (664417) | more than 6 years ago | (#22845260)

Removing latency from voip packets at the expense of FTP is QoS. It's in general a quite good idea, and improves service.

Adding latency to only $foocorp (where $foocorp != $isp) so $isp can get more money violates net neutrality. This is a very bad idea, and of borderline legality, since the customer has already paid.

Confusing... (3, Insightful)

SanityInAnarchy (655584) | more than 6 years ago | (#22845294)

I need coffee before I'll really understand this, but here's a first attempt:

Despite the undeniable truth that Jacobson's TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred. Groups like the Free Press and Vuze (a company that relies on P2P) file FCC complaints against ISPs (Internet Service Providers) like Comcast that try to mitigate the damage caused by bandwidth-hogging P2P applications by throttling P2P.

Ok, first of all, that isn't about TCP congestion avoidance, at least not directly. (Doesn't Skype use UDP, anyway?)

But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections -- thus, BitTorrent can still work, and still be reasonably fast, by decreasing the number of connections. Make too many connections and Comcast starts dropping them, no matter what the protocol.

They tell us that P2P isn't really a bandwidth hog and that P2P users are merely operating within their contracted peak bitrates. Never mind the fact that no network can ever support continuous peak throughput for anyone and that resources are always shared, they tell us to just throw more money and bandwidth at the problem.

Well, where is our money going each month?

But more importantly, the trick here is that no ISP guarantees any peak bitrate, or average bitrate. Very few ISPs even tell you how much bandwidth you are allowed to use, but most reserve the right to terminate service for any reason, including "too much" bandwidth. Comcast tells you how much bandwidth you may use, in units of songs, videos, etc, rather than bits or bytes -- kind of insulting, isn't it?

I would be much happier if ISPs were required to disclose, straight up, how much total bandwidth they have (up and down), distributed among how many customers. Or, at least, full disclosure of how much bandwidth I may use as a customer. Otherwise, I'm going to continue to assume that I may use as much bandwidth as I want.

But despite all the political rhetoric, the reality is that the ISPs are merely using the cheapest and most practical tools available to them to achieve a little more fairness and that this is really an engineering problem.

Yes, it is a tricky engineering problem. But it's also a political one, as any engineering solution would have to benefit everyone, and not single out individual users or protocols. Most solutions I've seen that accomplish this also create a central point of control, which makes them suspect -- who gets to choose what protocols and usage patterns are "fair"?

Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens. This is accomplished by the single-stream application tagging its TCP stream at a higher weight than a multi-stream application. TCP streams with higher weight values won't be slowed as much by the weighted TCP stack whereas TCP streams with smaller weight values will be slowed more drastically.
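One reading of that quoted mechanism, as a sketch (the scaling rule below is an assumption, not anything specified in the proposal): on packet loss, higher-weight streams cut their congestion window by less.

    def on_packet_loss(cwnd, weight):
        # weight=1 behaves like classic TCP (halve the window); a stream tagged
        # weight=8 only gives up 1/16th of its window per loss, so a single-stream
        # app can keep pace with an app that opened eight weight-1 streams.
        return max(1.0, cwnd * (1.0 - 0.5 / weight))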

Alright. But as I understand it, this is a client-side implementation. How do you enforce it?

At first glance, one might wonder what might prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence cheat advantage by installing a newer TCP implementation.

Nope. What I wonder is why a P2P user might want to do that, rather than install a different TCP implementation -- one which tags every single TCP connection as "weighted".

Oh, and who gets to tag a connection -- the source, or the destination? Remember that on average, some half of the BitTorrent connections, whether they ultimately upload or download, are going to be inbound -- someone outside the network connecting to the P2P user. Who gets to set the weight here?

In other words, this looks entirely too trivial to game.

But without this fundamental fix in TCP congestion control, ISPs have no choice but to specifically target P2P applications since those are undeniably the applications that hog the network.

Or they could simply target any application which uses too many connections. Sure, that generally means P2P applications, but it would be a lot fairer than singling out any packets which appear to have a BitTorrent protocol header.
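A protocol-agnostic version of that, sketched with a hypothetical per-host cap (the threshold is made up; nothing here describes what Comcast actually does):

    MAX_CONNS_PER_HOST = 60          # hypothetical policy number

    conns = {}                       # src_ip -> currently tracked connections

    def on_new_connection(src_ip):
        if conns.get(src_ip, 0) >= MAX_CONNS_PER_HOST:
            return "reject"          # e.g. drop the SYN rather than forging resets
        conns[src_ip] = conns.get(src_ip, 0) + 1
        return "accept"

    def on_connection_closed(src_ip):
        if conns.get(src_ip, 0) > 0:
            conns[src_ip] -= 1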

At the very least, I think we can all agree that the current system is broken and that we need a TCP implementation that treats individual users and not individual flows equally.

If it's possible, I'm all for it. But how do we determine an "individual user"? Even that concept is tricky.

Re:Confusing... (1)

baerm (163918) | more than 6 years ago | (#22847044)

But the problem here, I think, is that George Ou is assuming that Comcast is deliberately targeting P2P, and moreover, that they have no choice but to deliberately target P2P. I'd assumed that they were simply targeting any application that uses too many TCP connections -- thus, BitTorrent can still work, and still be reasonably fast, by decreasing the number of connections. Make too many connections and Comcast starts dropping them, no matter what the protocol.

My limited experience with Comcast problems (in Northern California coffee shops, since I don't use Comcast at home) is that when their pipe fills up, they start sending 'host unreachable' messages to random connections. My guess is that instead of shaping and using the TCP backoff mechanism, their NAT'ing router just says "I have X TCP connections going through me; any new ones get 'host unreachable' until I have fewer than X." They also use an Asian net block for their NAT'd IPs instead of, say, the 10.x or the 172 and 192 ranges; I don't know what's up with that. But without knowing more (and they seem to prefer truthiness over truth), I'm frankly, um, underimpressed with their competency in running networks.

FUD (5, Insightful)

Detritus (11846) | more than 6 years ago | (#22845298)

The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.

Re:FUD (1)

Kjella (173770) | more than 6 years ago | (#22845578)

Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No.
What about download accelerators? On a congested server, I've seen a near-linear increase in bandwidth from opening multiple streams (which many servers now limit, but that's not really the point). When I go from 25kb/s to 100kb/s, I took that bandwidth from someone. Same with some slow international connections where there's plenty on both ends but crap in the middle. I would honestly say I'm gaming the system then. P2P has a "natural" large number of streams because it has so many peers, but there's no denying that it too, in part, benefits from this.

Re:FUD (2, Informative)

asuffield (111848) | more than 6 years ago | (#22846790)

What about download accelerators? On a congested server, I've seen a near linear increase in bandwidth by opening multiple streams (which many servers now have limited, but not really the point). When I go from 25kb/s to 100kb/s, I took that bandwidth from someone.


You're making the same mistake as the author of that article. What you fail to realise is precisely why the single connection did not operate as fast: because your kernel was slowing it down incorrectly. You are not fighting other users by opening more connections, you are fighting your own TCP implementation.

Yes, that bandwidth came from somewhere - but it's probably bandwidth that wasn't in use anyway, and your TCP implementation was just failing to get at it. For a change that dramatic, I bet it was the Windows implementation (which is known to suck).

All of this has NOTHING TO DO with congestion control on the internet. This is the ad-hoc mode used between equal peers on brainless bus systems like unmanaged switches and hubs. On the internet, congestion control is performed by QoS on real routers. ISPs track the bandwidth load by source address or whatever, and distribute traffic fairly between them (some penny-ante ISPs may run without QoS, but you shouldn't be using them). You are not "gaming the system" by working around the limitations of your own TCP implementation, because that isn't the system.

The article is pure gibberish. And it's wrong.

Re:FUD (1)

RickHunter (103108) | more than 6 years ago | (#22845840)

And this is exactly the problem we're going to keep running into. People like this who want the Internet to return to a simplistic, centrally controlled few producers - many consumers model, rather than the distributed P2P model it's rapidly moving towards. P2P might be mostly about questionably-legal content distribution now, but the technology's going to be used for more and more "legitimate" purposes in years to come... If ISPs and "old media" advocates don't manage to kill it first.

Biased and poorly written (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22845304)

3/4 of this article is basically an argument against net neutrality and P2P. It also seems to misrepresent the way ISPs currently work for users in order to make its point. The article says that if a user opens up multiple streams (not just P2P, but anything: FTP, HTTP downloads, BitTorrent), they're somehow "hogging" 10x the bandwidth of other users on the network.

But any idiot knows this isn't true: opening multiple streams only hurts you locally (causing everyone else in your house major slowdown and latency). The maximum download rate is the same and governed by your modem speed (1.5mbit, 5mbit, etc). I'm not suddenly downloading at 100mbit and hogging the shared bandwidth of the ISP. Also, if your ISP's TOS has no clause relating to bandwidth usage or limitations, you have the right to use all your available bandwidth 24/7 within reason. You pay for it. If that's not enough, then charge more. I've actually called my ISP on this before and specifically asked them, "So it's OK if I am downloading at max speed 24 hours a day all month?" And they unequivocally stated yes.

Also doesn't anyone else find it funny that the author seems to think everyone should be limited to ONE stream? "Only big corporations need more..." WTF?

A nice little Net village in 1987 (1)

tringtring (1227356) | more than 6 years ago | (#22845336)

"By mid 1987, computer scientist Van Jacobson who is one of the prime contributors to the TCP/IP stack created a client-side patch for TCP that saved the day. Every computer on the Internet - roughly 30,000 in those days - was quickly patched by their system administrators."...

It must have really been a nice little Internet village at that time.

What's so wrong about this idea (1)

Kjella (173770) | more than 6 years ago | (#22845470)

...is that it expects every client to play nice. Ideas like "I could imagine a fairly simple solution where an ISP would cut the broadband connection rate eight times for any P2P user using the older TCP stack to exploit the multi-stream or persistence loophole." are such a major WTF I can't begin to describe it. If they wanted to control it, there should be congestion control where packets are tagged with a custom ID set by the incoming port on the ISP's router. So if you have 5 TCP streams coming into the router from two users (four from the first, one from the second), they'll be tagged like this:

TCP(1)
TCP(1)
TCP(1)
TCP(1)
TCP(2)

At the next router there'd be two virtual pipes:
ID1 (4 connections)
ID2 (1 connection)

It should then start randomly dropping packets from these pipes, so that half the drops come from each user. That would still be network neutrality to me, since it's content-neutral and protocol-neutral. The "persistence" cheat is complete hogwash though, since there's no cheat: demanding information 24/7 takes more bandwidth than an application that doesn't, well duh. What's next, is sending video when an article would do also cheating?
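A sketch of that tag-then-drop idea (one reading of the comment; capacity left over from light users is ignored for brevity):

    import random
    from collections import defaultdict

    def drop_excess(packets, capacity):
        # Group by the user id stamped at the first hop, give each user an equal
        # slice of the outgoing capacity, and drop randomly within each slice.
        by_user = defaultdict(list)
        for pkt in packets:
            by_user[pkt["user_id"]].append(pkt)
        share = capacity // max(1, len(by_user))
        kept = []
        for pkts in by_user.values():
            random.shuffle(pkts)
            kept.extend(pkts[:share])
        return kept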

Trick Question (0)

jimwelch (309748) | more than 6 years ago | (#22845492)

When I teach the Computer Merit Badge, I ask this trick question.

Q: How many computers exist on the OFFICIAL INTERNET? ....
A: One! Root DNS server (A)

Later we get to 13. But the rest of the computers, routers, servers, DNS's, DHCP, etc. on the internet belong to a corporation or local government. Only one belongs to the "official" internet, IANA.

Is this correct? I know it is a simplification, but I use this as a training tool to introduce ISP's roles.

Porn (1)

blakbeard0 (1246212) | more than 6 years ago | (#22845494)

Yeah, there are a few issues causing this:
1. Porn
2. Pr0n
...
3. Porn?

OMG it is Tubes! (1)

Friday (27240) | more than 6 years ago | (#22845608)

And I thought senator Ted Stevens was crazy!
http://blogs.zdnet.com/Ou/?p=1078&page=2 [zdnet.com]

I knew I shouldn't have read the article...

Engineering Solution (1)

autocracy (192714) | more than 6 years ago | (#22845634)

Figure out the real committable bandwidth (available bandwidth / customer connections). Then, tag that amount of customer information coming into the network with a priority tag. Customers may prioritize what they want, and it will be respected up to the limit.

Example: 1000k connection shared between 100 people who each have 100k pipes. They get a committed 10k. The first 10k of packets in per second that are unmarked are marked "priority." Packets marked "low" are passed as low. Packets marked "high" or unmarked are passed as high up to the committed limit. Use token buckets [wikipedia.org] for this. This would allow customers who care to choose what they want their committed bandwidth to be while leaving a free for all for the rest that's left over. No end user configuration necessary if you don't care or know to. No patches needed.

If you're feeling fancy, use logarithmically growing buckets of multiple tiers, where tier zero always passes, tier one passes when there are no tier-zero packets waiting, and so on.
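A token-bucket sketch of the committed-rate marking described above (the numbers follow the 10k-per-customer example; the "high"/"low" class names and unit choice are assumptions):

    import time

    class TokenBucket:
        def __init__(self, rate_per_s, burst):
            self.rate = rate_per_s       # committed rate, e.g. 10k units/s
            self.capacity = burst
            self.tokens = burst
            self.last = time.monotonic()

        def mark(self, pkt_len):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_len:
                self.tokens -= pkt_len
                return "high"            # inside the customer's committed 10k
            return "low"                 # over the commitment: best-effort only

    bucket = TokenBucket(rate_per_s=10_000, burst=10_000)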

Flow bandwidth per connection, per host, per app? (1)

flux (5274) | more than 6 years ago | (#22845714)

While on a certain level splitting the available bandwidth fairly between separate users seems, well, fair, what about multi-user systems? In those cases the whole system would likely put all the users under the same bandwidth limitation. [Note: I haven't read the white paper describing the proposed system.]

Well, that might also seem fair, and there aren't that many multi-user systems around, at least compared to desktops. What if you give each user on the system their own IP? That could easily be done in an IPv6 system, or if you otherwise happen to have a few extra C-classes around. Actually, with this method you could also get a multiple of a single host's share of the bandwidth to your desktop, provided you have enough IP addresses to use.

In practice a bittorrenter would want at least two IPs: one for bittorrenting and one for surfing. This is because the bittorrenting host would likely be much slower for surfing the web, unless the system somehow knows that there are two different applications doing the magic. (A local gateway with bandwidth limiting would not be as efficient, although, for example, stopping bittorrent activity for the duration of surfing should help.) Indeed, that would be one fair approach: each separate application would have its own fair share of bandwidth, perhaps according to its own bandwidth desires. In some comments this approach was mentioned in passing, and apparently the talk mentions it as well, but it has one undesirable side effect: obviously an application that attempts to get as much bandwidth as possible would pose as n separate applications.

Hey, I think Vista and MacOSX have a solution for this, I hear they have DRM/Trusted Computing.. ;-)

Re:Flow bandwidth per connection, per host, per ap (1)

jandrese (485) | more than 6 years ago | (#22846006)

This is an interesting point. I think most bittorrent users would agree that their websurfing doesn't seem to suffer much when BT is running, despite what you might expect.

Frankly, I think this article is a dirty trick. The author is talking about making the internet more "fair", but the ramification of his change is that ISPs will be able to charge more for "better" service if they want. In an attempt to make the network more fair, he could make it inherently unfair.

One way to implement this... (2, Informative)

vrmlguy (120854) | more than 6 years ago | (#22845818)

Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link. [...] The other major loophole in Jacobson's algorithm is the persistence advantage of P2P applications where P2P applications can get another order of magnitude advantage by continuously using the network 24×7.
I agree with the first point, but not with the second. One of the whole points of having a computer is that it can do things unattended. Fortunately, the proposal seems to only fix the first issue.

I'd think that a simple fix to Jacobson's algorithm could help a lot: instead of resetting the transmission rate on just one connection when a packet is dropped, reset all of them. This would have no effect on anyone using a single stream, and would eliminate problems when the source of the congestion is nearby. Variations on this theme would include resetting all connections for a single process or process group, which would throttle my P2P without affecting my browser. This alone would be more than enough incentive for me to adopt the patch: instead of having to schedule different bandwidth limits during the day, I could just let everything flow at full speed 24x7. And by putting the patch into the kernel, you'd have less to worry about individual applications and/or users deciding whether to adopt it.
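A sketch of that "reset them all" variant (the grouping key, host vs. process group, is left abstract here):

    flows = {}   # flow_id -> {"cwnd": ..., "group": ...}

    def on_loss(flow_id):
        # One dropped packet triggers the multiplicative decrease on every flow
        # in the same group, not just the one that saw the loss.
        group = flows[flow_id]["group"]
        for state in flows.values():
            if state["group"] == group:
                state["cwnd"] = max(1.0, state["cwnd"] * 0.5)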

Re:One way to implement this... (0)

Anonymous Coward | more than 6 years ago | (#22846956)

"I'd think that a simple fix to Jacobson's algorithm could help a lot. Instead of resetting the transmission rate on just one connection on a dropped packet, reset all of them."

So because I have an ssh session to some crappy machine on the other end of a satellite phone link that drops packets left and right, all my other sessions throttle down to match it? No thanks...

Or because I visit one web page on a crappy connection, all the downloads my browser is doing and the other web sites I'm looking at get throttled down to match...

low vs high priority marking (1)

hopeless case (49791) | more than 6 years ago | (#22845896)

Does anyone remember reading about a scheme for turning the usual QoS technique upside down?

That is, instead of marking the packets you really care about (VoIP packets, say) high priority, you mark the ones you don't care that much about (BitTorrent downloads) as low priority?

I recall reading about low priority marks having interesting advantages over high priority marks. It had to do with the high priority marks relying on perverse incentives (almost all routers would have to play by the rules and the more they did, the higher the payoff for not playing by the rules), while the low priority marks did not (you would start to see benefits if only a few routers amongst a sea of cheaters honored the concept).
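That sounds like the "scavenger class" idea. A minimal sketch of how an application could mark its own sockets low priority, assuming the platform exposes IP_TOS and the routers along the path actually honor the mark (DSCP CS1 is the conventional low-priority codepoint):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # DSCP CS1 is 8; the TOS byte carries the DSCP value shifted left two bits.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 8 << 2)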

It has an Achilles heel (2, Insightful)

Percy_Blakeney (542178) | more than 6 years ago | (#22846090)

Here is the glaring flaw in his proposal:

That means the client side implementation of TCP that hasn't fundamentally changed since 1987 will have to be changed again and users will need to update their TCP stack.

So he wants everyone, especially P2P users, to voluntarily update their TCP stack? Why in the world would a P2P user do that, when they know that (a) older stacks would be supported forever, and (b) a new stack would slow down their transfer rates? He does mention this problem:

At first glance, one might wonder what might prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence "cheat" advantage by installing a newer TCP implementation... I could imagine a fairly simple solution where an ISP would cut the broadband connection rate eight times for any P2P user using the older TCP stack to exploit the multi-stream or persistence loophole.

There are two issues with this solution:

  1. How would the ISP distinguish between a network running NAT and a single user running P2P?
  2. If you can reliably detect "cheaters", why do you need to update the users' TCP stacks? You would just throttle the cheaters and be done with it.

It's nice that he wants to find a solution to the P2P bandwidth problem, but this is not it.

Bandwidth still isn't free. (3, Informative)

clare-ents (153285) | more than 6 years ago | (#22846128)

In the UK, bandwidth out of BT's ADSL network costs ~ £70/Mbit/month wholesale. Consumer DSL costs ~ £20/month.

You've got three options:

#1 Have an uncapped uncontended link for the £20/month you pay - you'll get about 250kbps.

#2 Have a fast link with a low bandwidth cap - think 8Mbits with a 50GB cap and chargeable bandwidth after that at around ~ 50p-£1/GB

#3 Deal with an ISP who's selling bandwidth they don't have and expect them to try as hard as possible to make #1 look like #2 with no overage charges.

If you want a reliable, fast internet connection, you want to go with a company that advertises #2. If you can't afford #2, you can spend your time working against the techs at ISP #3, but expect them to go out of their way to make your life shit until you take your service elsewhere, because you cost them money.
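For scale, the ~250kbps figure in #1 follows straight from those wholesale numbers:

    wholesale_per_mbit = 70.0    # GBP per Mbit/s per month
    retail = 20.0                # GBP per month
    print(round(retail / wholesale_per_mbit * 1000))   # ~286 kbit/s of truly uncontended capacity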

I love my unfair kernel. (1)

Surt (22457) | more than 6 years ago | (#22846140)

If you have a basic understanding of TCP, and reasonable C skills, it is not at all hard to make your kernel play unfair, and it can really make a big difference to your transmission rates, assuming you have a reliable connection. I sometimes wonder how many people out there have an unfair kernel like me.

Driving Miss Internet (3, Informative)

UttBuggly (871776) | more than 6 years ago | (#22846280)

WARNING ~! Core dump follows.

It occurred to me this morning that driving on public roadways and surfing the public networks were identical experiences for the vast majority of people. That experience being; "mine, mine, ALL MINE!....hahahaha!" AKA "screw you...it's all about me!"

Now, I have the joy of managing a global network with links to 150 countries AND a 30 mile one way commute. So, I get to see, in microcosm, how the average citizen behaves in both instances.

From a network perspective, charge by usage...period. Fairness only works in FAIRy tales.

We do very good traffic shaping and management across the world. QoS policies are very well designed and work. The end user locations do get charged an allocation for their network costs. So, you'd think the WAN would run nicely and fairly. After all, if the POS systems are impacted, we don't make money and that affects everyone, right?

Hardly. While we block obvious stuff like YouTube and Myspace, we have "smart" users who abuse the privilege. So, when we get a ticket about "poor network performance", we go back to a point before the problem report and look at the flows. 99 out of 100 times, it's one or more users hogging the pipe with their own agenda. Now, the branch manager gets a detailed report of what the employees were doing and how much it cost them. Of course, porn surfers get fired immediately. Abusers of the privilege just get to wonder what year they'll see a merit increase, if at all.

So, even with very robust network tuning and traffic shaping, the "me, me" crowd will still screw everybody else...and be proud that they did. Die a miserable death in prison you ignorant pieces of shit.

Likewise the flaming assholes I compete with on the concrete-and-asphalt network link between home and office every day. This morning, some idiot in a subcompact stuck herself about 2 feet from my rear bumper...at 70mph. If I apply ANY braking for ANY reason, this woman will collide with me. So, I tapped the brakes so she'd back off. She backed off with an upraised hand that seemed to say "yeah, I know I was in the wrong and being unsafe." She then performed 9 lane changes, all without signaling once, and managed to gain....wait for it.... a whole SEVEN SECONDS over 10 miles of driving.

I see it every day. People driving with little regard for anyone else and raising the costs for the rest of us. On the network, or on the highway, same deal. And they feel like they did something worthwhile. I've talked to many users at work and the VAST majority are not only unapologetic, but actually SMUG. Many times, I'll get the "I do this at home, so it must be okay at work". To which I say, "well you cannot beat your wife and molest your kids at the office, now can you?"

My tolerance of, and faith in, my fellow man to "do the right thing" are at zero.

A technical solution (to TCP congestion control, etc.) is like teaching a pig to sing: horrible results. Charge the thieving, spamming bastards through the nose AND constrain their traffic. That'll get better results than any pollyanna crap about "fair".

George Ou Still has a job (0)

Anonymous Coward | more than 6 years ago | (#22846290)

The guy is a jackass and lives in a reality that is more distorted than Steve Jobs'.

At least jobs makes decent computers.

Doesn't stand a chance (5, Insightful)

gweihir (88907) | more than 6 years ago | (#22846314)

Every year or so somebody else proposes to "fix TCP". It never happens. Why?

1) TCP works well.
2) TCP is in a lot of code and cannot easily be replaced
3) If you need something else, alternatives are there, e.g. UDP, RTSP and others.

Especially 3) is the killer. Applications that need something else are already using other protocols. This article, like so many similar ones before it, is just hot air from somebody who either didn't do their homework or wants attention without deserving it.

Interesting... but there's always a catch (0)

Anonymous Coward | more than 6 years ago | (#22846490)

What's to stop users from using their P2P applications to forge packets that all say that they deserve the best weight? Even if they noticed that a ton of packets were all marked as being weighted highly, what should they do about it without having the same debate we're having now?

This seems like a good idea, but it requires the same algorithm applied to quality of service instead of to the number of packets. It doesn't mean that people couldn't "game" the system just as well.

Garbage! (1)

j h woodyatt (13108) | more than 6 years ago | (#22846894)

Ask yourself this question: is $BIGWEBSITE one user or millions of users?

Will $BIGWEBSITE be required to use a "weighted TCP stack" and apportion each client their "fair share" of the network or will they get a special deal that allows them to use the traditional AIMD congestion control and rate adaptation algorithms? If the latter and not the former, why? Will ordinary residential customers be able to get such deals? If not, why not?

p.s. Yes, these are rhetorical questions, and the next time I'm in an IETF presentation from one of these people, I'll be sure to put them to the presenter and watch the rhetorical handwaving come back in response.

Solution (2, Interesting)

shentino (1139071) | more than 6 years ago | (#22846934)

Personally, I think they should move to a supply and demand based system, where you are charged per packet or per megabyte, and per-unit prices rise during periods of peak demand.

There are a few power companies who announce 24 hours in advance how much they're going to charge per kWh in any given hour, and their customers can time their usage to take advantage of slack capacity, since the prices are based on demand.

If we do the same thing with internet service *both in and out*, a real bandwidth hog is going to wind up paying a shitload of money for his service, especially if he tries to tie up the net during peak hours. However, a casual user won't get burned.
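A toy version of that billing model (all prices and usage figures are hypothetical):

    PEAK_PER_GB, OFFPEAK_PER_GB = 0.10, 0.02    # hypothetical per-GB prices

    def monthly_bill(peak_gb, offpeak_gb):
        return peak_gb * PEAK_PER_GB + offpeak_gb * OFFPEAK_PER_GB

    print(monthly_bill(peak_gb=5, offpeak_gb=20))      # casual user: 0.90
    print(monthly_bill(peak_gb=200, offpeak_gb=800))   # heavy seeder: 36.00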

And, coincidentally, it would solve the nasty "the RIAA's making me block BitTorrent" excuse from Comcast, or at least make it much harder for them to hide behind such a statement.

One particular property shared by almost ALL multimedia is that it is friggin HUGE. A movie can easily run into multiple gigabytes.

So start charging per-unit fees, and you'll put a massive leash on filesharing of media files. Suddenly, all those shared movies are costing major beaucoup to get, and they start going away.

Metered Net isn't really a suggestion... (0)

Anonymous Coward | more than 6 years ago | (#22847066)

The strange thing about this is how he discusses that he "debunked" the use of metered internet.

Now, while I agree that metered internet services are a bad idea, the author missed the point of the statement: saying that we can see a working implementation is much different from saying it's perfect.

What I believe the EFF was suggesting was offering something akin to a $5-to-$10 service with a monthly cap, a sort of a la carte internet for those housewives who use significantly less than a gig a month, in order to stop the whole argument that people aren't paying more for their excessive net usage.

If a cheap, metered internet existed, it would effectively eliminate that argument. People could pay more for more service (something that we can't effectively do right now). But just like with cables services, the ISPs make their money by offering gluttonous services to people who don't need or want them.

So essentially the argument the eff was making breaks down to "You can't complain about our usage because you don't offer any alternatives." People paid for an unlimited service, and the ISPs will be DAMNED if they'll let people have it.

But like I said, I'd rather see the ISPs build out their networks to support their own offerings than have them offer a la carte services.