
Bufferbloat: Dark Buffers In the Internet

timothy posted more than 2 years ago | from the perfect-for-dark-fiber dept.


CowboyRobot writes that in a new ACM Queue article expanding on his earlier work, Jim Gettys of Bell Labs "makes the case that the Internet is in danger of collapse due to 'bufferbloat,' 'the existence of excessively large and frequently full buffers inside the network.' Part of the blame is due to overbuffering; in an effort to protect ourselves, we make things worse. But the problem runs deeper than that. Gettys' solution is AQM (active queue management), which is not deployed as widely as it should be. 'We are flying on an Internet airplane in which we are constantly swapping the wings, the engines, and the fuselage, with most of the cockpit instruments removed but only a few new instruments reinstalled. It crashed before; will it crash again?'"


Is this a problem? (1)

sunderland56 (621843) | more than 2 years ago | (#38246880)

the existence of excessively large and frequently full buffers

Seems better than the existence of excessively large and seldom if ever full buffers.

Re:Is this a problem? (4, Insightful)

Anonymous Coward | more than 2 years ago | (#38246918)

You never want a full buffer. At that point, it ceases to do its job.

Re:Is this a problem? (5, Informative)

CyprusBlue113 (1294000) | more than 2 years ago | (#38246938)

That is actually the exact problem. You do not want buffers larger than the flight time of your circuit. You absolutely want the buffers to fill and drop packets otherwise.

Re:Is this a problem? (5, Funny)

syousef (465911) | more than 2 years ago | (#38248628)

That is actually the exact problem. You do not want buffers larger than the flight time of your circuit. You absolutely want the buffers to fill and drop packets otherwise.

You talkin' smack, fool? I will end you! I bloat like a buffer, sting like a TCP!


Re:Is this a problem? (4, Informative)

skids (119237) | more than 2 years ago | (#38246926)

Seems so, but isn't. For TCP traffic, a shallow buffer that drops traffic will result in more goodput than a deep buffer. Which is the point.

Re:Is this a problem? (5, Informative)

pla (258480) | more than 2 years ago | (#38247022)

Seems so, but isn't. For TCP traffic, a shallow buffer that drops traffic will result in more goodput than a deep buffer. Which is the point.

Yes and no...

If you don't (or only rarely) fill your buffer, a smaller buffer introduces less latency than a large one, while still allowing you to maximize throughput. If, however, you usually have your buffer full, you increase latency for literally no benefit, since you've already maximized throughput simply through resource demand.

The former will occur when your average load falls below your actual bandwidth, and allows you to get the most out of your link. The latter occurs when you consistently exceed your bandwidth, in which situation you may as well not even have a buffer, because it only increases latency without increasing throughput. That describes TFA's real point.

What he suggests amounts to actively choosing between those two conditions - If your average demand falls below your link speed, a larger buffer will help smooth the load over time. If, however, your average demand exceeds your link speed, throw away the buffer because it doesn't help.

But as per the GP's point - If you have an always-full buffer, you literally gain nothing but latency.

Re:Is this a problem? (5, Informative)

CyprusBlue113 (1294000) | more than 2 years ago | (#38247046)

The problem with buffers is that almost all of the time they are configured by size in bits. They need to be sized based on the bit flight time of the circuit, which is the delay (in ms) times the throughput (in bits). The disconnect between those values is a problem in *either* direction, especially past the retransmit threshold on the oversized side.

Buffers should be dynamically sized based on the flight time of data on the specific link, and ideally kept updated. WRED is also highly suggested.

What really exacerbates the issue is devices with buffers that must be the same size for all links on X (be it card, slot, or chassis).
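A back-of-the-envelope sketch of that sizing rule (Python; the link rate and RTT are made-up example values):

<ecode>
# Bandwidth-delay product: the amount of data "in flight" on a path.
# Buffering much beyond this adds latency without adding throughput.
def bdp_bytes(rate_bps, rtt_seconds):
    return rate_bps * rtt_seconds / 8

# Example: a 10 Mbit/s link with 50 ms of round-trip delay.
print(bdp_bytes(10_000_000, 0.050))  # 62500.0 bytes, ~42 full-size packets
</ecode>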


Re:Is this a problem? (-1, Flamebait)

joocemann (1273720) | more than 2 years ago | (#38247380)

I think we can sum it up as "you can't fit 11 turds in a 10 turd bucket". No matter how they try to trick the system, the issue is bandwidth... clearly, buffers are supposed to be for extremely rare unexpected peaks... the reality being that they are being used as band-aids on a much bigger problem.

Now if we could just back off the military spending, even just a little, we could upgrade infrastructure in the US. But military spending feeds campaigns more than citizens do...... and here we are.

Re:Is this a problem? (2)

CyprusBlue113 (1294000) | more than 2 years ago | (#38247406)

No, buffers are not for "rare peaks". They are fundamentally required due to the physics of data transmission from one device to another, especially when the link speeds are different from one hop to the next.

Re:Is this a problem? (1)

egamma (572162) | more than 2 years ago | (#38247962)

Now if we could just back off the military spending, even just a little, we could upgrade infrastructure in the US. But military spending feeds campaigns more than citizens do...... and here we are.

Huh? You are aware that the Internet is owned by companies like Level3, which doesn't spend any money on military campaigns, aren't you?

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38248434)

Except against Cogent...

Re:Is this a problem? (3, Interesting)

Shinobi (19308) | more than 2 years ago | (#38248560)

Except that Cogent is the trouble instigator in every single instance. They are like the little trouble maker who runs around poking, kicking, pinching, insulting the larger kid at the playground, yet cries loudly and acts very hurt when he gets slapped in return.

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38249532)

Not Level3; your money is going to finance whatever campaign the military-spending machine feels it needs to promote to survive.

The problem is you have money and they're gonna take it from you to put to their use (which supposedly you didn't want when you cast your vote).

Re:Is this a problem? (1)

Surt (22457) | more than 2 years ago | (#38250358)

They don't pay taxes?

Re:Is this a problem? (1)

egamma (572162) | more than 2 years ago | (#38250590)

They don't pay taxes?

Do you have some evidence that if we weren't at war, that taxes would be lower? Or even if taxes were lower, that the extra money would be spent on infrastructure rather than stock dividends?

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38248602)

That's not the purpose of a buffer: a buffer is used for handling differences in speed.

For example, if you read a video stream directly, then most of the time your connection would be fast enough to let you view the video in real time. However, during some moments it might be too slow, and you would have to wait.

By buffering, you take advantage of the moments when the speed is higher to handle the moments when the speed is lower.

So you don't need to fit 11 bytes in a 10-byte bucket: you store 10 when your speed is good, and start using them up when your speed drops.
The problem is not whether your buffer gets full, but that it might get empty.

However, if your speed is constantly bad, having a larger buffer won't improve it. The buffer will be empty all the time, and you get a lot of overhead.

Re:Is this a problem? (2)

trschober (1192951) | more than 2 years ago | (#38250256)

You, sir, did not understand which buffers are being discussed. Buffering a movie in no way compares to TCP buffering.

Re:Is this a problem? (0)

ravenshrike (808508) | more than 2 years ago | (#38250366)

Cutting military spending entirely would still leave us with a deficit in each year's budget (oh wait, a budget hasn't been passed in the past 4 years?) that is only going to grow exponentially from social programs, and that was before Obammy went on his spending spree and guaranteed a metric asston of future spending.

Re:Is this a problem? (3, Interesting)

TheLink (130905) | more than 2 years ago | (#38248430)

You don't necessarily have to size them in flight time of the circuit.

What you can do is have huge buffers, but just drop packets that are older than say 50 milliseconds since the time they entered the device (if the link/hop is supposed to be fast and low latency).

If the link is slow and/or high latency, you may wish to use higher values - 100 milliseconds. But not too high. I'm no networking expert but I don't really see the purpose of adding hundreds of milliseconds to a hop just to save a few packets that are likely to be dropped anyway, or should be dropped as an indirect signal that whoever is sending those packets should slow down.
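A minimal sketch of that scheme (Python; the class and the 50 ms figure are illustrative): timestamp packets on entry and discard any that have waited too long.

<ecode>
import time
from collections import deque

class TimeLimitedQueue:
    """A large buffer that drops packets by age rather than by count."""
    def __init__(self, max_age=0.050):   # 50 ms sojourn limit
        self.max_age = max_age
        self.q = deque()

    def enqueue(self, packet):
        self.q.append((time.monotonic(), packet))

    def dequeue(self):
        # Discard anything that has already waited longer than max_age.
        while self.q:
            entered, packet = self.q.popleft()
            if time.monotonic() - entered <= self.max_age:
                return packet
        return None
</ecode>

Dropping by time-in-queue rather than byte count keeps the added latency bounded no matter how big the buffer is or how slow the link gets.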

Re:Is this a problem? (3, Insightful)

skids (119237) | more than 2 years ago | (#38247076)

What he suggests amounts to actively choosing between those two conditions - If your average demand falls below your link speed, a larger buffer will help smooth the load over time.

That's a pretty simplified way of putting it, but basically correct. Major equipment vendors have been slow to adopt more advanced queuing strategies (Stochastic Fair Queuing integrated with some of the more advanced flavors of early discard.) Fortunately we're budgeted for and piloting a shaper for purchase soon, and this time around have a chance to get something both well supported and cutting edge.

Personally I pine for ATM's ABR CoS with its fast end-to-end congestion notification, but as history has shown us, the inevitable fate of the tech world is for the inferior to be gradually, painfully, and kludgingly adapted to become the same thing as the technologies it displaced through lowballing. In this case, that inferior thing being IP/ethernet.

Re:Is this a problem? (1, Offtopic)

Ethanol-fueled (1125189) | more than 2 years ago | (#38247114)

...the inevitable fate of the tech world is for the inferior to be gradually, painfully, and kludgingly adapted to become the same thing as the technologies it displaced through lowballing.

In this case Gettys is acting as an apologist for the municipal and/or corporate resistance to augmenting American infrastructure to 21st century standards. Whenever you have an elite few pinching pennies at the expense of the many, then you know that the ZOG machine is behind it. Their propaganda will explain everything away using bureaucratic technobabble while the final solution* to the problem is painfully obvious to anybody.

* the final solution being to railroad more fiber directly into the processing facilities.

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38247610)

If it works, it'll make us free. At that point I'd just follow whatever orders came my way.

Re:Is this a problem? (1)

houstonbofh (602064) | more than 2 years ago | (#38247722)

No, because it only takes one congested link, and then the buffers start filling up along the entire path. In other words, my crappy D-Link router can start filling up the buffers in the core Internet. OK, not just mine, but a few dozen crappy routers, and it can be a regional problem. And we all know those "free" cable modems are of the highest quality...

Doing it wrong, again (4, Interesting)

Animats (122034) | more than 2 years ago | (#38248234)

That's a pretty simplified way of putting it, but basically correct. Major equipment vendors have been slow to adopt more advanced queuing strategies (Stochastic Fair Queuing integrated with some of the more advanced flavors of early discard.)

Right. The problem is not big buffers, per se. It's big dumb FIFO queues. There's nothing wrong with one big flow, like a file transfer, having a long latency, provided that other flows with less data in flight aren't stuck behind it. That's what "fair queuing" is all about. Each flow has its own queue, and the queues are serviced in a round-robin fashion. (With stochastic fair queuing, some hashing is done to eliminate some of the bookkeeping on flows, but the effect is roughly the same.)

I figured this out in the early 1980s (see RFC 970 [ietf.org]) and by the late 1990s, it was an established technology. We shouldn't be having this problem at this late date.

I wonder how much of the trouble comes from devices that are doing TCP-level processing in the middle of the network. Stateful firewalls and ISP ad-insertion engines [isp-planet.com] can introduce substantial latency.

If you want to test for bad behavior, try running two flows, one that never has more than one packet outstanding, and one that just does a big file-transfer like operation like a download. If the latency of the low-traffic flow goes up to the same as that of the bulk flow, there's a big dumb buffer in the middle. If the packet loss rate of the low-traffic flow goes up, there's a small dumb buffer in the middle.
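A toy sketch of the per-flow queuing described above (Python; the hash-to-16-queues layout is illustrative, not any vendor's implementation):

<ecode>
from collections import deque

class StochasticFairQueue:
    """Hash each flow to its own queue and service queues round-robin,
    so one bulk flow cannot add latency to a low-traffic flow."""
    def __init__(self, n_queues=16):
        self.queues = [deque() for _ in range(n_queues)]
        self.cursor = 0

    def enqueue(self, flow_id, packet):
        self.queues[hash(flow_id) % len(self.queues)].append(packet)

    def dequeue(self):
        for _ in range(len(self.queues)):
            q = self.queues[self.cursor]
            self.cursor = (self.cursor + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
</ecode>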

Re:Doing it wrong, again (1)

sgt scrub (869860) | more than 2 years ago | (#38249336)

big dumb buffer

Nice. Stupid ToS settings, devices that always send the maximum data until they receive 100 ACK window size adjustments telling it to SLFD, and big dumb buffers. I like it!

Re:Doing it wrong, again (1)

John Hasler (414242) | more than 2 years ago | (#38249554)

I wonder how much of the trouble comes from devices that are doing TCP-level processing in the middle of the network. Stateful firewalls and ISP ad-insertion engines [isp-planet.com] can introduce substantial latency.

I doubt that the processing itself is the cause of more than a few milliseconds of latency, but the machines doing it may have been configured with large buffers not because the processing needed them but because those configuring them thought erroneously that bigger buffers are always better.

Re:Is this a problem? (1)

Exceptica (2022320) | more than 2 years ago | (#38248806)

> a smaller buffer introduces less latency than a large one

> a smaller RETARDED buffer introduces less latency than a largeR one

FTFY

More to the point: FUH FUH BAHFAH BAD, WAT DO. SUBMARINE SKNKING, ALARMS A-WAILIN' SO SCARED!!ONEELEVEN

How about abandoning this sorry state of scare reporting and taking a scientific view for once? You know, like, measuring things? Where does the problem happen? Under what circumstances? Test your assumptions, be honest? Oh, I know, screw that, this is IT, we run around in flames and then order something from Cisco. Protip: retards don't run core routers, and queue management and traffic control are these new things now, but don't just tell anyone, only l33t h4x0rs know about them.

tldr: shut up.

somewhat 4channing b/c this rubbish deserves it.

Re:Is this a problem? (3, Informative)

pla (258480) | more than 2 years ago | (#38249248)

You know, like, measuring things? Where does the problem happen? Under what circumstances?

You mean, like figure-2 or even better, figure-5, in TFA? Where the (most common) 2^n buffer sizes stand out so obviously in the data that you'd need to try not to notice the trend?

Of course, this situation doesn't actually require much "real" data to prove. If each 1500 byte packet takes 10ms to transmit, and you have a full 256KB buffer - Which will unavoidably happen any time you try to sustain a transmit faster than your link can handle - You will have 1.7 seconds of latency in a FIFO queue.
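Spelling that arithmetic out (Python):

<ecode>
buffer_bytes = 256 * 1024    # a full 256 KB FIFO buffer
packet_bytes = 1500          # one full-size packet
ms_per_packet = 10           # ~1.2 Mbit/s link

packets_queued = buffer_bytes / packet_bytes     # ~175 packets
print(packets_queued * ms_per_packet / 1000)     # ~1.75 seconds of queuing delay
</ecode>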


tldr

Don't worry, we could tell.

Re:Is this a problem? (2)

Surt (22457) | more than 2 years ago | (#38250378)

Umm ... your 1500 byte packet had best not take more than about 10 us to transmit. 10ms would be quite ridiculous.

Re:Is this a problem? (1)

pla (258480) | more than 2 years ago | (#38250744)

your 1500 byte packet had best not take more than about 10 us to transmit. 10ms would be quite ridiculous.

1500 bytes per 10ms comes out to 1.2Mbit, very close to my actual (admittedly sucky) internet connection. 1500 bytes per 10us would mean you have a 1.2GIGAbit connection, faster than most LANs.

Re:Is this a problem? (1)

Surt (22457) | more than 2 years ago | (#38251118)

We're talking about 40+Gbit/sec internet backbones in this article, not end user connections.

Re:Is this a problem? (4, Informative)

icebike (68054) | more than 2 years ago | (#38247084)

Seems so, but isn't. For TCP traffic, a shallow buffer that drops traffic will result in more goodput than a deep buffer. Which is the point.

Exactly.

Early congestion notification along with ONLY a minimal amount of client-side buffering is really all you need. The deep buffers just make it worse for everyone.

Oh, and just as a car analogy is inappropriate for describing TCP traffic, the airplane analogy is worse.

Re:Is this a problem? (1)

skids (119237) | more than 2 years ago | (#38247304)

Early congestion notification along with ONLY a minimal amount of client-side buffering is really all you need. The deep buffers just make it worse for everyone.

There's something to be said for shaping at intermediate hops, even with ECN (and especially when ECN isn't implemented), but it has to be done in a manner that doesn't add latency out of proportion to the unladen RTT.

Re:Is this a problem? (5, Interesting)

ObsessiveMathsFreak (773371) | more than 2 years ago | (#38247396)

What we need is a ferry analogy.

Packet transmission is like a ferry crossing a river. But the ferry sets off when it is full rather than at set times.

People wait at the shore and generally don't have to wait too long as the ferry is pretty fast and only needs a few people to fill up. For most people, walking onto the ferry involves very little waiting before the ferry actually departs and crosses the river.

Buffer bloat is when big buffers act like ferries with huge capacity. People enter a huge 2000-passenger boat, and are let on by the hundreds with seemingly no delay. But the ferry will not depart until it is reasonably full. So the people who got on first may have to wait for hours before the ferry actually departs and crosses the river.

It is clear that bigger ferries are no substitute for more ferries....or smaller rivers. Or possibly a bridge. In any case, you can get away without introducing cars or airplanes, so my job is done here.

Re:Is this a problem? (1)

LordLimecat (1103839) | more than 2 years ago | (#38247784)

It is clear that bigger ferries are no substitute for more ferries....or smaller rivers. Or possibly a bridge.

Or intergalactic starships, and teleporters?

What were we talking about, again?

Re:Is this a problem? (4, Informative)

skids (119237) | more than 2 years ago | (#38247888)

That analogy doesn't quite do the trick. TCP windowing is a bit more sophisticated than that. You can think of it maybe as a commander sending couriers out to support a mobile squad through hostile territory. If too many of them never make it to the squad, or back, he sends them less frequently so they can sneak through more discreetly. If the troops make it through, then he sends them faster, because the more ammo he can get through the better. But he also has to decide how many men to put on courier duty. If the couriers take too long, the squad has obviously moved further away from the base, and if he waited for the next one to return, he wouldn't be sending enough ammo. If the couriers return quickly, he can make do with fewer couriers.

Big buffers are like a flimsy rope bridge in the couriers' path that takes a long time to cross. Couriers have to wait on one side because only one can cross at a time, but a large group waiting at the side of the cliff is more likely to get attacked. Until they do get attacked, however, the commander starts to think the squad has moved very far away, so he puts more couriers on duty. And since he thinks the squad is far away, he is not expecting them to return for a longer time, so it takes him longer to realize that they are starting to go missing entirely.

One of the best solutions to this problem turns out to be for some of the couriers to randomly go AWOL, and for more of them to go AWOL the bigger the crowd at the rope bridge gets. This basic concept is called Random Early Discard, and there have been a lot of ways invented for deciding who goes AWOL and why. If some of the couriers go AWOL, the commander thinks they are being attacked, so he slows down and also takes some troops off courier duty.
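A bare-bones sketch of the Random Early Discard decision (Python; the thresholds and probability ceiling are made-up, and the queue-length averaging that real implementations use is omitted):

<ecode>
import random

def red_should_drop(avg_queue_len, min_th=5, max_th=15, max_p=0.1):
    """Drop probability rises linearly between two queue-length thresholds."""
    if avg_queue_len < min_th:
        return False                 # queue short: never drop
    if avg_queue_len >= max_th:
        return True                  # queue long: always drop
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p       # in between: drop a random few
</ecode>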

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38249472)

AWOL = Awake With Old Lobster ?

All group transport mechanisms run to a schedule (1)

Colin Smith (2679) | more than 2 years ago | (#38247966)

So none are an appropriate analogy.

HTH

Re:All group transport mechanisms run to a schedule (0)

Anonymous Coward | more than 2 years ago | (#38248330)

Not entirely true - consider the taxibus/share taxi/jitney ( http://en.wikipedia.org/wiki/Taxibus ).

Re:All group transport mechanisms run to a schedule (1)

Colin Smith (2679) | more than 2 years ago | (#38248948)

They all have a schedule. Even if it's no more than "today".

Re:Is this a problem? (1)

Troke (1612099) | more than 2 years ago | (#38248946)

Maybe the passengers need swimmies? Or life jackets, so they can be tossed overboard! Either way, +1 for an explanation that is clear, concise, informative, and funny.

Re:Is this a problem? (2)

T Murphy (1054674) | more than 2 years ago | (#38250020)

In any case, you can get away without introducing cars

The only ferries I've ever been on were for cars, so you kinda did. Maybe you should've used a paraglider analogy, I've never seen a car use a paraglider.

Re:Is this a problem? (2)

sgt scrub (869860) | more than 2 years ago | (#38249408)

Early congestion notification along with ONLY a minimal amount of client-side buffering is really all you need.

Unfortunately, early notification doesn't work with a ton of wireless devices. Their drivers have minimal ability to be controlled, and they always send data at the speed of their negotiation; e.g., if they connect at 802.11g they always send data at that speed, and always send ACKs with window-size adjustments to speed traffic up to that speed, until they receive multiple window-size adjustments telling them to STFD. Wireless devices are the dumbest things ever to be unleashed on the net, and they are multiplying.

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38249996)

Which is why you need a proxy between WiFi and the real link. Or at least shaping, if you mostly communicate with fast peers.

Re:Is this a problem? (-1)

Anonymous Coward | more than 2 years ago | (#38247172)

I don't know about TCP traffic, but I get better dick throughput with a deep pussy.

Re:Is this a problem? (1)

Idbar (1034346) | more than 2 years ago | (#38247418)

I'm not sure about goodput, but for sure shallow buffers result in better latency.

ECN helps to increase goodput, and AQM can help to keep throughput high. The main concern of some (at least it's my research topic) is how to implement AQM to spread traffic spikes such that link utilization increases while buffer occupancy decreases.

Re:Is this a problem? (2)

Surt (22457) | more than 2 years ago | (#38250348)

And the right balance between buffer size, drop percentage, and throughput should be measurable. But I bet those lazy bastards at cisco have never thought to measure performance, which is why no one uses their equipment.

Re:Is this a problem? (0)

Anonymous Coward | more than 2 years ago | (#38249188)

Example for bufferbloat:
1. Take a standard DSL setup with a WLAN router.
2. Place your notebook in a location where the speed of the WLAN link is lower than the speed of the DSL line. For a fast DSL line the WLAN speed should be much lower.
3. Watch youtube and try to access the web interface of your WLAN router.

In most cases you won't be able to access the router. Since the router fills up its huge buffer before dropping packets, the TCP speed (of the video streaming) can't adapt to the speed of the WLAN link. Any packets of additional connections will be at the end of the buffer and will likely be dropped.

Solution:
Buffer sizes should be dynamic, based on the link speed. In the case of WLAN, they should be client-specific (each client may have a different link speed).
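A sketch of that sizing rule (Python; the 100 ms delay target is an assumed figure):

<ecode>
def buffer_bytes_for(link_rate_bps, target_delay_s=0.100):
    """Size the buffer so that even a full queue holds no more than
    target_delay_s worth of data at this client's link rate."""
    return int(link_rate_bps * target_delay_s / 8)

print(buffer_bytes_for(54_000_000))   # 802.11g peak rate: 675000 bytes
print(buffer_bytes_for(1_000_000))    # marginal client:    12500 bytes
</ecode>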

Cringely again... (4, Informative)

beetle496 (677137) | more than 2 years ago | (#38246890)

Cringely has been writing about this all year. He cites Jim Gettys too. See: http://www.cringely.com/tag/bufferbloat/ [cringely.com]

Re:Cringely again... (4, Interesting)

hairyfeet (841228) | more than 2 years ago | (#38247044)

Uhhh... I just read the link but am a little confused. Even Cringely points out at the start of his article that TCP was originally written for a VASTLY different and weaker network than we have now, so instead of trying to make the networks go back to a mid-1980s design, wouldn't it be smarter just to update TCP to take advantage of new tech advances?

Now I'm not really a heavy network guy, so excuse me if I put it in more of my lingo, but let's compare it to something I've got more first-hand experience with: hard drives. If you don't write the controller code to take advantage of the large cache, frankly the cache becomes worthless, but if you DO write the controller code to take the size of the buffer into account, it makes a BIG difference; so much so that I've seen 5400 RPM drives whip 7200 RPM drives simply by having better cache management.

So wouldn't the right way to go be to update TCP for the times? I mean, we didn't slow computers down so we could keep PATA or PCI; we came up with new tech like SATA and PCIe to take advantage of the faster throughput. Shouldn't we do the same here as well?

Re:Cringely again... (5, Insightful)

mellon (7048) | more than 2 years ago | (#38247160)

Buffer and cache are not the same thing. Packets are written to a buffer once and read from it once. Caches are useless if, on average, blocks aren't read from them more than they are written to them. So treating them as analogous is highly misleading.

The deal with throughput is that you can only win by storing packets if there is going to be room to send them without delay. If you buffer every packet that's sent, it does get delivered, but by the time it gets to its destination, it's too late. You can adjust the TCP algorithms to behave somewhat less badly in this situation, but what you can't do is get genuine flow control with big buffers, because the endpoints have no way to determine the throughput of the network.

The only way the endpoints can determine the throughput of the network is if packets get dropped when there's congestion. When packets don't get dropped, what you see is that whenever there is more traffic to send over the link than the link can hold, it just winds up in a buffer. Latency rises. Eventually all the senders give up. Then the buffers start to drain, and packets get delivered. Then the acks start coming back. Now the endpoints think they are on a high latency link, so they crank back up again and fill the buffers again.

So what you see is a network that works great as long as the total load presented to the network is less than the aggregate capacity of the network. As soon as the demand for bandwidth exceeds the supply, every single stream starts to stall. If you've stayed at a hotel recently, you've seen this: a dozen people try to watch video streams over a fairly wimpy connection, and then you can't do _anything_ over the connection, because the buffer fills up.

If you didn't have that giant buffer, all the endpoints would be able to tell that the link was congested, and would slow down. If the total available bandwidth wasn't enough, the video streams would basically fail, but you could still get mail and surf the web. But with bufferbloat, not only can't you watch video streams, you also can't surf the web or get email or ssh to your server.

You can see this by pinging a server somewhere out on the internet. When the link isn't congested, you'll see reasonable round trip times, typically 100ms. Then when it gets congested, you'll see packets dropped, and you'll see the RTT rise to as much as a minute. Then as all the senders notice that their packets aren't being delivered, they back off and suddenly the RTT starts to drop again, and you start to hope the network's been fixed. But it's fool's gold: as soon as the senders notice, they bomb the buffer again, and the RTT goes back up. Rinse and repeat until you give up.

You probably don't see this very often on your home link, because you probably aren't saturating it. But it happens a lot at Wifi hotspots in particular, and also sometimes on 3G networks. It's quite disheartening, particularly when you're paying for the connection. You also see it on big ISPs like Comcast when you try to reach content providers that aren't willing to pay the ransom to Comcast to get on their uncongested link.
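One crude way to watch this from a script is to time TCP connection setup, which takes roughly one round trip (a Python sketch; example.com is a placeholder host):

<ecode>
import socket, time

def rtt_ms(host="example.com", port=80):
    t0 = time.monotonic()
    socket.create_connection((host, port), timeout=5).close()
    return (time.monotonic() - t0) * 1000

# Run this while saturating your uplink: on a bloated link the
# reported times climb from tens of milliseconds into the seconds.
for _ in range(10):
    print(round(rtt_ms(), 1), "ms")
    time.sleep(1)
</ecode>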

Re:Cringely again... (5, Interesting)

m.dillon (147925) | more than 2 years ago | (#38247416)

Well, you definitely CAN tell when one or more buffers along the path begin to fill up, because latency increases. Packet loss is not necessary and, in fact, packet loss just makes the problem worse, since many TCP connections implement SACK now and can keep the bandwidth saturated even in the face of packet loss.

The ideal behavior is probably not to start dropping packets immediately... eventually, sure, but definitely not immediately. Ideally what you want to do is to attempt to shift the problem closer to the edges of the network where it is easier to fairly apportion bandwidth between customers.

Send-side bandwidth limiting is very easy to implement since TCP already has a facility to collect latency information in the returned acks. I wrote a little beastie to do that in FreeBSD many years ago, and I turn it on in DragonFly releases by default.

The purpose of the feature is not to completely remove packet buffering from the network, because doing so would put the sending server at a severe disadvantage versus other servers that do not implement similar algorithms (which is most of them).

The purpose is to unload the buffers enough such that the algorithms in the edge routers aren't overloaded by the data and can do a better job apportioning bandwidth between streams.

Our little network runs this coupled with fair queueing in both directions... that is, we not only control the outgoing bandwidth, we also pipe all the incoming bandwidth through a well-connected colo and control that too, before it runs over the terminal broadband links. This allows us to run FAIRQ in both directions in addition to reserving bandwidth for TCP ACKs and breaking down other services. FAIRQ always works much better when links are only modestly overloaded rather than completely overloaded. Frankly we don't have much of a choice; we HAVE to do this because our last-leg broadband links are 100% saturated in both directions 24x7. Anything short of that and even a single video stream screws up the latency for other connections beyond hope.

This sort of solution works great near the edges.

For the center of the network, frankly, I think about the best that can be done is modest buffering and RED, and then trying to reduce the load on the buffers in the center with algorithms run on the edges (which can sense end-to-end latency). The modest buffering is needed for the edge algorithms to be able to operate without bits of the network having to resort to dropping packets. In other words, you want the steady-state load for the network to not have to drop packets. Dropping packets should be reserved for the case where the load changes too quickly for the nominal algorithms to react. That's my opinion anyhow.
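A minimal sketch of the send-side idea described above -- back off when measured ack latency rises above the path's baseline (Python; the constants and thresholds are invented for illustration, not the actual FreeBSD/DragonFly algorithm):

<ecode>
def adjust_rate(rate_bps, rtt_ms, base_rtt_ms):
    """Latency-based pacing: shrink the send rate as queuing delay grows."""
    queuing_delay = rtt_ms - base_rtt_ms
    if queuing_delay > 20:       # buffers along the path are filling up
        return rate_bps * 0.9    # back off before drops become necessary
    if queuing_delay < 5:        # path looks idle
        return rate_bps * 1.05   # probe gently for more bandwidth
    return rate_bps
</ecode>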

-Matt

A lot more to interoperate with (5, Insightful)

tepples (727027) | more than 2 years ago | (#38247206)

A replacement for PATA or PCI has to interoperate only with other components in the same chassis, or possibly on the same desk in the case of eSATA and Thunderbolt. A replacement for TCP would have to interoperate with every other computer in the world. Imagine what a flag day [catb.org] that would be.

endpoints can't fix individual hop buffering (2)

Chirs (87576) | more than 2 years ago | (#38247256)

Each hop has its own buffer. Endpoints can fix their own buffers, but they can't do anything about buffering in the next hop. If something changes in the network to reduce the available bandwidth, the ideal behaviour is for packets to start getting dropped right away so that the originator gets notified of the drop and can slow itself down to compensate.

If some device in the core network just buffers up seconds' worth of packets instead of dropping them, it destroys the ability of the sender to adapt to the changing conditions.

Re:Cringely again... (4, Interesting)

WaffleMonster (969671) | more than 2 years ago | (#38248218)

So wouldn't the right way to go be to update TCP for the times? i mean we didn't slow computers down so we could keep PATA or PCI, we came up with new tech like SATA and PCIe to take advantage of the faster throughput. Shouldn't we do the same here as well?

We have SCTP which was intended to replace TCP except nobody seems to care.

At the end of the day the concept of TCP is not rocket science - there is a limit, and diminishing returns, to what more can be done toward making TCP a perfect reflection of the concept of TCP.

Congestion management and ack/windowing have certainly evolved into high arts, but fundamentally all TCP does is implement a loss-free ordered data stream on top of an unordered lossy packet-switched network.

This means your core limitation is embedded in the definition of TCP itself...the problem of head-of-line blocking. By using TCP you are by definition limiting yourself to the constraints of TCP.

Realtime voice/video and multi-player games use their own protocols because they are not willing to accept the constraints of TCP. It is not the implementation of TCP that is holding them back. It is the *concept* of TCP.

In my opinion we need more IP protocols to better handle varied use cases more than we need a new TCP.

Re:Cringely again... (2)

jgrahn (181062) | more than 2 years ago | (#38248924)

We have SCTP which was intended to replace TCP except nobody seems to care.

Some telecom standards are built on SCTP (we use it at work). Not sure if it's all that great though -- a lot of its problems are probably hidden by the fact that few care about it, and that it's used in isolated, high-quality networks.

At the end of the day the concept of TCP is not rocket science - there is a limit, and diminishing returns, to what more can be done toward making TCP a perfect reflection of the concept of TCP. [---] In my opinion we need more IP protocols to better handle varied use cases more than we need a new TCP.

And we need application-level protocols on top of TCP which suck less, and implementations which suck less. You need some elemental understanding of TCP to use it efficiently, and from what I've seen, most programmers don't have it.

PS. As I recall it, last time Gettys was in the media with his bufferbloat theory, several high-profile TCP experts (e.g. John Nagle) criticized him. We may be reacting to nonsense here.

Re:Cringely again... (1)

sgt scrub (869860) | more than 2 years ago | (#38249516)

I thought IBM designed SCTP to replace UDP. Regardless, I think you're right. Using SCTP would benefit the web.

Re:Cringely again... (4, Interesting)

evilviper (135110) | more than 2 years ago | (#38248630)

Even Cringley points out at the first of his article that originally TCP was written for a VASTLY different and weaker network than we have now, so instead of trying to make the networks go back to a mid 1980s design, wouldn't it be smarter just to update TCP to take advantage of new tech advances?

There's nothing about a "weaker" network that necessitates a protocol redesign. TCP has had problems with congestion handling from day one, which have necessitated a million and one hacks and workarounds, because it stupidly conflates packet loss with congestion... Some links will have packet loss without any congestion, and others (like those with huge buffers) will have congestion without (immediate) packet loss. It was a bad design decision.

What's worse is that IP was designed correctly to begin with. The original design has ICMP control messages (e.g. source quench) to signal congestion, much like many other networking protocols. The real problem was that the specifics were vague, and there was no exact standard on how much to slow down, how it affects higher-level protocols, etc., so it became a prisoner's dilemma, and highly unfair, and was deprecated.

Of course, this problem could occur with TCP's congestion control just as easily if any particular implementations reduced the rate of exponential backoff, so there's nothing fundamentally wrong with the original congestion control design, just the lack of consistent implementation.

Controlling congestion by dropping packets is like controlling freeway traffic by randomly pushing cars off the road with a bulldozer.

Re:Cringely again... (1)

Anonymous Coward | more than 2 years ago | (#38248988)

wouldn't it be smarter just to update TCP to take advantage of new tech advances?

But then you'll have a TCPv6 situation where no-one can communicate between the 2 different internets.

Re:Cringely again... (1)

Idbar (1034346) | more than 2 years ago | (#38247368)

I'll give you a few more names: Sally Floyd, Van Jacobson, Leonard Kleinrock and, of course, Raj Jain have been writing about this since 1983.

Push or Pull (5, Funny)

JoeMerchant (803320) | more than 2 years ago | (#38246910)

To configure your active queue management, the first thing I need to know is: do you have a push system, or a pull system?

Neither, sir, we have a suck system.

Alarmism? (-1)

Anonymous Coward | more than 2 years ago | (#38246958)

In danger of collapse? Seriously? Okay then.

Re:Alarmism? (1)

quasius (1075773) | more than 2 years ago | (#38246984)

As this is not my area of expertise, I have no idea if this is valid or not; but something having dire consequences does not allow you to simply dismiss it as "alarmism."

Re:Alarmism? (5, Informative)

Anonymous Coward | more than 2 years ago | (#38247100)

Except it is alarmist. The current situation isn't optimal, but suboptimal and critical are two different things. The crux of the problem is basically "Long delays from bufferbloat are frequently attributed incorrectly to network congestion, and this misinterpretation of the problem leads to the wrong solutions being proposed." That means administrators *might* mistake large-buffer slowdowns for other causes of network congestion. Ideally it should be dealt with better, but it's hardly a collapse of the network.

A network buffer acts just as that: a buffer to smooth out traffic spikes. A buffer does this at the cost of latency. If a buffer is large AND consistently full, that means the network link is always fully utilized, in which case a large buffer isn't needed and it merely adds large latency on top of waiting for the link to clear, for no benefit (the extra latency *may* confuse administrators, which is basically the "danger"). On the other hand, if the link is underutilized the majority of the time, then a large buffer is beneficial for dealing with spike traffic. The majority of networks are the latter and hence designed as such. Two solutions: get faster links, or deal with it more intelligently.

Re:Alarmism? (2)

CyprusBlue113 (1294000) | more than 2 years ago | (#38247424)

But that's the point: the buffers smooth the link, but not the streams going across it. With enough buffer bloat, the buffers actually make the link retransmit the same data multiple times, due to the design of TCP congestion avoidance.

Re:Alarmism? (1)

OrangeTide (124937) | more than 2 years ago | (#38247820)

fine then, ignore the warnings, but don't come crying to me when you can't download pornography one day.

SPAM (-1, Flamebait)

fluffy99 (870997) | more than 2 years ago | (#38247048)

This is just a spam article pushing the AQM software.

Re:SPAM (1, Funny)

larry bagina (561269) | more than 2 years ago | (#38247060)

not only that, but Gettys' bufferbloat theory has been featured [slashdot.org] on slashdot before. Maybe the dupe queue was full?

Re:SPAM (4)

jibjibjib (889679) | more than 2 years ago | (#38247086)

Maybe posting a new article on an issue that was also an issue a year ago is not a "dupe", but an acceptable and possibly even normal thing for a news site to do?

Re:SPAM (4, Insightful)

phayes (202222) | more than 2 years ago | (#38247240)

And just maybe some of us are interested in how research has progressed since the last article...

really? calling Jim Gettys a spammer? (2)

Chirs (87576) | more than 2 years ago | (#38247226)

Feel free to try it out yourself...I have and the problem is real.

thank god. i am so sick of the internet (-1)

Anonymous Coward | more than 2 years ago | (#38247124)

100MB videos of cats shitting

people devoting hundreds of hours to narcissistic, pointless blogs

lunatics being encouraged in their derangements by 'communities'

the pedophile's favorite invention of all time

the self-organization of the world's greatest government surveillance tool (facebook)

the death of books and of intellectualism itself, replaced by screaming banshees, and the conflicted and bought off

the trivialization of everything meaningful, interesting, revolutionary, or important

and on and on and on and on and on

motherfuck the goddamn internet. may it rot in hell.

Re:thank god. i am so sick of the internet (-1, Redundant)

bmo (77928) | more than 2 years ago | (#38247204)

>rant about the evils of "teh interbutt"

You're here...

--
BMO

Re:thank god. i am so sick of the internet (0)

Anonymous Coward | more than 2 years ago | (#38249050)

the pedophile's favorite invention of all time

Surely you confuse internet with organized religion & enforced celibacy.

I've definitely noticed it on my DSL (4, Interesting)

Just Brew It! (636086) | more than 2 years ago | (#38247498)

As soon as I start trying to shove (or suck) more bits through the pipe than it can handle, round trip latency to "nearby" points of the Internet increases from ~25 ms to ~1 second. When I need to transfer a lot of data, I use rsync or wget if at all possible, and throttle the transfer to just below the rate the connection can handle; this results in ping times staying sane while only slowing down the transfer slightly. We shouldn't need to resort to doing stuff like this to make the network function properly!
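The same self-throttling can be sketched in a few lines (Python; the chunk size and rate are placeholders, and the "just below link rate" target is the comment's own idea):

<ecode>
import time

def throttled_copy(src, dst, rate_bytes_per_s, chunk=16384):
    """Copy in chunks, sleeping so the average rate stays just below
    what the link can carry -- keeping its buffer nearly empty."""
    start, sent = time.monotonic(), 0
    while True:
        data = src.read(chunk)
        if not data:
            break
        dst.write(data)
        sent += len(data)
        # Sleep until we are back on the target-rate schedule.
        ahead = sent / rate_bytes_per_s - (time.monotonic() - start)
        if ahead > 0:
            time.sleep(ahead)
</ecode>

(rsync's --bwlimit and wget's --limit-rate options do the same job from the command line.)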

Re:I've definitely noticed it on my DSL (1)

Rising Ape (1620461) | more than 2 years ago | (#38249116)

I had similar problems a few years back, but either TCP implementations are better or the buffer sizes in DSLAMs are better tuned these days, because it doesn't seem to happen any more. Yes, latency goes up if I saturate the connection with a big download, but not so much that all other connections grind to a halt.

Not like a few years back, when a big download would cause latency to go up to ~2 seconds and you could kiss goodbye to doing anything else with the connection at the same time, at least at tolerable speed. Now it's about 100 ms, and everything else is a bit slower but still perfectly usable.

Re:I've definitely noticed it on my DSL (0)

Anonymous Coward | more than 2 years ago | (#38250372)

I used to do similar things with torrents. Though the torrent programs seem to do a much better job these days of doing it themselves...

article's analogy is like (4, Funny)

OrangeTide (124937) | more than 2 years ago | (#38247796)

This analogy is like a bathtub, full of spiders, and on fire. It sounds dangerous, but it's self limiting.

Re:article's analogy is like (0)

Anonymous Coward | more than 2 years ago | (#38249394)

Huzzah! How many points do you receive?

Buffer bloat animation (5, Informative)

Twinbee (767046) | more than 2 years ago | (#38247844)

I thought this animation by Richard Scheffenegger was a good way to show what's happening: http://www.skytopia.com/project/articles/lag/nam00000.avi [skytopia.com] Here's a description of the video:

The bad Bufferbloat setup is on the left (yellow dots), and the 'good' setup (i.e. how things used to be configured about 10-20 years ago when RAM was more expensive!) is on the right (cyan/blue dots).

Both sides start off okay, but notice how the left side 'queues' (tall yellow dot columns) keep on growing over time, while the right side blue columns stop short because of the small buffer size. As they stop short, some data 'packets' must be dropped, and this gets reported back to the upload site that it's shoving data to the user too fast. As a result, the upload site temporarily slows the sending of data, and thus the system self-corrects.

Meanwhile, on the left side, these packets of data never get dropped, so the giant bloated yellow buffers get filled more and more, but the computer at the upload site doesn't realise the carnage of these giant queues further down the line, and instead thinks "All is okay, let's keep sending data fast!".

Finally, when a smaller piece of data needs to be sent to the user (see 2:30+ signified by red dots on the left and dark blue dots on the right), the left side shows the red dots (which could be say, a small email) wading through giant queues to reach their destination, really slowly. Furthermore these tiny bits of data often need special 'emergency' treatment as they hold up other larger data associated with it. On the good right side, the dark blue dots have no such giant queues.

Lag-o-Meter-of-Internet-Doom (4, Interesting)

WaffleMonster (969671) | more than 2 years ago | (#38248008)

If you look at buffers allocated to fast multi-gigabit interfaces at the core of the network they are simply not large enough compared to forwarding rates involved to be able to induce the kinds of delays needed to cause Internet wide problems.

You can argue they may not be ideal for real time voice, game or video communication when these links are oversubscribed but no doomsday is possible.

Today buffer bloat effects are mostly observed at the edge even though they need not always be.

Failure of a congestion control algorithm to control link saturation does not translate into congestive collapse of the larger network; it just results in *your* network connection turning to shit. When Netalyzr runs, it intentionally saturates your link for the duration of the test. In the real world, only a few portions of the edge are ever saturated to the extent that congestion control failure becomes an issue pushing more packets through core routers. The number of edge machines in this category would need to be significant to cause a rerun of previous issues.

That condition cannot be met, due to self-feedbacks. If everyone maxed out their pipes at once, the core would saturate, self-limiting edge saturation (available edge bandwidth is grossly over-provisioned relative to core bandwidth), which would ensure the congestion control algorithms function properly.

I'm not arguing there is not a problem or more can't be done. I'm just arguing the doomsday congestive collapse scenario is bullshit.

solution (2)

m4kesense (2523820) | more than 2 years ago | (#38249238)

Bloated or big buffers cause more latency than necessary only if they are designed with a single queue for all flows. If each flow gets its own queue in the buffer, and all queues are read and sent out round-robin, a ping packet does not have to wait until the big file transfer that started earlier, and has completely filled the buffer, is through. The ping packet practically overtakes the large amount of queued bytes of the big file transfer instead of lining up behind it in a single queue.

Easy solution? (1)

Anonymous Coward | more than 2 years ago | (#38249364)

This government has literally spent billions through DHS on internet safety. I may be simple, but that same overreaching government could give $500M to four private non-profits to actively deal with this and other "infrastructure" issues like IPv6, more rural broadband, urban wireless, and so on.

It has been done in the past through the Red Cross for disaster relief, in floods, earthquakes, epidemics and the like. Society-impact issues.

NGOs have far more efficient resource allocation than the government, and they also receive assistance from industry and citizens on a tax-favorable basis.

JJ

Kind of pointless when you think about it. (0)

sgt scrub (869860) | more than 2 years ago | (#38249558)

IMHO, if ISPs would build out their networks instead of relying on buffers, there would be no argument here. My attention would be on fixing dumb wireless devices and drivers that ignore every attempt to make them play nice.

parallel cable? (0)

Anonymous Coward | more than 2 years ago | (#38250058)

We need more than one FAT cable to connect two points. The ONE FAT cable is a latency bottleneck.
If you connect two points with many smaller cables... latency problems go away. But it's $$MORE EXPENSIVE$$.
-
Many on-ramps to the one-lane highway (which goes very, very fast): you have to wait on the one ramp until it's your turn to go on the very fast highway.
If you had many more "smaller" fast highways, you wouldn't have to wait on the ramp so long. But it's $MORE EXPENSIVE$.

Classic problem (1)

Charliemopps (1157495) | more than 2 years ago | (#38250076)

This is a classic problem of economics. Publicly owned resources that are not owned by any one individual or company are very difficult for market factors to work on. A good example is fishing. The fish in a bay are not owned by any particular person, so their welfare is not in the economic interest of any particular person. It may be in a commercial fisherman's long-term interest to conserve the fish population and not overfish, but he's not the only fisherman. If he cuts back on his catch, other fishermen can simply catch the fish he left behind; the fishery is depleted just as if he'd exploited it, the only difference being the cut in profits he took. The other fishermen are thinking the same thing. They may all collectively want to conserve the fish, but it's impossible for them to trust each other and agree to cut back on fishing. Sadly, if the fishery were owned by a single person, even a terrible, fish-hating monster, he would never allow the damage to the population that occurs when it's a public resource. A healthy stock of fish is his income and retirement; it's worth millions to him. But we cannot allow such a thing to be "owned", and so we're stuck.

The same applies here. The "internet" is not owned by any particular person; as a result you have dozens of ISPs all fighting to provide the same service at the expense of all the others. They're cutting their own throats to stay in business and have no way of trusting one another. Government regulation is woefully inadequate and will likely never catch up to technology. At the same time, the thought of a government-owned system for transmitting information/data sounds horrifying given the recent actions by our elected officials.

This is unfortunately a situation that is very much like the classical Gordian Knot, and sadly I think the problem will likely be "solved" by a tyrant just like the original. The constitutional and privacy problems the solution causes will probably dwarf the congestion problems we started with.

Article in LinuxPro Magazine June 2011 (1)

seifried (12921) | more than 2 years ago | (#38250082)

Disclaimer I'm the author. I covered this in my June 2011 column: http://www.linuxpromagazine.com/Issues/2011/127/Security-Lessons-Bufferbloat/%28kategorie%29/0 [linuxpromagazine.com] direct link to the PDF http://www.linux-magazine.com/w3/issue/127/058-059_kurt.pdf [linux-magazine.com]. In a nutshell: my link latency at home is usually ~50ms to seifried.org, but with one single outbound file transfer to saturate my uplink ping times go to over 1000ms (1 second) reliably (which completely breaks VOIP/games/etc.).

How can I improve my own connection? (3, Interesting)

Edgester (105351) | more than 2 years ago | (#38250154)

What can I do with my own laptop and wifi router to make my own situation better?

Thought (0)

bongey (974911) | more than 2 years ago | (#38250264)

Thought it said buffalo bloat for a second. Are there really that many fat buffaloes on the internet?

net neutrality wont help (1)

Anonymous Coward | more than 2 years ago | (#38250824)

The proposed solution of active queue management is exactly the sort of discrimination the net neutrality folks want to forbid, no?
