Bufferbloat — the Submarine That's Sinking the Net

timothy posted more than 3 years ago | from the snagged-on-the-reef-of-ram dept.

The Internet

gottabeme writes "Jim Gettys, one of the original X Window System developers and editor of the HTTP/1.1 spec, has posted a series of articles on his blog detailing his research on the relatively unknown problem of bufferbloat. Bufferbloat is affecting the entire Internet, slowly worsening as RAM prices drop and buffers enlarge, and is causing latency and jitter to spike, especially for home broadband users. Unchecked, this problem may continue to deteriorate the usability of interactive applications like VOIP and gaming, and being so widespread, will take years of engineering and education efforts to resolve. Being like 'frogs in heating water,' few people are even aware of the problem. Can bufferbloat be fixed before the Internet and 3G networks become nearly unusable for interactive apps?"


525 comments

Correction: JIM GETTYS (4, Informative)

Anonymous Coward | more than 3 years ago | (#34789798)

http://en.wikipedia.org/wiki/X_Window_System

Really? (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34789802)

Latency is bad? Bigger buffers = more latency?

Definition, please (5, Insightful)

Megane (129182) | more than 3 years ago | (#34789810)

I'm so glad the term has been defined so that I know what the hell we're talking about here. Oh wait, no it hasn't.

Okay, then I'll RTFA. Oh wait, two screens worth of text later and it still hasn't.

I'd like to change the topic now to the submarine that's sinking the English language: jargonbloat.

First link in the first article (5, Insightful)

mangu (126918) | more than 3 years ago | (#34789854)

Just start RTFAing: "In my last post I outlined the general bufferbloat problem."

Follow the link:

"Each of these initial experiments were been designed to clearly demonstrate a now very common problem: excessive buffering in a network path. I call this bufferbloat

Buffering of what? (1)

Anonymous Coward | more than 3 years ago | (#34789882)

What exactly are they buffering? Whole pages? Whole sites? Just packets? What?

And back in the late 90s, buffering was supposed to speed things up; a few companies that did this IPO'd or were purchased by big companies for billions of dollars.

Re:Buffering of what? (2)

nedlohs (1335013) | more than 3 years ago | (#34790244)

Buffering of packets; "network path" can't be referring to anything else.

Re:Buffering of what? (2)

SuricouRaven (1897204) | more than 3 years ago | (#34790288)

Packets.

Re:First link in the first article (1)

elrous0 (869638) | more than 3 years ago | (#34790360)

I hate it when someone feels the need to come up with a piece of meaningless jargon when "excessive packet buffering" would have been much more descriptive and required less explanation.

Re:First link in the first article (1)

Yvanhoe (564877) | more than 3 years ago | (#34790378)

Actually, "buffer bloat" is a good enough definition of bufferbloat. Demands for a definition are a bit pompous...

Re:First link in the first article (1)

AvitarX (172628) | more than 3 years ago | (#34790440)

The word bloat implies slowly growing.

That is a piece of description that "excessive packet buffering" does not include. I also think packet buffering is pretty clear from the context.

Re:Definition, please (5, Informative)

Megane (129182) | more than 3 years ago | (#34789856)

For what it's worth, TFS seems to be linking into the middle of the story, so maybe that's part of my problem. Still, it's really annoying to be told about a new problem, with a new jargon word, that's going to make the sky fall any day now, without knowing just what the hell it is.

The previous article seems to explain things a little better: http://gettys.wordpress.com/2010/12/03/introducing-the-criminal-mastermind-bufferbloat/ [wordpress.com]

Re:Definition, please (-1)

Anonymous Coward | more than 3 years ago | (#34790092)

Mod parent up. Timothy is an idiot who can't be bothered to link to the beginning of the story.

Re:Definition, please (5, Insightful)

Megane (129182) | more than 3 years ago | (#34790176)

Actually, I blame the submitter. It is well known that Slashdot "editors" don't edit. They merely choose the least worthless articles out of the slush pile and push the button, sometimes using copy and paste to combine two similar submissions. Even my above link was still to the middle of the story, but it explains the core concept best.

I also place a teensy bit of blame on the blogger, for not linking the first use of the word to the previous article. But he couldn't expect to get linked into the middle of the series.

Re:Definition, please (1)

c0lo (1497653) | more than 3 years ago | (#34790460)

Actually, I blame the submitter. It is well known that Slashdot "editors" don't edit.

This is how some commenters get their chance to be rated informative or insightful - part of the "rules of the game", I guess.
The more comments, the better the /. stats to show to advertisers. Given that it's free, and given that its purpose is to waste geek time anyway, I can't complain.

Besides, I really like the title of the linked blog entry, "Whose house is of glasse, must not throw stones at another." I didn't know the expression (so, not THAT bad a place to link to, at least not for me).

Re:Definition, please (1, Insightful)

shiftless (410350) | more than 3 years ago | (#34789914)

Yeah, I see this a lot with nerds. It's pretty fucking annoying when someone launches into a long-winded dissertation on some obscure subject without even bothering to put an introductory paragraph at the top giving even the briefest overview of what the fuck they're even talking about. I shouldn't have to read fifteen paragraphs just to get a basic bird's-eye view of what the problem is, a framework which I can then proceed to fill in by reading the details.

Re:Definition, please (4, Insightful)

drooling-dog (189103) | more than 3 years ago | (#34789948)

They know something you don't, they want you to know it, and they want to keep it that way for as long as possible...

Re:Definition, please (3, Informative)

mcgrew (92797) | more than 3 years ago | (#34790102)

There are two reasons I can think of why people write like that: one is that they're poor communicators, the other is that they want to appear intelligent.

It seems there are two kinds of stories posted here lately -- science and tech stories written for the non-nerd by non-nerds, like one last week that explained what a CPU was (!), and stories like this that coin new jargon and don't explain it, or use an acronym that most folks here will misunderstand, like using BT to refer to British Telecom when most of us think of BitTorrent when we see BT.

Maybe I'm just getting old.

Re:Definition, please (1)

germansausage (682057) | more than 3 years ago | (#34790274)

In Canada references to the Bank of Canada in news stories have lately been abbreviated to BOC. When I read "BOC to raise interest rates" I always wonder why the Blue Oyster Cult is doing that.

Re:Definition, please (1)

nedlohs (1335013) | more than 3 years ago | (#34790200)

There's a link to the definition in the first four words of the article. Do you want every piece of writing to repeat the definitions of every term it uses?

Re:Definition, please (2)

Nursie (632944) | more than 3 years ago | (#34790028)

He's written a whole series on this over the course of months. If he doesn't re-explain it a long way into the series, then blame the Slashdot summary, not the guy doing the research/testing and telling the world about it.

Re:Definition, please (1)

sunking2 (521698) | more than 3 years ago | (#34790226)

Not having read about or ever heard this term before, my take on the word is this: because people have so much memory, and streamed apps can have such huge buffers and are so popular, you constantly have a few people pegging their network connection to fill their buffer initially even though they don't have to, needlessly congesting the network and possibly impairing those who need low-latency interactive connections.

But that's just a guess.

Re:Definition, please (1)

Anne_Nonymous (313852) | more than 3 years ago | (#34790232)

A picture is worth a thousand words: bloatse.jpg

Re:Definition, please (5, Insightful)

jg (16880) | more than 3 years ago | (#34790318)

You asked, I just provided:

http://gettys.wordpress.com/what-is-bufferbloat-anyway/

Good question.

Bufferbloat is the cause of much of the poor performance and human pain using today’s internet. It can be the cause of a form of congestion collapse of networks, though with slightly different symptoms than that of the 1986 NSFnet collapse. There have been arguments over the best terminology for the phenomena. Since that discussion reached no consensus on terminology, I invented a term that might best convey the sense of the problem. For the English language purists out there, formally, you are correct that “buffer bloat” or “buffer-bloat” would be more appropriate.

I’ll take a stab at a formal definition:

Bufferbloat is the existence of excessively large (bloated) buffers in systems, particularly network communication systems.

Systems suffering from bufferbloat will have bad latency under load under some or all circumstances, depending on if and where the bottleneck in the communication’s path exists. Bufferbloat encourages congestion of networks; bufferbloat destroys congestion avoidance in transport protocols such as HTTP, TCP, Bittorrent, etc. Without active queue management, these bloated buffers will fill, and stay full.

More subtly, poor latency, besides being painful to users, can cause complete failure of applications and/or networks, and extremely aggravated people suffering with them.

Bufferbloat is seldom detected during the design and implementation of systems: engineers, methodical as they are, seldom if ever test latency under load systematically, and today's memory is so cheap that buffers are often added without thought for the consequences, hidden in many different parts of network systems.

You see manifestations of bufferbloat today in your operating systems, your home network, your broadband connections, possibly your ISP’s and corporate networks, at busy conference wireless networks, and on 3G networks.

Bufferbloat is a mistake we’ve all made together.

We’re all Bozos on This Bus.

Re:Definition, please (3, Funny)

Nefarious Wheel (628136) | more than 3 years ago | (#34790398)

The Evil Buffer Stuffer Strikes Again!

QoS (1)

Icyfire0573 (719207) | more than 3 years ago | (#34789812)

Isn't this something that people who set up QoS on their home routers know about? Isn't this what the program Wondershaper fixes when run on your home Linux router? I think we have known about this for years.

No. (0)

Anonymous Coward | more than 3 years ago | (#34790088)

RTFA. I've been following his blog for the last few weeks as he's written about this. The problem is much more serious than most realize. In fact, I'd say most people completely misunderstand the issue.

Re:QoS (4, Informative)

Megane (129182) | more than 3 years ago | (#34790220)

After reading TFSeries, the problem is excessive buffering (as in 1-10 or more seconds worth of data) screwing up TCP/IP's automatic bandwidth detection. QoS helps a little bit by getting the important packets (especially ACKs) through, but high-bandwidth TCP connections are still going nuts when they hit a slower link with excessive buffering.

And one of the major offenders is Linux commonly defaulting to a txqueuelen of 1000.
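
For what it's worth, a rough sketch of inspecting and shrinking that default on a Linux box; the interface name and the new length of 100 are assumptions, and whether 100 (or any other number) is actually right for your link speed is exactly the kind of tuning TFSeries argues about.

    # Sketch: read and lower the transmit queue length on a Linux interface.
    # "eth0" and the value 100 are assumptions for illustration; needs root.
    import subprocess

    IFACE = "eth0"

    def current_txqueuelen(iface):
        # sysfs exposes the same value that "ip link show" prints as qlen
        with open(f"/sys/class/net/{iface}/tx_queue_len") as f:
            return int(f.read().strip())

    print("txqueuelen before:", current_txqueuelen(IFACE))
    # Equivalent to: ip link set dev eth0 txqueuelen 100
    subprocess.run(["ip", "link", "set", "dev", IFACE, "txqueuelen", "100"], check=True)
    print("txqueuelen after:", current_txqueuelen(IFACE))

Note that txqueuelen is only the queueing-discipline layer; the driver and hardware rings underneath it are separate buffers, which is part of why the problem hides so well.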

Re:QoS (1)

SuricouRaven (1897204) | more than 3 years ago | (#34790336)

Wondershaper is able to apply a partial fix, but only to the upstream. Given that most traffic on a domestic connection is incoming, that doesn't help much.

Name wrong (2, Informative)

ebcdic (39948) | more than 3 years ago | (#34789816)

He's Jim Gettys, not Getty.

Re:Name wrong (-1)

Anonymous Coward | more than 3 years ago | (#34790248)

OMG, He is?!

Awsum, TTY in your name (5, Funny)

cerberusss (660701) | more than 3 years ago | (#34789836)

Jim Getty, one of the original X Window System developers and editor of the HTTP/1.1 spec

I'd murder four people just to have TTY in my name. Five if I could capitalize them, and postfix with a number. I'd name my son Dev.

You'd get a business card with something like Dev GeTTY1, Armadillo Avenue 64, Seattle, Washington

Re:Awsum, TTY in your name (1)

chronosan (1109639) | more than 3 years ago | (#34789906)

A quick trick to the Office of Vital Statistics and you could have your wish.

Re:Awsum, TTY in your name (0)

Anonymous Coward | more than 3 years ago | (#34789930)

A quick trick to the Office of Vital Statistics and you could have your wish.

Yes, but would he still get to commit the four murders?

Re:Awsum, TTY in your name (1)

Anonymous Coward | more than 3 years ago | (#34789936)

I hardly think that prostitution is the first avenue he should explore.

Re:Awsum, TTY in your name (3, Funny)

jmyers (208878) | more than 3 years ago | (#34789964)

So you are the reason I keep getting this in my logs "getty keeps dying. There may be a problem".

Re:Awsum, TTY in your name (3, Funny)

DikSeaCup (767041) | more than 3 years ago | (#34790100)

Oh my God, you've killed getty! You bastard!

Naming Your Son Dev (2)

djdevon3 (947872) | more than 3 years ago | (#34790044)

Easy, name him Devone = Dev1

Re:Awsum, TTY in your name (1)

dugjohnson (920519) | more than 3 years ago | (#34790376)

Not sure where you are, but in the USA you can call yourself anything you like, even change your name to it, as long as it's not for fraud. TTY PuTTY

Theoretically, could this be mitigated with ATM? (1)

blind biker (1066130) | more than 3 years ago | (#34789842)

If we all switched to ATM (Asynchronous transfer mode [wikipedia.org] ), would this issue be fixed, regardless of the size of RAM available at the endpoints? Yes, yes I realize that this would be utterly impractical; my question is theoretical.

Re:Theoretically, could this be mitigated with ATM (1)

uncledrax (112438) | more than 3 years ago | (#34789940)

If we all switched to ATM, I'd find you in your sleep and murder you.

TBH though.. MPLS sorta tries to split the difference in the 'good' ways. Especially if you drink the Kool Aid (tm) and have the budget to spend on rolling it out.

Re:Theoretically, could this be mitigated with ATM (1)

franciscohs (1003004) | more than 3 years ago | (#34790042)

Have you no sense of humor? He MUST be kidding...

PS: if he isn't, I'll second you

Solved (0)

Anonymous Coward | more than 3 years ago | (#34789862)

"Can bufferbloat be fixed before the Internet and 3G networks become nearly unusable for interactive apps?"
> Yes, with IPv6 QoS.

Re:Solved (1)

SuricouRaven (1897204) | more than 3 years ago | (#34790350)

Impractical on the internet: everyone would decide their traffic must be top priority. Then someone would write 'Windows Internet Accelerator Pro' and sell it for £8.99, so every idiot could do so.

pegged connection == latency, who'd of thunk it? (5, Insightful)

Shakrai (717556) | more than 3 years ago | (#34789874)

I read TFA and I'm not seeing the problem. He can't duplicate this issue unless he maxes out his connection, and then his latency goes to hell. No shit, Sherlock, that's what happens when your pipe is full and the packets have to wait in the queue to be transmitted. Am I stupid, or could he avoid this issue entirely by using QoS and/or rate-limiting his connection to some amount <100% of its maximum throughput? I have QoS at the office that keeps our connection from pegging (it's limited to around 75% on the download and 90% on upload) and have never once encountered an issue with latency or jitter. At home I only throttle the upload (to 90% of maximum) and have successfully run VPNs, bittorrent uploads and VoIP calls all at the same time without any headaches.

Really, what's the problem here?
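
For anyone wondering what that kind of throttling looks like in practice, here is a stripped-down sketch using Linux traffic control; the interface name, the assumed 1 Mbit/s upstream, and the 900 kbit ceiling (roughly 90% of it) are all illustrative assumptions, and this is a bare-bones version of what Wondershaper-style scripts set up for you.

    # Sketch: cap upstream below the line rate so the queue forms in a box you
    # control (and stays short) instead of in the modem's oversized buffer.
    # Interface and rate are assumptions; requires root and iproute2's "tc".
    import subprocess

    IFACE = "eth0"      # assumed interface facing the modem
    RATE = "900kbit"    # assumed ~90% of a 1 Mbit/s upstream

    def tc(*args, check=True):
        subprocess.run(["tc", *args], check=check)

    tc("qdisc", "del", "dev", IFACE, "root", check=False)   # clear any existing root qdisc
    tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb", "default", "10")
    tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:10",
       "htb", "rate", RATE, "ceil", RATE)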

Re:pegged connection == latency, who'd of thunk it (1)

Aladrin (926209) | more than 3 years ago | (#34789946)

I agree. I was having the same issues on a much smaller connection until I set up QOS. Now, I never have issues with any of my stuff.

Re:pegged connection == latency, who'd of thunk it (1)

Nursie (632944) | more than 3 years ago | (#34790054)

There's much more to it than that - the connection gets maxed out too easily, or it maxes out way below where it should, the reason being that too much is buffered. Too much buffering = lots of latency = TCP/IP latency and bandwidth calculations go out the window and you can't get the transfer speed you ought to.

Or so I understand it.

Re:pegged connection == latency, who'd of thunk it (1)

thijsh (910751) | more than 3 years ago | (#34790212)

Not quite the whole story. In peak hours, when your 20 Mbit ADSL drops to 2 Mbit speeds, it's simply because they oversold the line that much... The problem isn't buffering but trying to use 1000% of the bandwidth; that is never going to work smoothly. Buffers don't really change it, because your choice is either large buffers + higher latencies or small buffers + more dropped packets; your throughput will suffer about the same...

In my experience most software will handle somewhat higher latencies just fine, but too many dropped packets will fuck almost any protocol up (most are not so sturdy when you remove a few percent of the packets). So the choice of larger buffers is a valid one as long as the connection is this saturated.

Re:pegged connection == latency, who'd of thunk it (5, Informative)

vadim_t (324782) | more than 3 years ago | (#34790076)

Several issues:

1. People who aren't networking engineers don't know about QoS, or don't know/want to know how to configure it.
2. QoS used that way is a hack to work around an issue that doesn't have to be there in the first place
3. How do you determine the maximum throughput? It's not necessarily the line's official speed. The nice thing about TCP is that it's supposed to figure out on its own how much bandwidth there is. You're proposing a regression to having to tell the system by hand.
4. QoS is most effective on stuff you're sending, but in the current consumer-oriented internet most people download a lot more than they upload.

Re:pegged connection == latency, who'd of thunk it (3, Informative)

Dunbal (464142) | more than 3 years ago | (#34790258)

but in the current consumer-oriented internet most people download a lot more than they upload.

Because the current consumer infrastructure forces it onto you. I would happily seed my torrents all year long, except I have only 1/12th the upload bandwidth that I have for downloading. Since I need some of it for other things, uploading becomes impractical.

It's easy to blame the consumer, but there's a certain model imposed on him from the start.

QoS (2)

leuk_he (194174) | more than 3 years ago | (#34790446)

QoS generally does not work beyond the first hop. Your provider will most likely drop any QoS data. Some providers will try to build their own QoS systems (e.g. to show a low ping). However, if the latency has great variance due to all kinds of buffers, any algorithm will get the bandwidth wrong.

QoS based on traffic type will also get it wrong. For pure browsing/downloading it is relatively simple, but for VPN-encrypted Skype UDP traffic or game data it will never be optimal.

And as the blogger wrote, there is no simple solution, because the end user has a "dad, the internet is slow today" mentality. Couple that with a "reinstall your Windows" helldesk and the solution becomes VERY HARD.

Re:pegged connection == latency, who'd of thunk it (2)

TheThiefMaster (992038) | more than 3 years ago | (#34790108)

The problem is that maxing your connection from one site is causing everything else you do on your connection to be delayed / dropped as well, because it ends up queued behind anything that got buffered mid-transit from the first site. With a smaller buffer the large transfer would start to drop packets and back off sooner, allowing packets from other sources to "hop the queue".

Re:pegged connection == latency, who'd of thunk it (5, Interesting)

TheThiefMaster (992038) | more than 3 years ago | (#34790168)

As an extreme example, say you request a 1GB file from a download site. That site has a monster internet connection, and manages to transmit the entire file in 1 second. The file makes it to the ISP at that speed, who then buffers the packets for slow transmission over your ADSL link, which will take 1 hour. During that time you try to browse the web, and your PC tries to do a dns lookup. The request goes out ok, but the response gets added to the buffer on the ISP side of your internet connection, so you won't get it until your original transfer completes. How's 1 hour for latency?

The situation is only not that bad because:
A: Most download sites serve so many people at once and/or rate limit, so they won't saturate most people's connections
B: Most buffers in network hardware are still quite small

Re:pegged connection == latency, who'd of thunk it (3)

TheRaven64 (641858) | more than 3 years ago | (#34790316)

Mod parent up. Half way down the comments, and this is the first post that actually explains why 'bufferbloat' is something I should care about.

Re:pegged connection == latency, who'd of thunk it (3, Funny)

TheThiefMaster (992038) | more than 3 years ago | (#34790406)

So naturally, I instantly get modded down.

Re:pegged connection == latency, who'd of thunk it (4, Funny)

suv4x4 (956391) | more than 3 years ago | (#34790128)

Really, what's the problem here?

You really don't see the problem? How can you be so naive? Maybe you're new to this. All signs point to the fact that there is a problem.

Of course the problem is not obvious. The article itself says it'll completely surprise us. They know we won't believe it at first. But that's why we must believe it, or else it's Armageddon.

Would you risk an Armageddon, because of your inability to understand and see?

And that's, in short, why we must attack Iraq.

Wait, what were we talking about :P?

Re:pegged connection == latency, who'd of thunk it (0)

Anonymous Coward | more than 3 years ago | (#34790230)

If you'd have used fewer lines in yer joke, yed have gotten some mod points for Funnie.

Re:pegged connection == latency, who'd of thunk it (1)

Dunbal (464142) | more than 3 years ago | (#34790282)

It's kinda like the Fed printing another 600 billion and refusing to raise interest rates, while at the same time saying everything is fine and the economy is improving.

Re:pegged connection == latency, who'd of thunk it (1)

Coriolis (110923) | more than 3 years ago | (#34790150)

Yes, he could. What about all the non-technical users?

Re:pegged connection == latency, who'd of thunk it (1)

bytesex (112972) | more than 3 years ago | (#34790172)

It's not about *you* buffering - it's about the machine in the middle buffering. When that machine buffers instead of dropping, your TCP connection will never become aware that it has to play nice and lower its transmission window.

Re:pegged connection == latency, who'd of thunk it (1)

franciscohs (1003004) | more than 3 years ago | (#34790180)

I wouldn't say you're stupid, but you're not understanding the problem right. ISPs configure large buffers on low-bandwidth links so that the total throughput is higher, at the expense of latency, since, as you say, packets have to wait in the queue until they go out when you max out the connection.

This is NOT the right way of doing things. Buffers should be smaller: if you max out your connection, your packets will drop, and this will cause either TCP (at the protocol level) or the application (if there is no support at the protocol level, UDP for example) to back off and lower the transmission rate, WHILE keeping your latency at normal levels.

You'll still have some packet loss, sure, but the overall experience should be better if applications act nice.

And no, QoS is not the solution to this; QoS would have to work end to end on a connection, and that's simply not possible for internet users. What you mean is shaping, or policing, at the interface level, but if you're doing that, you're just avoiding filling up the buffers, so that you achieve what I described earlier.

Re:pegged connection == latency, who'd of thunk it (1)

drvar (722474) | more than 3 years ago | (#34790284)

Rate limiting is exactly what Gettys suggests as a workaround in a later blog post. However, most people would not be able to do that, and bufferbloat can also occur elsewhere along the connection path.

Re:pegged connection == latency, who'd of thunk it (0)

Anonymous Coward | more than 3 years ago | (#34790286)

The problem, moron, is that TCP is supposed to have congestion control to prevent that. The buffers are keeping congestion control from working as intended. You shouldn't *need* QoS unless you're *prioritizing* specific traffic.

The problem is that with bufferbloat pegged connec (0)

Anonymous Coward | more than 3 years ago | (#34790334)

| No shit Sherlock that's what happens when your
| pipe is full and the packets have to wait in
| the queue to be transmitted.

That is what happens with bufferbloat. Actually it is even worse, because the buffers will confuse TCP into thinking the pipe is thicker than it is, and fill it even more.

The point is: the internet is not supposed to have latency just because there is some bottleneck. The latency is supposed to be about the same all the time, and things are supposed to send data at a rate at which it can reach the other side. Both are lost if you have too many buffers, mostly because TCP measures the width of your pipe by looking at lost packets. And if some smartass network router thinks it should not drop packets when there is not enough network for everyone, then the net grinds to a halt periodically, killing latency, killing bandwidth, causing spikes in dropped packets, causing timeouts, and so on and so forth.
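
That boom-and-bust cycle can be shown with a toy model, assuming nothing fancier than loss-driven slow start (the sender doubles its window every round trip until the first drop) and a plain tail-drop queue at the bottleneck; the packet counts are made-up, illustrative values.

    # Toy model: a loss-driven sender keeps ramping up until the bottleneck
    # buffer finally overflows. A bigger buffer only delays that loss signal
    # while the standing queue (and so the round-trip time) keeps growing.
    BOTTLENECK = 100          # packets the link can drain per round trip (assumed)

    def rounds_until_first_drop(buffer_pkts):
        window, queued, rounds = 1, 0, 0
        while True:
            rounds += 1
            queued = min(queued + max(0, window - BOTTLENECK), buffer_pkts + 1)
            if queued > buffer_pkts:      # buffer overflows -> first loss
                return rounds
            window *= 2                   # no loss seen, keep doubling

    for buf in (30, 1000):                # small vs. bloated buffer, in packets
        r = rounds_until_first_drop(buf)
        print(f"buffer={buf:4d} pkts: first loss after {r} round trips; "
              f"a full queue adds ~{buf / BOTTLENECK:.1f} extra RTTs of delay")

With the small buffer the sender gets its loss signal almost as soon as it exceeds the link rate; with the bloated one it keeps accelerating while everything behind it sits in roughly ten round trips' worth of standing queue.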

Re:pegged connection == latency, who'd of thunk it (1)

SuricouRaven (1897204) | more than 3 years ago | (#34790394)

"that's what happens when your pipe is full and the packets have to wait in the queue to be transmitted."

And his point is that said queue is so excessively long, it's screwing up TCP's congestion avoidance. Those queues mean delay. Serious delay.

Ahhh HAH! (1)

Immostlyharmless (1311531) | more than 3 years ago | (#34789908)

I knew it wasn't my age that's making me worse at online games, it's all the jittering and latency caused by bufferbloat!

I think buffers are a good thing (0)

commodore64_love (1445365) | more than 3 years ago | (#34789924)

It's what lets me watch YouTube even over slow dialup or cellphone connections (buffer the video halfway, then press play).

Buffering also prevents one of the main flaws of broadcast TV. When there's interference the video skips a second or two. "Hi this is NBC News. And today in ...... met for its first session." What? Who? Where? That never happens with internet video (which just pauses for a second, buffers the video, and then resumes).

Re:I think buffers are a good thing (1)

the_one(2) (1117139) | more than 3 years ago | (#34790072)

That's not what the article talks about. He talks about TCP packet buffering, not video buffering. The point is that queueing up tons of packets on a small amount of bandwidth gives horrible latency. I'm not really knowledgeable about that stuff, so I can't tell whether he is correct that there is a problem, though.

Re:I think buffers are a good thing (0)

TheRaven64 (641858) | more than 3 years ago | (#34790152)

I think first posts are a good thing, combined with the other posts they help hold up my fence.

Re:I think buffers are a good thing (5, Interesting)

Coriolis (110923) | more than 3 years ago | (#34790206)

He's not arguing against application-level caching. He's saying that too much caching at the IP layer is confusing TCP's algorithm for deciding how fast the link between two points is. This in turn causes massive variability in how fast the data can be downloaded; or in your terms, how fast the video can be buffered (and, in fact, how much buffer the video player needs).

Re:I think buffers are a good thing (1)

SuricouRaven (1897204) | more than 3 years ago | (#34790422)

Different buffer, different level. Same word, but entirely different thing.

cringley explains (4, Interesting)

szo (7842) | more than 3 years ago | (#34789934)

Re:cringley explains (0)

Anonymous Coward | more than 3 years ago | (#34790120)

If we all ran XP and nothing but XP, there probably wouldn’t be a bufferbloat problem at all. But you can’t wear bellbottoms forever,...

There you have it! XP is the best operating system! And yes you can wear bell bottoms for ever - just wear boots and tell folks it's "boot cut" jeans.

Re:cringley explains (1)

pspahn (1175617) | more than 3 years ago | (#34790136)

Who is Cringely these days? I prefer the old one. This guy isn't nearly as entertaining.

Why not get to the point? Why make it a saga (1)

Tom DBA (607149) | more than 3 years ago | (#34789944)

I thought the times of people being paid by the word were over. There's so much to read these days. Couldn't Gettys state the situation in a few sentences before taking us on a journey through lightning, the ICU and tracings?

So, let me get this straight... (5, Insightful)

CFD339 (795926) | more than 3 years ago | (#34789966)

RAM is cheap.
High speed uplink is not cheap.
Peering agreements are manipulative, expensive, and sometimes extortionate.

So...

The poorly designed, poorly peered, under-allocated backhaul links can't handle the traffic that routers want to push through them -- but since RAM is cheap, operators just add RAM to the buffers so that when those backhaul lines slow down for a second the packets can get pushed through.

And we're blaming the buffer for the problem?

Re:So, let me get this straight... (1)

franciscohs (1003004) | more than 3 years ago | (#34790086)

It's not blaming the buffers; it's blaming the practice of configuring oversized buffers instead of having proper network infrastructure.

Re:So, let me get this straight... (5, Insightful)

phantomcircuit (938963) | more than 3 years ago | (#34790174)

TCP assumes that packets will be dropped when there is congestion; if they aren't, the congestion control algorithms fail (hard).

Re:So, let me get this straight... (0)

Anonymous Coward | more than 3 years ago | (#34790218)

So the buffer keeps packets from being dropped, right? Wouldn't the lack of a buffer require the packets to be resent, thus increasing latency?

Re:So, let me get this straight... (1)

Coriolis (110923) | more than 3 years ago | (#34790278)

Yes, because the buffer is causing false information to propagate through the network. The design of TCP is predicated on peers discovering that there's congestion between them as fast as possible, and negotiating a sensible transfer rate. It's not that all buffering is bad, but that it has diminishing returns, and eventually actually makes the situation worse.

Re:So, let me get this straight... (1)

jg (16880) | more than 3 years ago | (#34790390)

Your Linux (and Mac and Windows) laptops also suffer from bufferbloat.

As does your home router and 802.11 network.

As do some ISP's.

Don't think the problem is just in your broadband. It's all over.

Re:So, let me get this straight... (0)

Anonymous Coward | more than 3 years ago | (#34790448)

| The poorly designed, poorly peered, under
| allocated back haul links can't handle
| the traffic that routers want to push through
| them -- but since RAM is cheap, operators just
| add RAM to the buffers so that when those
| back-haul lines slow down for a second the
| packets can get pushed through.

That's what buffers are good for. And that's why Gettys says the solution is not just removing the buffers, but being more intelligent about them.

A buffer only helps if you can expect that, in a short time, you will have enough bandwidth and few enough new things to do that your buffer empties again, because you can send stuff out faster than new stuff comes in.

Once you are the bottleneck and will always (or even just for the next few minutes) have more to send than you can, buffering will not help. You only make packets wait, without any benefit.

You wait for the amount to send to drop below your outgoing bandwidth again. But because you do not drop packets, you buffer them, you are not telling anyone you are short of bandwidth. Thus TCP will not send you fewer packets but more, since no packet loss means you can bear the current load just fine. So you get more and more data at a faster rate, with no hope of sending it out. At some point your buffer is full and you have to drop stuff anyway. But by that time packets may have waited whole seconds in your buffer (a ridiculously long time for computers on the internet), and everyone sending data through you has sped up so much and is sending so fast that everything comes to an abrupt stop, until everyone realizes they are sending too fast (which they will only realize after some time, as they wait for acknowledgements that their packets arrived, which will never come). After this abrupt stop, things restart slowly, getting faster until they are again too fast for you. But again you do not tell anyone; you try to buffer, causing the next breakdown, and so on.

tail wagging the dog, final episode (0)

Anonymous Coward | more than 3 years ago | (#34789972)

it's a long nightmarish tale of deception & death, followed by a 'brief', 'surprise ending', whereas the tail discards the dog as excess baggage. plus, it was a fairly good movie, in addition to being an ongoing real life horror flick/sit-com. evile never sleeps. are you still calling this 'weather'?

Does he want to say (0)

Anonymous Coward | more than 3 years ago | (#34789978)

The internet will die? :(

Duh - I'm sure the cable cos. can fix for $$$ (0)

Anonymous Coward | more than 3 years ago | (#34789982)

From TFA: "Any time you have a large data transfer to or from a well provisioned server, you will have trouble."

Well there's your problem: you bought a home cable connection and actually expected to USE it. What are you, retarded?

Looks like a hype (1)

CaptainFarrell (1969762) | more than 3 years ago | (#34790012)

I think he's making up a new word. It's oversubscription. There are SLAs. And it's not caused by memory prices. I can similarly claim that it's caused by terrorism, as people spend more time on the Internet saturating channels because they're scared to interact in real life.

Re:Looks like a hype (5, Insightful)

ledow (319597) | more than 3 years ago | (#34790342)

You haven't read the article (or the many others around on LWN.net on the same topic). Basically, large buffers in networking gear, from DSL routers on your home network through to ISPs, mean that interactivity is *shite*. You might download GBs, but in terms of interactive applications it's useless, and we're facing ever-increasing latency and problems from trying too hard to cope with errors and delays (e.g. huge buffers that keep resending instead of just letting packets drop and having TCP sort it out by retransmission). TCP windows never shrink, because errors are buffered and retried so much by intermediate devices that any sort of window scaling is worthless; it never *sees* any packet loss.

Same devices, smaller buffers, and everything works fine and "faster" / "more responsive" all around. It actually would *save* money on new devices, because you don't need some huge artificial buffer; you can just drop the occasional packet. But the problem is so deeply embedded in run-of-the-mill hardware that it's almost impossible to escape at the moment, and thus EVERYONE from large businesses to home users is running on a completely sub-optimal setup because of it. Almost every networking device made in the last few years has buffers so large that they cause problems with interactivity, bandwidth control, QoS, etc. It's NOT just that a "faster connection" solves the problem - we are getting a percentage of optimal service that's steadily decreasing as buffers increase, even though we're improving all the time. That's the point. And it *is* caused by memory prices, because memory is so cheap that a huge thoughtless buffer costs no more than a tiny, thought-out buffer.

Only one solution. (0)

Anonymous Coward | more than 3 years ago | (#34790018)

Increase connection prices and decrease caps.

And here I was... (1)

stuckinphp (1598797) | more than 3 years ago | (#34790082)

Can bufferbloat be fixed before the Internet and 3G networks become nearly unusable for interactive apps?

And here I was wondering when 3G networks would become usable for interactive apps.

WTF? (0)

Anonymous Coward | more than 3 years ago | (#34790104)

What a load of bull. The possibility of a large buffer doesn't mean at all that everyone will overbuffer blindly even if it becomes a real problem.

Is this IT for retards?

Re:WTF? (0)

Anonymous Coward | more than 3 years ago | (#34790410)

Currently the only widely available method of congestion control is to drop packets.

If you buffer the packet, you don't drop it, so the sender does not get the slow-down message.

ECN (http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) would solve this, I think; you can then buffer the packet AND tell the sender to slow down.

But for it to work it would have to be widely deployed, and it isn't (many routers get confused when they get packets with ECN on, and drop them).
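
On Linux the host side of ECN is a single sysctl; a small sketch for checking it, where the value meanings follow the usual net.ipv4.tcp_ecn convention. Turning it on is an experiment rather than a fix, given the broken middleboxes mentioned above.

    # Sketch: check whether this Linux host will negotiate ECN for TCP.
    # net.ipv4.tcp_ecn: 0 = off, 1 = request on outgoing and accept on incoming,
    # 2 = accept only when the peer asks for it.
    with open("/proc/sys/net/ipv4/tcp_ecn") as f:
        value = int(f.read().strip())

    meanings = {0: "ECN disabled",
                1: "ECN requested on outgoing connections and accepted on incoming",
                2: "ECN accepted only when the peer requests it"}
    print(meanings.get(value, f"unknown setting ({value})"))

    # Enabling it (as root) would be:  sysctl -w net.ipv4.tcp_ecn=1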

You have not RTFA or not UTFA.. (5, Informative)

bmajik (96670) | more than 3 years ago | (#34790110)

What Jim is saying is that TCP flows try to train themselves to the dynamically available bandwidth, such that there is a minimum of dropped packets, retransmits, etc.

But in order for TCP to do this, packets must be dropped _fast_.

When TCP was designed, the assumptions about the price of ram (and thus, the amount of onboard memory in all the devices in the virtual circuit) were different -- namely, buffers were going to be smaller, fill up faster, and send "i'm full" messages backwards much sooner.

What the experimentation has determined is that many network devices will buffer 1 megabyte or MORE of traffic before finally dropping something and telling the TCP originator to slow down. And yet with a 1 MB buffer and a rate of 1 megabyte per second, it will take 1 second simply to drain the buffer.

The pervasive presence of large buffers all along the TCP virtual circuit, and the unspecified or tail-drop behavior of these large queues, means that TCP's ability to rate-limit is effectively nullified. In situations where the link is highly utilized, many degenerate behaviors occur, such that the overall link has extremely high latency and bulk traffic causes interesting traffic to be randomly dropped.

Personally, I used pf/altq on OpenBSD to try to manage this somewhat... but it's a dicey business.
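
The 1 MB at 1 megabyte per second figure generalizes directly; a back-of-the-envelope sketch, where the smaller buffer sizes are illustrative assumptions.

    # Drain-time arithmetic from the comment above, generalized.
    def drain_seconds(buffer_bytes, link_bytes_per_second):
        """Time a packet arriving behind a full buffer waits before it gets out."""
        return buffer_bytes / link_bytes_per_second

    LINK = 1_000_000    # 1 megabyte per second, as in the comment above
    for buf in (64_000, 256_000, 1_000_000):
        print(f"{buf:>9}-byte buffer at {LINK} B/s -> {drain_seconds(buf, LINK):.2f} s to drain")

The last line reproduces the one-second case; scale the link rate down to a typical broadband uplink and the same buffer becomes many seconds of queue.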

Re:You have have not RTFA or not UTFA.. (1)

CaptainFarrell (1969762) | more than 3 years ago | (#34790170)

Which doesn't prove that we should claim we've discovered something unknown and invent a new word for each technical detail everybody knows.

Re:You have have not RTFA or not UTFA.. (1)

Skaven04 (449705) | more than 3 years ago | (#34790184)

Mod parent up -- this is a great summary and matches with my reading of the article too.

Re:You have have not RTFA or not UTFA.. (1)

zmollusc (763634) | more than 3 years ago | (#34790302)

Yeah, that is how I read it. The presence of large buffers causes the 'controlling protocols' to go haywire, thus network transfer efficiency hurtles out of the window.

Concerning Boiled Frogs (4, Informative)

wiredog (43288) | more than 3 years ago | (#34790270)

If you put a frog in a pot of water and slowly raise the temperature it will try to jump out before the water reaches a temperature that is fatal to the frog.

Re:Concerning Boiled Frogs (5, Funny)

TheRaven64 (641858) | more than 3 years ago | (#34790358)

Only if you use a real frog. You can kill a hypothetical frog in this way.

Someone with networking chops (1)

jayhawk88 (160512) | more than 3 years ago | (#34790366)

...chime in please. It seems like the solution to this is potentially all user-side, and controllable? Adjust the buffers in your devices if you can, or perhaps find a way to reduce the TCP buffer in your modern operating system?
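
For the OS-side piece of that, a hedged sketch of what "reduce the TCP buffer" can mean on Linux: the sysctl is real, the 256 KB cap is an arbitrary assumption rather than a recommendation, and per the rest of the thread this only tames your own stack, not the buffers in your modem or your ISP's gear.

    # Sketch: inspect the kernel's TCP send-buffer autotuning limits and show
    # what a smaller cap would look like. The 256 KB figure is an assumption.
    def read_sysctl(name):
        with open("/proc/sys/" + name.replace(".", "/")) as f:
            return f.read().split()

    lo, default, maximum = (int(v) for v in read_sysctl("net.ipv4.tcp_wmem"))
    print(f"tcp_wmem: min={lo} default={default} max={maximum} bytes")

    CAP = 256 * 1024
    if maximum > CAP:
        # As root this could be applied with something like:
        #   sysctl -w net.ipv4.tcp_wmem="<min> <default> <cap>"
        print(f"a bloat-conscious setting might be: {lo} {default} {CAP}")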

Hmm (1)

jav1231 (539129) | more than 3 years ago | (#34790392)

I won't discount the buffer problem because I just don't know. But the single biggest contributor to "latency" when I visit a webpage is connectivity to ad servers. I can click to go to a site and stare at a blank screen while my status bar flickers with: "Transferring data from ads.that.you.could.care.less.about.com."

That explains Realvideo... (0)

Anonymous Coward | more than 3 years ago | (#34790436)

I've always wondered why RealVideo streams kept buffering... ad nauseam. Now we know why.
