
Controlling Bufferbloat With Queue Delay

Soulskill posted about 2 years ago | from the more-effective-than-harsh-language dept.

CowboyRobot writes "We all can see that the Internet is getting slower. According to researchers, the cause is persistently full buffers, and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data. The metaphor is grocery store checkout lines: a cramped system where one individual task can block many other tasks waiting in line. But you can avoid the worst problems by having someone actively manage the checkout queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management). However, AQM (and the metaphor) breaks down in the modern age, when queues are long and implementation is not quite so straightforward. Kathleen Nichols at Pollere and Van Jacobson at PARC have a new solution that they call CoDel (Controlled Delay), which has several features that distinguish it from other AQM systems. 'A modern AQM is just one piece of the solution to bufferbloat. Concatenated queues are common in packet communications with the bottleneck queue often invisible to users and many network engineers. A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.'"
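
For the curious, the core idea of CoDel is compact enough to sketch in a few dozen lines of Python. What follows is an illustrative simplification, not the authors' reference code: it timestamps each packet on entry, tolerates queue delay above a small target for one interval, and then drops packets at an increasing rate until the delay sinks back under the target. The 5 ms / 100 ms constants echo the values discussed in the article; the class and method names are invented for this sketch.

    import time
    from collections import deque
    from math import sqrt

    TARGET = 0.005    # 5 ms of standing queue delay is acceptable
    INTERVAL = 0.100  # 100 ms, on the order of a worst-case RTT

    class CoDelQueue:
        """Simplified CoDel sketch, not the reference implementation."""
        def __init__(self):
            self.q = deque()         # holds (enqueue_time, packet)
            self.first_above = None  # when sojourn time first exceeded TARGET
            self.dropping = False
            self.count = 0           # drops in the current dropping state
            self.next_drop = 0.0

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            while self.q:
                t_in, packet = self.q.popleft()
                now = time.monotonic()
                if now - t_in < TARGET:
                    # Standing delay is fine: leave any dropping state.
                    self.first_above = None
                    self.dropping = False
                    return packet
                if self.first_above is None:
                    # Delay just crossed TARGET: give flows one INTERVAL to react.
                    self.first_above = now + INTERVAL
                    return packet
                if not self.dropping:
                    if now < self.first_above:
                        return packet        # still within the grace interval
                    self.dropping = True     # delay stayed high a full INTERVAL
                    self.count = 1
                    self.next_drop = now + INTERVAL
                    continue                 # drop this packet, try the next
                if now < self.next_drop:
                    return packet
                # Control law: successive drops come closer together,
                # spaced INTERVAL / sqrt(count), until delay sinks below TARGET.
                self.count += 1
                self.next_drop = now + INTERVAL / sqrt(self.count)
                # no return: this packet is dropped, loop dequeues the next
            return None                      # queue empty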

134 comments

I finally understand. (-1, Offtopic)

SnapaJones (2634697) | about 2 years ago | (#39937847)

Oh, Jesus, caaaaaaan yoooou seeeeeee, what's insiiiiiide their underweeeeeeeeear?

Because they don't fuckin' use Gamemaker!

s/slower/laggier/ (5, Insightful)

diamondmagic (877411) | about 2 years ago | (#39937855)

The Internet is not getting slower. It is becoming laggier. Come on, people, learn the difference.

Re:s/slower/laggier/ (4, Insightful)

gstrickler (920733) | about 2 years ago | (#39937915)

And smaller buffers will help. Larger buffers do almost nothing to increase throughput, but they can increase latency. Having buffers isn't a problem. Having buffers that are too large is a problem.

Re:s/slower/laggier/ (2)

DarkOx (621550) | about 2 years ago | (#39939869)

Depends...

Suppose you have a router that has link A connected at 10Mb/s, link B at 10Mb/s, and link C at 300Kb/s. You have a host on the far end of A sending packets to something on the far end of C. The traffic is highly bursty. TCP does reliability end to end, so if the host on the end of C misses packets because the router discarded them, that is all traffic that has to run across link A again, which cuts down the available bandwidth for A to B. If the router had a large buffer, the burst of traffic from A for C might have been stored, preventing the retransmit on A. This works for bursty traffic; obviously the buffer will never flush if the A-to-C flow is continuous.

Buffering is still important. It's just not as simple now that the internet is less bursty. More transfers are large files, streaming media, etc., and fewer are "push that e-mail message or 5K web page and done."

Re:s/slower/laggier/ (1)

phlinn (819946) | about 2 years ago | (#39941311)

Buffers are so large that A never receives an ACK from the destination, and resends anyway.

Buffer size is not the real problem (1)

TheLink (130905) | about 2 years ago | (#39941029)

I disagree. It's fine for buffers to be very big, practically "infinite" even. Buffer size does not have to be linked with latency at all. It currently is only because most routers are doing it wrong ;).

If you really want to address latency what routers should do is keep track of how long packets have been in the router (in clock ticks, milliseconds or even microseconds) and use that with QoS stuff (and maybe some heuristics) to figure out which packets to send first, or to drop.

For example, "bulk/throughput" packets (think email) might be kept around for 100+ milliseconds. In contrast, while latency-sensitive packets get priority, they might be dropped if they cannot be sent within tens of milliseconds (so both sides can detect earlier that the comms channel can't cope, and possibly deal with it sooner rather than waiting until the latency gets intolerable).

With my proposed approach a latency insensitive burst of big packets through a high bandwidth channel does not necessarily have to cause packet loss at all - whether from latency sensitive or latency insensitive channels. There would be enough buffer space to hold all the big packets while letting the low latency stuff through. Whereas with the smaller buffer size approach you are more likely to have unnecessary packet loss - which reduces throughput.

Even if the router doesn't do any QoS stuff it's way more useful to people who care about latency to be able to configure a router so that the max latency it will ever cause is 10 milliseconds. Especially if the router has multiple network connections with different bandwidths (e.g. 2Mbps, 100Mbps).
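
Here is a rough Python sketch of the parent's proposal, with invented names and the deadlines taken from the examples above (100 ms for bulk traffic, tens of milliseconds for latency-sensitive traffic): latency-sensitive packets are sent first, and anything that has sat past its class deadline is dropped instead of sent.

    import time
    import heapq
    from itertools import count

    # Hypothetical per-class queueing deadlines, following the proposal above:
    # bulk traffic may sit for up to 100 ms; latency-sensitive packets are
    # dropped if they can't be sent within 20 ms.
    DEADLINE = {"bulk": 0.100, "latency": 0.020}
    PRIORITY = {"latency": 0, "bulk": 1}   # lower value is sent first

    class DeadlineQueue:
        def __init__(self):
            self.heap = []
            self.seq = count()   # tie-breaker so heapq never compares packets

        def enqueue(self, cls, packet):
            heapq.heappush(
                self.heap,
                (PRIORITY[cls], time.monotonic(), next(self.seq), cls, packet))

        def dequeue(self):
            now = time.monotonic()
            while self.heap:
                _, t_in, _, cls, packet = heapq.heappop(self.heap)
                if now - t_in > DEADLINE[cls]:
                    # Too old to be useful: drop it, so both ends find out
                    # early that the channel can't cope, instead of queuing.
                    continue
                return packet
            return None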

Re:s/slower/laggier/ (-1)

Anonymous Coward | about 2 years ago | (#39937923)

Why is this pedantry modded insightful? Isn't lag just one form of slowness?

Re:s/slower/laggier/ (1)

Anonymous Coward | about 2 years ago | (#39937941)

Well, when I think of 'slow' I think of Mb/sec. In this respect, no, the Internet has not gotten slower; in fact, it has gotten 'faster'. However, when I think of 'laggy' I think of 'time it takes to load a webpage'. And since webpages and pretty much all files have been getting larger ...

Re:s/slower/laggier/ (1)

jones_supa (887896) | about 2 years ago | (#39938555)

You could even meta-argue that point further, as we recently had the story about global broadband speeds dropping [slashdot.org].

Re:s/slower/laggier/ (0)

Anonymous Coward | about 2 years ago | (#39939253)

My argument would lean heavily on the perception I have that many sites are becoming heavily dependent on large imported libraries of javascript, bloated flash interfaces, sheer amounts of dynamic content on one page, and very heavy multimedia content density. So it's largely not only a media format and queueing issue, but also a basic design and implementation issue. I mean, I've seen pages with cross-site requests from something like 8 domains (generally either data mining or bloated library imports, sometimes content delivery networks, half of which are slow), and some with literally dozens of local scripts, many of which contribute little to nothing to the usability of the site in question, or make it worse, like Google Instant. I would contend that this wasn't common practice way back when, and hasn't ever been implemented with the drastic impact on networks in mind; or perhaps people really don't care unless it directly affects them, which is some sort of disease running rampant through our species.

While it's easy to say we have greater raw capability to deliver data, the more of that we see getting implemented, the more abused the networks get, and the more weaknesses in underlying standards and protocols become crucial, instead of just the slight inconveniences they once were. In response, some crude traffic shaping is implemented that really picks unfairly on certain players, e.g. torrent traffic, while others, like the assloads of scanning that get done in response to the torrent traffic, don't get treated with the same severity.

It's just like in game design, where we see so much heavier influence on graphics and sounds than gameplay, that you often get horrible interface, no plot, and fairly shitty control, and then the more time goes on, the worse it gets. In spite of there being an underlying capability to improve, the situation degrades as a constant march towards the product that will literally dazzle the user right up until the box is opened, and then self-destruct, like a message for Inspector Gadget. Only it may also destroy an entire industry, and the work force that depends upon it to feed the family and pay the bills at home. (Although, I doubt this will happen for some time, since there are far too many people content with forking over what could feed two persons for over a week for mediocre or shittier games, and their horrible sequels)

Re:s/slower/laggier/ (1)

Anonymous Coward | about 2 years ago | (#39939275)

To finish off this rant, I want to shake my tiny fist of rage at Gawker Media sites, and all similar sites, which seem to require javascript even to read the text of a primarily text article. Plain text dependent upon javascript! Oh, the humanity! Since when is it okay to bastardize the DOM and sane page design concepts to the point that you can hide all of the body text from a user who doesn't want to go around allowing every rogue script some asshole implements to run on his/her box(en)?

Re:s/slower/laggier/ (1)

notgm (1069012) | about 2 years ago | (#39937947)

if we weren't so pedantic, the buffers wouldn't have the need to store anything.

insightful modifier is insightful.

Re:s/slower/laggier/ (0)

Anonymous Coward | about 2 years ago | (#39939115)

Latency and Throughput are not the same, to the degree that an semi trailer and a 3 foot by 4 foot cardboard box are not the same. Latency describes how quickly data is first received, indicating how much lag occurred early on in the comms (e.g. in initiating a stream, or starting to load an image). Throughput describes the average speed at which data is received, indicating how much lag is experienced throughout the comms, whether intermittent or not (e.g. how many times a buffer underruns). "Pedantic" is making an entirely valid point, because in no uncertain terms, a connection can have high latency with high throughput, or conversely, low latency with low throughput.

Re:s/slower/laggier/ (0)

Anonymous Coward | about 2 years ago | (#39939161)

an semi trailer! i am quite fantabulous.

Also, to clarify, there are bottlenecks. The network path is not exactly linear in these terms. For instance, you may have a satellite link, where the signal travels such a great distance (up to orbit and back) that it is common to have very high latency even with a broadband connection. On the other hand, you can have low latency between two units separated by just feet (of cabling, or airwaves), depending on what's in play, say a fast switch in front of a slow router.

Re:s/slower/laggier/ (4, Informative)

Xtifr (1323) | about 2 years ago | (#39937927)

Yup, and another error in TFS is:

According to researchers, the cause is persistently full buffers.

should be "a cause".

Lame, misleading summaries are par for the course around here, though. But look on the bright side--it keeps us on our toes, sorting sense from nonsense, and helps us spot the real idiots in the crowd. :)

At least this one had a link to a fairly reliable source. It wasn't just blog-spam to promote some idiot's misinterpretation of the facts. Might have been nice to provide a link to bufferbloat.net [bufferbloat.net] or Wikipedia on bufferbloat [wikipedia.org] as well, for background information, but what can you do?

Re:s/slower/laggier/ (0)

Anonymous Coward | about 2 years ago | (#39939303)

It seems so common to see this manipulative (whether intentional or not) wording these days that I believe learning about debate and logical fallacy should begin by second grade, at least in the US.

I have a deep hatred for this pattern people have of proclaiming some mutual exclusivity of blame, where causation is entirely linear, and all evidence of it is irrefutable proof upon discovery. It's tunnel vision or double vision! Prepare the lynch mobs!

Re:s/slower/laggier/ (1)

julesh (229690) | about 2 years ago | (#39939509)

To be fair, this isn't exactly the first /. article discussing bufferbloat, so presumably both submitter and editor assumed we already knew what it was.

Re:s/slower/laggier/ (1)

diamondmagic (877411) | about 2 years ago | (#39939687)

TFA is actually one of the first coherent explanations of bufferbloat I've seen. Bufferbloat.net tells me they can fix my chaotic and laggy network performance, alright, fine. But... how?

Where's the incentive? (2)

rogueippacket (1977626) | about 2 years ago | (#39937899)

Today, there is no incentive for an ISP to consider spending money on this. For their private customers, they sell QoS, which guarantees those customers a better queuing method. Extremely profitable. For consumers, it makes sense to simply continue investing in infrastructure. Adding capacity from the street to the CO not only eliminates the issue, but also allows the ISP to provide better, more profitable services. In short, we will likely see better queuing methods integrated with future routers. This may be one of them, but only time will tell, and nobody will discard all of their equipment today to get it. The issue is just too minor while capacity remains cheap and QoS profitable.

Re:Where's the incentive? (1)

skids (119237) | about 2 years ago | (#39938001)

In short, we will likely see better queuing methods integrated with future routers

Not holding my breath, given the age and demonstrated effectiveness of SFQ variants and their non-presence in modern routing platforms.

What TFA left me wondering was whether their algorithm will prove resilient to being combined with prioritization and connection-based/host-based/service-based fairness strategies and various ECN mechanisms.

Re:Where's the incentive? (0)

Anonymous Coward | about 2 years ago | (#39938251)

QoS doesn't fix the problem at all, it just separates the buffers being used for different types of traffic so they don't interfere with each other - you still get the bufferbloat issue within those buffers.

Re:Where's the incentive? (1)

SuricouRaven (1897204) | about 2 years ago | (#39938855)

Unless the peak traffic is less than the available capacity, in which case the buffer never nears full. Which can be the case for a high-priority queue used for latency-sensitive but low-rate traffic like VoIP.

Re:Where's the incentive? (5, Informative)

Ungrounded Lightning (62228) | about 2 years ago | (#39938527)

Today, there is no incentive for an ISP to consider spending money on this. For their private customers, they sell QoS, which guarantees their customers a better queuing method. Extremely profitable. For consumers, it makes sense to simply continue investing in infrastructure.

You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.

The packets leaving a router, once it has figured out where they go, are stored in a buffer, waiting their turn on the appropriate output interface. While there are a lot of details about the selection of which packet leaves when, you can ignore it and still understand this particular issue: Just assume they all wait in a single first-in-first-out queue and leave in the order they were processed.

If the buffer is full when a new packet is routed, there's nothing to do but drop it (or perhaps some other packet previously queued - but something has to go). If there are more packets to go than bandwidth to carry them, they can't all go.

TCP (the main protocol carrying high-volume data such as file transfers) attempts to fully utilize the bandwidth of the most congested hop on its path and divide it evenly among all the flows passing through it. It does this by speeding up until packets drop, then slowing down and ramping up again - and doing it in a way that is systematic so all the TCP links end up with a fair share. (Packet drop was the only congestion signal available when TCP was defined.)

So the result is that the traffic going out router interfaces tends to increase until packets occasionally drop. This keeps the pipes fully utilized. But if buffer overflow is the only way packets are dropped, it also keeps the buffers full.

A full buffer means a long line, and a long delay between the time a packet is routed and the time it leaves the router. Adding more memory to the output buffer just INCREASES the delay. So it HURTS rather than helping.

The current approach to fixing this is Van Jacobson's previous work: RED (Random Early Drop/Discard). In addition to dropping packets when the buffer gets full, a very occasional randomly-chosen packet is dropped when the queue is getting long. The queue depth is averaged - using a rule related to typical round-trip times - and the random dropping increases with the depth. The result is that the TCP sessions are signalled early enough that they back off in time to keep the queue short while still keeping the output pipe full. The random selection of packets to drop means TCP sessions are signalled in proportion to their bandwidth and all back off equally, preserving fairness. The individual flows don't have any more packets dropped on average - they just get signalled a little sooner. Running the buffers nearly empty rather than nearly full cuts round-trip time and leaves the bulk of the buffers available to forward - rather than drop - sudden bursts of traffic.
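
In rough Python, the early-drop decision described above might look like the sketch below. This is illustrative only: real RED implementations use fixed-point arithmetic and refinements such as "gentle" RED, and the thresholds here are invented.

    import random

    W = 0.002               # EWMA weight; relates the average to RTT timescales
    MIN_TH, MAX_TH = 5, 15  # thresholds on the averaged queue depth (packets)
    MAX_P = 0.02            # early-drop probability as avg reaches MAX_TH

    avg = 0.0               # averaged queue depth

    def red_should_drop(current_queue_len):
        """Decide whether to randomly drop an arriving packet."""
        global avg
        avg = (1 - W) * avg + W * current_queue_len   # smooth over bursts
        if avg < MIN_TH:
            return False    # queue is short on average: never drop early
        if avg >= MAX_TH:
            return True     # queue persistently long: drop
        # In between, drop with probability rising toward MAX_P, so flows
        # are signalled in proportion to their bandwidth and back off early.
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p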

ISPs have a VERY LARGE incentive to do this. Nearly-full queues increase turnaround time of interactive sessions, creating the impression of slowness, and dropping bursty traffic creates the impression of flakiness. This is very visible to customers, and doing it poorly leaves the ISP at a serious competitive disadvantage to a competitor that does it well.

So ISPs require the manufacturers of their equipment to have this feature. Believe me, I know about this: Much of the last 1 1/2 years at my latest job involved implementing a hardware coprocessor to perform the Van Jacobson RED processing in a packet processor chip, to free the sea of RISC cores from doing this work in firmware and save their instructions for other work on the packets.

Re:Where's the incentive? (0)

Anonymous Coward | about 2 years ago | (#39939343)

I read that RED had decent potential. How is VJ's RED?

Huh? (1)

ThatsNotPudding (1045640) | about 2 years ago | (#39939733)

This is very visible to customers and doing it poorly leaves the ISP at a serious competitive disadvantage to a competitor that does it well.

Assuming you are a fellow USian, what competition?

Re:Huh? (1)

jon3k (691256) | about 2 years ago | (#39940563)

If you live in the average U.S. 3rd tier (or better) city: DSL vs Cable vs 4G

Re:Huh? (1)

Cid Highwind (9258) | about 2 years ago | (#39940969)

That would be the DSL provider with long queues and lots of lag, the cable provider with long queues and lots of lag, and the "4G" operator (more like 3.25G speed) with long queues, lots of lag, signal strength issues, and a 100MB monthly cap.

If this theory of competition between the duopoly worked, cable and DSL would both have better customer service and lower rates...

Re:Where's the incentive? (2)

HighBit (689339) | about 2 years ago | (#39940899)

You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.

To be fair, it's about both.

Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep the queue can be if it's never used -- it doesn't matter how many packets can be queued if there's enough bandwidth to push every packet out as soon as it's put in the queue.

That said, your point about AQM being a valid solution to congestion is, of course, right on:

To avoid large (tens of milliseconds or more) queue backlogs on congested links, you use Active Queue Management. The idea with AQM is, if you have to queue packets (because you don't have enough bandwidth to push everything out in under 10 or 20 milliseconds), then start dropping packets (or ECN-marking them), so TCP's congestion control algorithms kick in.

Dropping packets before they get put in the queue is known as tail-drop AQM. Tail-drop AQM is actually one of the worst ways to do AQM. RED (marking or dropping packets *before* the queue becomes full) and head-drop AQM are better for latency and throughput. However, even a simple tail-drop AQM can *drastically* reduce latency on an oversubscribed (congested) link. AQM really works, and it works quite well.
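
The difference between the two drop points is easy to show in miniature (a sketch with an arbitrary queue limit):

    from collections import deque

    LIMIT = 100  # maximum queue length in packets (arbitrary)

    def tail_drop_enqueue(q: deque, packet):
        # Tail drop: the *newest* packet is discarded, so the sender only
        # learns about congestion after the whole standing queue drains.
        if len(q) >= LIMIT:
            return  # drop the arriving packet
        q.append(packet)

    def head_drop_enqueue(q: deque, packet):
        # Head drop: discard the *oldest* packet instead. The gap reaches
        # the receiver sooner, so TCP's congestion control reacts roughly
        # one queue-length earlier than with tail drop.
        if len(q) >= LIMIT:
            q.popleft()
        q.append(packet)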

TCP attempts to divide traffic for different streams evenly among all the flows passing through it.

Well, no, it doesn't. Each stream tries to fight for its own bandwidth, backing off when it notices congestion (dropped or ECN-marked packets). That means that the first stream that is going over the congested link will use the bulk of the bandwidth, because it will already be transmitting at full speed before other streams try to ramp up. The other streams won't be able to ramp up to match the first stream, as they will constantly encounter congestion, and the first stream won't back off enough to let other streams ramp up to match it. To truly enforce fairness between streams, you need fair-queuing technologies, such as SFQ.

ISPs have a VERY LARGE incentive to do this.

ISPs certainly use AQM on their core routers, but they have an incentive NOT to use AQM where it really matters: on the congested link between your computer and the gateway. In other words, they don't set up proper AQM on the cable modem or DSL modem.

They don't set up AQM there because they have another incentive: maximizing speed-test results. AQM by definition slows traffic down, and slower speed-test results are what customers seem to care about above all else. People don't call support to say they're seeing over 100ms of latency, they call support saying they're paying for 10mbits and they want to see 10mbits on the speed-test site.

I don't have any faith that ISPs are going to fix this any time soon. However, AQM really does make a huge difference in the quality of one's internet connection. So much so that the first thing I do when setting up any new shared network (e.g. home or office network) is put a Linux box in between the cable/DSL modem and the rest of the network. There are many AQM scripts out there, but this one is mine: http://serverfault.com/questions/258684/automatically-throttle-network-bandwidth-for-users-causing-bulk-traffic/277868#277868 [serverfault.com]

My script sets up HFSC and SFQ, as well as an ingress filter, to drop packets before they start filling up the large cable/DSL modem buffers. It does a bang-up job of reducing latency; I can hardly internet without AQM in place any more.
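
The linked script does its work with Linux tc (HFSC and SFQ plus an ingress policer). The essential ingress idea can be sketched in Python as a token bucket run just below the modem's rate, so the standing queue forms in a box you control instead of in the modem's bloated buffer; the rates below are invented for illustration.

    import time

    class IngressPolicer:
        """Token bucket run just below the modem's downstream rate (assumed
        10 Mbit/s line, policed at 9 Mbit/s) so queuing happens here, in a
        box we control, and not in the modem's oversized buffer."""
        def __init__(self, rate_bits=9_000_000, burst_bytes=15_000):
            self.rate = rate_bits / 8.0     # refill rate in bytes per second
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def admit(self, packet_len):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return True    # forward the packet
            return False       # drop: TCP backs off before the modem buffer fills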

You can do the same thing (or at least a similar thing) with some of the SoHo Linux routers running DD-WRT and the like. Most of the scripts for those focus on QoS first and AQM second (if at all), which is a huge mistake. Maybe someday we'll have off-the-shelf SoHo routers that can do *proper* AQM. Now there's a start-up idea if I ever had one.

Re:Where's the incentive? (1)

TheLink (130905) | about 2 years ago | (#39941173)

RED seems to be a primitive hack job to me.

My proposal is this: http://tech.slashdot.org/comments.pl?sid=2837433&cid=39941029 [slashdot.org]

Lastly, if RED doesn't take packet size into account when it drops, it hurts lower-bandwidth channels with small packets disproportionately more than the ones with big packets, and most latency-sensitive applications use small packets. When communicating across, say, the Pacific Ocean, unnecessary packet loss can hurt a lot more (2 × 50 milliseconds?).

Remember AT&T and their 9 second 3G ping times (4, Interesting)

Zondar (32904) | about 2 years ago | (#39937955)

Yep, same cause. They attempted to minimize packet loss by increasing the buffers in their network. The user experience was horrible.

http://blogs.broughturner.com/2009/10/is-att-wireless-data-congestion-selfinflicted.html

Re:Remember AT&T and their 9 second 3G ping ti (2)

Crypto Gnome (651401) | about 2 years ago | (#39938105)

A long time ago, when the earth was greener, someone promoted the concept of an internet with ZERO packet loss.

My InterTubes are BETTER because I HAVE ZERO LOSS!!!

Oddly enough, such a business model turned out to be unsustainable because
(1) it's financially expensive (between one thing and another)
(2) doing this the less expensive way (i.e. by slathering on bigger buffers) introduces excessive latency (for some customer-designated value of "excessive")

For the life of me I don't understand how ANYBODY can be allowed to run a company without at least vaguely understanding the concept of TANSTAAFL.
  • You cannot change the laws of physics
  • Perpetual Motion never is
  • There is no Miracle Cure
  • There is no Get Rich Quick Scheme

And, finally: No that Hot Blonde Supermodel with MASSIVE Bazingas does NOT find you attractive, not in the slightest, no matter how much she may drink/snort/inject or pop.

If you want to fix throughput issues without spending lots of money or sacrificing latency, then you're going to need a better algorithm (yes folks, hard research and careful thinking).

Re:Remember AT&T and their 9 second 3G ping ti (2)

SuricouRaven (1897204) | about 2 years ago | (#39938873)

Correction: There is no get-rich-quick scheme with a high probability of success. There are a few (like the lottery) which may get you rich quick, but with only a small probability.

And why... (5, Interesting)

Anonymous Coward | about 2 years ago | (#39937973)

Is the internet getting slower? (laggier)

because the simplest pages are HUGE BLOATED MONSTROSITIES!

Between flash and ads. And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web. All this shit serves no purpose other than to enrich some marketers. And EVERY SINGLE PAGE has to have a 'comment' section and other totally useless shit tacked on as well.

Just this little page here on slashdot. With less than a dozen replies. Tops 80k so far. And that's with everything being blocked that can be.

slower? laggier? no... the signal to noise ratio is sucking major ass.

Re:And why... (0)

Anonymous Coward | about 2 years ago | (#39938235)

Youtube is probably the premier consumer of bandwidth for web pages, which means the internet is slow because it's full of funny cute kittens*

*actually it's full of people downloading 1080p porn videos, but you get the idea.

Re:And why... (1)

jones_supa (887896) | about 2 years ago | (#39938653)

And every single page loading crap from all around the world as their 'ad partners', hit counters, click counters, +1 this, like this, digg this, and all the other stupid social media crap that has invaded the web.

Amen, bro. I hate that crap being sprinkled all over. Even without the +1 buttons, there are too many pages framed with various sidebars and menus.

Re:And why... (0)

Anonymous Coward | about 2 years ago | (#39939611)

that's why an HTTP request firewall/filter is awesome to have! requestpolicy ftw... too bad it's firefox only ;_;

Re:And why... (1)

buchner.johannes (1139593) | about 2 years ago | (#39938827)

If you used to have a 56kbit modem and now you have a 10Mbit connection, that's going up by a factor of roughly 200. A classic html page was maybe 5kB (no images), so now it should be allowed to be 1MB. If you had a few images then, that would account for a youtube video now.

Re:And why... (2)

hvm2hvm (1208954) | about 2 years ago | (#39938953)

The total size is not the only thing that matters. What matters is the fact that most pages make requests to as many as 10 domains and 50 URLs when they load. That means multiple DNS requests, multiple connections, etc. There are also a lot of pages that load stuff through javascript and/or css, which adds another stage or two of loading.

Re:And why... (2, Informative)

Anonymous Coward | about 2 years ago | (#39939321)

What matters is not (if we put privacy aside) that 10 domains are requested, but that there are 10 (mostly) different routes to 10 different servers. If a single one of these routes or servers is slow, the website will load slowly as well.

Re:And why... (1)

KiloByte (825081) | about 2 years ago | (#39939123)

What click counters, "+1 this" or "digg this" are you talking about? I guess you failed step 2 of installing a browser.

Active Queue Management (0)

Anonymous Coward | about 2 years ago | (#39938035)

WAT? No mention of Fry's?

Single-queue multi-cashier. Best. Checkout. System. Evar.

Re:Active Queue Management (1)

SuricouRaven (1897204) | about 2 years ago | (#39939043)

Mathematically optimal too, provided all customers/packets take equal time to process. The only problem is that in the real world it requires awkward physical queue layouts.

Re:Active Queue Management (1)

realityimpaired (1668397) | about 2 years ago | (#39939339)

Not really... Tim Horton's has been doing it for years... they just have the queue line up parallel to the banks of cash registers, and can loop it back on itself. I think they can actually have more people in queue than you would with straight lines like the grocery checkout.

Additionally, when one person does take longer to process (say they're paying for their $15 order with pennies), they don't hold up the people in line behind them, because the queue just routes around them.

Re:Active Queue Management (2)

arth1 (260657) | about 2 years ago | (#39939493)

Mathematically optimal too, provided all customers/packets take equal time to process. The only problem is that in the real world it requires awkward physical queue layouts.

Yeah, it is irritating when you arrive at the airport at five o'god in the morning to catch a flight leaving at nine, and as the only customer, you have to walk a labyrinth a mile long to get to the way too cheerful check-in assistant who is patiently waiting.
Net result: Added latency.

Then you hit the security line, which is already full, presumably with people who spontaneously spawned from the walls between check-in and security. And in the security line, this one-line approach does not appear to help speed at all.

Re:Active Queue Management (0)

Anonymous Coward | about 2 years ago | (#39940515)

Here's a tip, You can duck under the velvet ropes. It's OK we promise not to tell your mom about it.

Re:Active Queue Management (1)

arth1 (260657) | about 2 years ago | (#39940959)

Tip: Even if you assume that everybody is physically able to do so, in today's US security theatre, that's likely to get the goons to come running, or at the very least earn you the dreaded "s" scribbled on your boarding pass.

Summary so awful, it just hurts. (3, Insightful)

TiggertheMad (556308) | about 2 years ago | (#39938037)

We all can see that the Internet is getting slower.

Can we? It looks like it has been getting faster to me....

According to researchers, the cause is persistently full buffers,

What researchers? What buffers? Server buffers? Router buffers? Local browser buffers? Your statements are so vague as to be useless.

and the problem is only made worse by the increasing availability of cheap memory, which is then immediately filled with buffered data.

Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim. This summary is so awful, I don't even want to read whatever article you linked to.

Re:Summary so awful, it just hurts. (5, Informative)

Xtifr (1323) | about 2 years ago | (#39938111)

It is definitely a terrible summary, but the ACM article it links to is actually quite interesting. (You do know what the ACM is, don't you?) And bufferbloat has nothing to do with discs, so your objection is completely off base. It certainly would have helped if the summary had given you any idea what bufferbloat is, of course, so I understand your confusion. But it's a real thing. The problem is that the design of TCP/IP includes built-in error correction and speed adjustments. Large buffers hide bottlenecks, making TCP/IP overcorrect wildly back and forth, resulting in bursty, laggy transmission.

Re:Summary so awful, it just hurts. (1)

Anonymous Coward | about 2 years ago | (#39939991)

The checkout line analogy is somewhat flawed also. A better example may be a situation where one has to move soda from soda fountains to a thirsty restaurant crowd with picky drinking habits (and the beverages go stale quickly). One can use bigger or smaller pitchers to move the drinks, but any particular customer can only drink so much at a time. You may get lucky and have a customer take a couple drinks at once, but more likely the server will end up throwing away the almost full pitcher because the drink has gone stale. If there are multiple servers, refilling a large pitcher takes a long time and introduces delay along the entire staff of servers.

Now it may help *individual* servers to use larger and larger pitchers, but this introduces delay for all customers.

Or another analogy is to have individual customers run up to the soda fountain. Some customers carry a pitcher, others just a cup. The pitcher carrying customers take longer to fill up but they benefit by not having to return to the fountain as often. This, however, makes all the other customers wait longer. To cap it off, the pitcher toting customers often end up throwing out half the pitcher.

Or another analogy could be traffic going through a toll booth... Or ants moving sugar from place to place. Or angry hornets in a smoke-infested hornet nest looking for a way to exit while a man in a bee-suit stomps sugar-moving ants from the toll booth operator's cash register... But that last one is admittedly something of a strained analogy.

Re:Summary so awful, it just hurts. (2)

Imagix (695350) | about 2 years ago | (#39938121)

Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance.

You're thinking of caching, not buffering.

Re:Summary so awful, it just hurts. (0)

Anonymous Coward | about 2 years ago | (#39938525)

Bad summary maybe, but it's been covered several times on Slashdot before...

Buffers in this case are network buffers, which help smooth out network traffic. This is great when you have lots of spiky traffic, but under constant full load, a buffer is bad. It's bad in that it does nothing, and with multiple layers of buffering between the users, the delay introduced by buffers can actually be much worse than just dropping and resubmitting the packet. Basically, this is mostly a latency issue, though enough buffers in between can also cause constant timeouts. There are also other issues associated with it, but that is the major gist of it.

http://linux.slashdot.org/story/12/03/28/1439227/linux-33-making-a-dent-in-bufferbloat
http://linux.slashdot.org/story/11/02/26/038249/got-buffer-bloat
http://tech.slashdot.org/story/11/12/03/0218257/bufferbloat-dark-buffers-in-the-internet
http://news.slashdot.org/story/11/05/03/2051251/the-insidious-creep-of-latency-hell
http://tech.slashdot.org/story/11/01/07/0533226/bufferbloat-the-submarine-thats-sinking-the-net

Re:Summary so awful, it just hurts. (0)

Anonymous Coward | about 2 years ago | (#39938569)

The problem is most apps use TCP, and TCP uses dropped packets to detect congestion; but if you buffer packets they don't get dropped, so the congestion control doesn't kick in, the buffers fill up, and you get high latency. Packets are buffered because memory is cheap and misguided people have loaded up routers with it. The solution, because you can't fix the routers, is to use a protocol with one-way-delay-based congestion control at the endpoints, which is what BitTorrent did.
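
The delay-based approach BitTorrent adopted (uTP, following the LEDBAT design) can be sketched roughly as follows; the constants and names here are illustrative, not the actual protocol code. The sender treats any delay above the smallest delay it has ever measured as queueing in some buffer, and backs off before that buffer fills.

    TARGET_DELAY = 0.100   # aim to add at most ~100 ms of queueing delay
    GAIN = 1.0             # how aggressively to react, per RTT

    class DelayBasedSender:
        def __init__(self):
            self.cwnd = 10.0             # congestion window, in packets
            self.base_delay = float("inf")

        def on_ack(self, one_way_delay):
            # The minimum delay ever seen approximates the uncongested path;
            # anything above it is (mostly) queueing delay in some buffer.
            self.base_delay = min(self.base_delay, one_way_delay)
            queuing_delay = one_way_delay - self.base_delay
            # Grow while under target, shrink while over it, so the flow
            # yields before a bloated buffer fills and latency explodes.
            off_target = (TARGET_DELAY - queuing_delay) / TARGET_DELAY
            self.cwnd = max(1.0, self.cwnd + GAIN * off_target / self.cwnd)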

Re:Summary so awful, it just hurts. (1)

1s44c (552956) | about 2 years ago | (#39938731)

Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim. This summary is so awful, I don't even want to read whatever article you linked to.

I call bullshit on people calling bullshit on things without putting in the effort to even try and understand the story. It's about buffering in network devices causing excessive lag, as TCP/IP wasn't built to handle large amounts of invisible store-and-forward buffering between endpoints. Huge caches don't always help performance; it depends on the nature of the thing being cached.

'i call bullshit' is a stupid phrase too.

Re:Summary so awful, it just hurts. (0)

Anonymous Coward | about 2 years ago | (#39939701)

At least it's nowhere near as vapid, unintelligent, and confrontational as 'citation needed.'

You can always tell when you've found a real dick hole of an idiot, when someone goes around responding with 'citation needed.'

Honestly, I would rather people say what they mean, such as 'I call bullshit,' than pretend they are the smartest, smuggest guy in the room while indignantly implying 'fuck you, prove it' behind the words 'citation needed.' Without fail, you can always identify the dumbest, most egocentric person in the room when they go around citing 'citation needed.'

At least someone declaring 'I call bullshit' is making a fair and honest effort to communicate their position.

Re:Summary so awful, it just hurts. (0)

Anonymous Coward | about 2 years ago | (#39938841)

Now your ignorance is there for the entire Internet to see. Are you happy now?

Re:Summary so awful, it just hurts. (0)

Anonymous Coward | about 2 years ago | (#39938867)

These researchers:
http://www.bufferbloat.net/

Pretty much any buffers that data has to travel through between origin and destination

Re:Summary so awful, it just hurts. (1)

realityimpaired (1668397) | about 2 years ago | (#39939369)

Buffering is a way of speeding servers up immensely. Memory is orders of magnitude faster than disk, and piling RAM on and creating huge caches can only help performance. I call bullshit on your entire claim

What a coincidence.... try setting your cache to 10GB and surf for a few weeks, let it fill up. Then try turning off cache in your browser, and see how much faster things load.... if you're on a remotely broadband connection (more than about 1mbit), the difference will be enormous. With a broadband connection, it's faster to fetch the page than it is to search through your cache to see if you have a copy of the page locally and then load it.

When it's so noticeable at the individual browser level, why would moving the cache/buffer off your LAN be any different for your subjective browsing experience?

Numbers & market incentives (5, Interesting)

Logic and Reason (952833) | about 2 years ago | (#39938051)

We all can see that the Internet is getting slower.

Can we? I'd suggest that most people are unaware of any such trend, perhaps because it has happened too gradually and too unevenly. Indeed:

A full solution has to include raising awareness so that the relevant vendors are both empowered and given incentive to market devices with buffer management.

Exactly. Consumers don't know or care about low latency, so the market doesn't deliver it (that plus lack of competition among ISPs in general, but that's another kettle of fish).

We need a simple, clear way for ISPs to measure latency. It needs to boil down to a single number that ISPs can report alongside bandwidth and that non-techies can easily understand. It doesn't need to be completely accurate, and can't be: ISPs will exaggerate just like they do with bandwidth, just like auto manufacturers do with fuel efficiency, etc. What matters is that ISPs can't outright make up numbers, so that a so-called "40 ms" connection will reliably have lower average latency than a "50 ms" connection. That should be enough for the market to start putting competitive pressure on ISPs.

What kind of measure could be used for this purpose? Perhaps some kind of standardized latency test suite, like what the Acid tests were to web standards compliance? Certainly there would be significant additional difficulties, but could it be done?
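
A crude prototype of such a measurement is easy to write. The sketch below just reports the median TCP-handshake time to a handful of endpoints; the target list and the single-number reduction are entirely invented, and a real test would also have to measure under load, since bufferbloat only shows up when the link is saturated.

    import socket
    import statistics
    import time

    # Hypothetical probe targets; a real test suite would standardize these.
    TARGETS = [("example.com", 80), ("example.net", 80), ("example.org", 80)]

    def probe(host, port, timeout=2.0):
        t0 = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - t0) * 1000  # handshake RTT, in ms

    def headline_latency(samples_per_target=5):
        rtts = []
        for host, port in TARGETS:
            for _ in range(samples_per_target):
                try:
                    rtts.append(probe(host, port))
                except OSError:
                    pass  # unreachable targets just shrink the sample
        # Median resists outliers; this is the "one number" an ISP would report.
        return statistics.median(rtts) if rtts else None

    if __name__ == "__main__":
        ms = headline_latency()
        print(f"headline latency: {ms:.0f} ms" if ms is not None
              else "no targets reachable")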

Re:Numbers & market incentives (0)

Anonymous Coward | about 2 years ago | (#39938241)

Saying consumers don't know or care about latency, in times when online/internet multiplayer videogames have become so mainstream, is ridiculous.

Re:Numbers & market incentives (0)

Anonymous Coward | about 2 years ago | (#39938815)

How about saying consumers don't care enough to pay for more bandwidth and QoS? Actually, the home user shouldn't even be a factor. Basing your business on "commodity" Internet access is a mistake. The average home user is a fickle, technologically incompetent whiner that costs more than he pays.

Re:Numbers & market incentives (1)

SuricouRaven (1897204) | about 2 years ago | (#39939059)

They can care without knowing; they just don't know what they care about. The end user doesn't care that 'my round-trip time even to the DNS server is over a hundred milliseconds.' The end user does care that 'those webpages take too damn long to load!' or 'I keep dying in those games because people jump around when I try to shoot them!'

Re:Numbers & market incentives (1)

rioki (1328185) | about 2 years ago | (#39939117)

Using a single latency value for your connection is kind of nonsense, since it makes a big difference whether I connect to a server in my country or on the other side of the world. (That is why CDNs even exist.) Especially since your ISP can only influence their own end, and maybe the peering partners they operate with.

But honestly, I think a basic rating model could be devised, similar to energy-efficiency ratings. Something with grades from A-F, based on how much latency they introduce.

Their problem setup is a speed boundary transition (3, Interesting)

tlambert (566799) | about 2 years ago | (#39938101)

The boundary they are transiting is one between a fast network and a slower network, similar to what you see at a head-end at a LATA or broadband distribution point and leaf nodes like people's houses, or, on the other end, on the pipe into a NOC with multi-gigabit interconnects much bigger than the pipes into or out of the NOC.

The obvious answer is the same as it was in 1997 when it was first implemented on the Whistle InterJet: lie about the available window size on the slow end of the link so as to keep the fast end of the link from becoming congested by having all its buffers filled up with competing traffic.

In this way, even if you have tasks which would otherwise eat all of your bandwidth (at the time, it was mostly FTP and SMTP traffic), you can still set aside enough buffer space on the fast side of the router on the other end of the slow link to let ssh or HTTP traffic streams make it through. Beats the heck out of things like AltQ, which do absolutely nothing to prevent a system with a fast link that has data to send you crap-flooding the upstream router so that it has no free buffers to receive any other traffic, and which it can't possibly hope to shove down the smaller pipe at the rate it's coming in the large one.
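
The window-lying trick amounts to rewriting one 16-bit field in forwarded TCP headers. Here is a conceptual Python sketch; the cap value is invented, and a real middlebox would also have to honour window scaling negotiated in the SYN options and patch up the TCP checksum after the rewrite.

    import struct

    # Cap on the receive window the slow side is allowed to advertise
    # upstream, sized so the fast sender can't keep more than a small
    # amount of data queued in front of the slow link. Illustrative value.
    WINDOW_CAP = 2048  # bytes

    def clamp_tcp_window(tcp_header: bytes) -> bytes:
        """Rewrite the advertised window of a forwarded TCP segment.
        Sketch only: ignores window scaling and checksum fix-up."""
        window, = struct.unpack_from("!H", tcp_header, 14)  # bytes 14-15
        if window <= WINDOW_CAP:
            return tcp_header
        clamped = bytearray(tcp_header)
        struct.pack_into("!H", clamped, 14, WINDOW_CAP)
        return bytes(clamped)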

Ideally this would be cooperatively managed, as was suggested at one point by Jeff Mogul (which is likely barred due to the lack of a trust relationship between the upstream and downstream routers, if nothing else). Think of it like half your router lives at each end of the link wire, instead of both sides living on one end.

It's the job of the device on the border, which happens to know there's a pipe-size differential, to control what it asks for from the upstream side in terms of the outstanding buffer space it's possible for inbound packets to consume (and to likewise lie about the upstream windows to the downstream higher-speed LAN on the other end of the slow link).

I'm pretty sure Julian Elischer tried to push the patches for lying about window size out to FreeBSD as part of Whistle contributing Netgraph to FreeBSD.

While people are looking at that, they might also want to reconsider the TCP Rate Halving work at CMU, and the LRP implementation from Peter Druschel's group out of Rice University.

-- Terry

Re:Their problem setup is a speed boundary transit (1)

jg (16880) | about 2 years ago | (#39940277)

It is *any* transition from fast to slow, including from your computer to your wireless link, or, in the other direction, from your home router to your computer.

Bufferbloat is an equal opportunity destroyer of time.

Paper is ambiguous about what gets dropped (3, Insightful)

Animats (122034) | about 2 years ago | (#39938165)

It's not clear from the paper whether packet dropping is per-flow, in some fair sense, or per link. There's a brief mention of fairness, but it isn't explored. It sounds like the new approach has no built-in fair queuing.

Without fair queuing, whoever sends the most gets the most data through. Windows (especially) starts up TCP connections by sending as many packets as it can at connection opening. There used to be a convention in TCP called "slow start", where new connections started up sending only two packets, increasing if the round trip time turned out to be good. That was too pessimistic. But Windows now starts out by blasting out 25 or so packets at once. This hogs the available bandwidth through everything with FIFO queues.

If the routers at choke points (where bandwidth out is much less than bandwidth in, like the entry to a DSL line) do fair queuing by flow, the problem gets dealt with there, as the excessive sending fights with itself, trailing packets on the biggest flows are sent last, and everything works out OK.
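
Per-flow fair queuing at the choke point can be sketched with deficit round robin: hash packets into per-flow queues and serve the queues in rotation, so the trailing packets of a fat flow wait behind that flow's own backlog rather than in front of everyone else's. The quantum and names below are illustrative.

    from collections import deque, defaultdict

    QUANTUM = 1514   # bytes each flow may send per round (one Ethernet frame)

    class DRRScheduler:
        def __init__(self):
            self.flows = defaultdict(deque)   # flow_id -> deque of (length, packet)
            self.deficit = defaultdict(int)   # unspent byte budget per flow
            self.active = deque()             # round-robin order of backlogged flows

        def enqueue(self, flow_id, packet, length):
            if not self.flows[flow_id]:
                self.active.append(flow_id)
                self.deficit[flow_id] = 0
            self.flows[flow_id].append((length, packet))

        def dequeue(self):
            while self.active:
                fid = self.active[0]
                length, packet = self.flows[fid][0]
                if length <= self.deficit[fid]:
                    # Within budget: send this flow's head-of-line packet.
                    self.deficit[fid] -= length
                    self.flows[fid].popleft()
                    if not self.flows[fid]:
                        self.active.popleft()    # flow went idle
                    return packet
                # Budget spent: grant another quantum and rotate to the next
                # flow, so a fat flow's backlog can't starve the thin flows.
                self.deficit[fid] += QUANTUM
                self.active.rotate(-1)
            return None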

"Bufferbloat" is only a problem when a small flow gets stuck behind a big one. A flow getting stuck behind the preceding packets of the same flow is fine; you want those packets delivered. Packet reordering is better than packet dropping, although more computationally expensive. Most CIsco routers offer it on slower links. Currently, this means links below 2Mb/s [cisco.com], which is very slow by modern standards. That's why we still have kludgy solutions like RED. This new thing is a better kludge, though.

Re:Paper is ambiguous about what gets dropped (1)

sourcerror (1718066) | about 2 years ago | (#39939157)

If the routers at choke points (where bandwidth out is much less than bandwidth in, like the entry to a DSL line) do fair queuing by flow, the problem gets dealt with there, as the excessive sending fights with itself, trailing packets on the biggest flows are sent last, and everything works out OK.

Yeah, that's a good idea, and also offers a solution for QoS sensitive services like VoIP.

On the other hand, when deciding on the size of the queue, it should also be considered whether there are alternative routes. When no alternatives exist, a larger queue might be warranted.

Re:Paper is ambiguous about what gets dropped (1)

jg (16880) | about 2 years ago | (#39940355)

AQMs don't usually look at the contents of what they drop/mark.

We expect CoDel to be running on any bulk data queue; voip traffic, properly classified, would be in an independent queue, and not normally subject to policing by a CoDel.

While 10 years ago a decent AQM like CoDel might have been able to get latencies down to where they should be for most applications, browsers' abuse of TCP, in concert with hardware features such as smart NICs that send line-rate bursts of packets from single TCP streams, has made me believe we must also do fair queuing/classification to get the latencies (actually, jitter) where they need to be, given these "bursts" of arriving packets.

Re:Paper is ambiguous about what gets dropped (2)

jg (16880) | about 2 years ago | (#39940325)

The article's subtitle is: "A modern AQM is just one piece of the solution to bufferbloat." We certainly expect to be doing fair queuing and classification in addition to AQM in the edge of the network (e.g. your laptop, home router and broadband gear). I don't expect fair queuing to be necessary in the "core" of the network.

I'll also say that an adaptive AQM is an *essential* piece of the solution to bufferbloat, and a piece we've had no good solution to (until, we think, now).

That's why this article represents "fundamental progress".

attack much? (1)

slashmydots (2189826) | about 2 years ago | (#39938195)

This seems like such an unstable system that it's practically a security issue. Could someone, in theory, purposely send bad traffic to as many internet relays (or whatever) as possible, causing them to stall and shutting down huge chunks of the internet?

Re:attack much? (1)

1s44c (552956) | about 2 years ago | (#39938757)

This seems like such an unstable system that it's practically a security issue. Could someone, in theory, purposely send bad traffic to as many internet relays (or whatever) as possible, causing them to stall and shutting down huge chunks of the internet?

You want to DDOS core routers? You would need an insane amount of bandwidth. The only company that has enough malware on enough computers is Microsoft; they demonstrated this with SQL Slammer.

Re:attack much? (0)

Anonymous Coward | about 2 years ago | (#39939527)

This seems like such an unstable system that it's practically a security issue. Could someone, in theory, purposely send bad traffic to as many internet relays (or whatever) as possible, causing them to stall and shutting down huge chunks of the internet?

In simple terms, the router can tell when traffic is being directed at it vs. traffic which is transiting through it. Any decent network will have policies in place on the router which only accept traffic directed at it specifically if there are available resources; otherwise it gets dropped.
A common example of this can be seen when running traceroutes. Often you will see one hop in a chain time out or have a very high response time, even though responses from all the routers beyond it are normal. This is because the router is dropping packets directed at the router itself but continuing to forward transit traffic.

Read that as Controlling Butterfloat (0)

Anonymous Coward | about 2 years ago | (#39938223)

I was wondering what a butterfloat was, how delicious it might be, and whether it was anything like butterbeer.

I was excited until re-reading.

My internets fine (4, Insightful)

rhade (709207) | about 2 years ago | (#39938357)

We all can see that the Internet is getting slower

*Citation needed* Have you tried turning your modem off and on again?

You know as a species you're doing it wrong when (3, Funny)

clickclickdrone (964164) | about 2 years ago | (#39938533)

My first thought after reading the story was 'Hope whoever patents those ideas doesn't charge too much for them."

Re:You know as a species you're doing it wrong whe (1)

Xtifr (1323) | about 2 years ago | (#39939093)

Heh, I know exactly what you mean. The same thought definitely crossed my mind. Fortunately, if you read carefully, you'll see that they seem to be releasing their code as open source.

"The open source project CeroWrt is using OpenWrt to explore solutions to bufferbloat. A CoDel implementation is in the works, after which real-world data can be studied. We plan to make our ns-2 simulation code available, as well as some further results."

Not a guarantee, but it sounds promising.

Re:You know as a species you're doing it wrong whe (0)

Anonymous Coward | about 2 years ago | (#39939187)

1. Have brilliant idea
2. Patent it
3. Create open source implementation
4. Wait for widespread adoption
5. ????
6. Profit!

New Q Tech Useless if TOS Ignored (1)

sociocapitalist (2471722) | about 2 years ago | (#39938689)

If your ISP doesn't respect your TOS values then you're only ever going to get best effort service.

Changing the queuing technology in the network won't help, because your traffic is all going into the 'whatever is left' queue at the bottom of the priority stack (or next to the bottom, if your ISP has implemented worse-than-best-effort queuing to control p2p and worm traffic), below the provider's own traffic (i.e. voice and video services they sell) and any business traffic where the business customer has paid for guarantees (for whatever that's actually worth, depending on the provider).

Re:New Q Tech Useless if TOS Ignored (1)

sociocapitalist (2471722) | about 2 years ago | (#39938713)

I should expand this to say that even if your provider respects your TOS, the other ISPs in the path of your traffic probably are not respecting it.

I should also add that Joe Consumer is generally not trusted with regards to TOS, and the provider(s) will protect themselves (and business customers) against accidental or deliberate mismarking of traffic (i.e. you setting your p2p traffic to TOS 'network control').

Re:New Q Tech Useless if TOS Ignored (1)

SuricouRaven (1897204) | about 2 years ago | (#39939095)

If users are allowed to set their own ToS/QoS, everything would just be set to the highest available priority. Even if the users didn't know how, there would be plenty of companies willing to sell a TOS-setting program as 'InterNet Accelerator 2012' - and plenty of people willing to buy it if someone said it worked, even if they don't know how.

Alternative uses (1)

mork (62099) | about 2 years ago | (#39938857)

> But you can avoid the worst problems by having someone actively managing the checkout
> queues, and this is the solution for bufferbloat as well: AQM (Active Queue Management).

Can someone please implement this system at Heathrow to reduce the queues there?

fishy (0)

Anonymous Coward | about 2 years ago | (#39939463)

Smells like packet shaping by stealth to me. New hardware providing a base infrastructure to kill net neutrality.

REDUCE latency (0)

Anonymous Coward | about 2 years ago | (#39939485)

This paper is total BS: the only real remedy would be to REDUCE latency.

But of course, our telcos and ISPs want to keep people from using the internet for low-latency applications, like telephony, so that they can charge more for them. And thus, they are financing research that produces results like this paper.

Better name (0)

Anonymous Coward | about 2 years ago | (#39939947)

Should have called it "AQtive Queue Management".

It's not a big truck. It's a series of tubes. (1)

scire9 (1029348) | about 2 years ago | (#39940703)

And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material.

we need roundabouts (0)

Anonymous Coward | about 2 years ago | (#39940743)

let's get rid of all the semaphores and put in roundabouts..

Well... no not really (0)

Anonymous Coward | about 2 years ago | (#39941161)

Nope. I never saw that it was going slower. The original poster immediately fails in the first line of text. Honestly it seems as fast or faster than ever lately.

So basically i didn't read the rest of the article because

YOU ARE STUPID
