
Linux 3.3: Making a Dent In Bufferbloat?

Unknown Lamer posted more than 2 years ago | from the drop-15-packets-in-ten-minutes-a-day dept.

mtaht writes "Has anyone, besides those who worked on byte queue limits and SFQRED, had a chance to benchmark networking with these tools on the Linux 3.3 kernel in the real world? A dent, at least theoretically, seems to have been made in bufferbloat, and now that the new kernel and the new iproute2 are out, it should be easy to apply them in general (e.g. server/desktop) situations." Dear readers: Have any of you had problems with bufferbloat that were alleviated by the new kernel version?
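
A quick way to see whether a given machine is even exposing the new byte queue limits is to look at the sysfs knobs Linux 3.3 added for BQL-capable drivers. The sketch below is a minimal reader of that state; it assumes the standard byte_queue_limits sysfs layout and uses "eth0" purely as a placeholder interface name.

```python
#!/usr/bin/env python3
"""Minimal sketch: dump the Byte Queue Limit (BQL) state that Linux 3.3+
exposes via sysfs for drivers that support it. 'eth0' is a placeholder."""
import glob
import os

IFACE = "eth0"  # assumption: replace with your own NIC name

pattern = f"/sys/class/net/{IFACE}/queues/tx-*/byte_queue_limits"
for qdir in sorted(glob.glob(pattern)):
    queue = qdir.split("/")[-2]  # e.g. "tx-0"
    state = {}
    for name in ("limit", "limit_max", "limit_min", "inflight"):
        path = os.path.join(qdir, name)
        if os.path.exists(path):
            with open(path) as f:
                state[name] = f.read().strip()
    print(queue, state)
```

If the byte_queue_limits directory is missing, the driver simply hasn't been converted to BQL yet.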

Yes. (4, Funny)

busyqth (2566075) | more than 2 years ago | (#39497149)

I've had all sorts of trouble with bloat of all kinds since I turned 40.
You name it, it's become bloated: buffers, bellies, butts, pretty much everything.

Re:Yes. (2)

couchslug (175151) | more than 2 years ago | (#39499947)

Everything gets fatter, hairier, and closer to the ground....

Re:Yes. (2)

dougmc (70836) | more than 2 years ago | (#39501013)

Everything gets fatter, hairier, and closer to the ground....

The top of my head is less hairy, and about the same width and distance from the ground as it was 20 years ago ...

Re:Yes. (2)

Paul Fernhout (109597) | more than 2 years ago | (#39501781)

Fill your main input buffer with lots of vegetables and you may see the other buffers shrink. :-)

What is bufferbloat? (5, Informative)

stillnotelf (1476907) | more than 2 years ago | (#39497199)

TFS doesn't mention, and it's hardly an obvious term. From TFA:

Bufferbloat...is the result of our misguided attempt to protect streaming applications (now 80 percent of Internet packets) by putting large memory buffers in modems, routers, network cards, and applications. These cascading buffers interfere with each other and with the flow control built into TCP from the very beginning, ultimately breaking that flow control, making things far worse than they’d be if all those buffers simply didn’t exist.

16550A (5, Insightful)

Anonymous Coward | more than 2 years ago | (#39497293)

In my day, if your modem had a 16550A UART protecting you with its mighty 16-byte FIFO buffer, you were a blessed man. That little thing let you potentially multitask. In OS/2, you could even format a floppy disk while downloading something, thanks to that sucker.

Re:16550A (4, Insightful)

tibit (1762298) | more than 2 years ago | (#39497565)

Floppy disk formatting requires very little CPU resources. You should have had no problem receiving bytes even at 57600 baud into a buffer using an 8250 UART (with its one-byte receive buffer) all the while formatting a floppy disk, even on the original IBM PC. You'd probably need to code it up yourself, of course. I don't recall the BIOS having any buffering UART support, nor do I recall many BIOS implementations being any good at not disabling interrupts.

I wrote some 8250/16550 UART driver code, and everything worked fine as long as you didn't run stupid code that kept interrupts disabled, and as long as the interrupt priorities were set up right. In normal use, the only high-priority interrupts would be UART receive and floppy controller, and sound card if you had one. Everything else could wait with no ill effects.

Re:16550A (4, Informative)

Trixter (9555) | more than 2 years ago | (#39497827)

Floppy disk formatting requires very little CPU resources. You should have had no problem receiving bytes even at 57600 baud into a buffer using an 8250 UART (with its one-byte receive buffer) all the while formatting a floppy disk, even on the original IBM PC.

...unless the serial data came in while the floppy interrupt handler was already in progress. In such a situation, the serial handler must wait until the floppy handler is finished, and depending on what the floppy handler is doing, it could take long enough that more serial data would be delayed or lost. And for those of us who tried to do things like download files directly to floppy disks on slower PCs in the 1980s, this was a regular occurrence.

The 16550A UART's 16-byte buffer meant that several bytes could come in before the serial interrupt needed to be handled again, allowing serial communications to run at full speed for longer time periods before needing to be emptied. This made a world of difference working on slower machines writing to floppies (and faster machines trying to download something in the background while in a multitasking environment).

Re:16550A (4, Insightful)

tibit (1762298) | more than 2 years ago | (#39499325)

...unless the serial data came in while the floppy interrupt handler was already in progress.

Interrupt handlers are supposed to do a minimal amount of work and relegate the rest to what is called the bottom half (in Linux parlance). When writing the code, you time it and model the worst case -- for example, a floppy interrupt being raised "right before" the serial input becomes available. If there are cases where it may not work, you absolutely have to have workarounds: either you can redo the floppy operation while losing some performance, or you suspend floppy access while data is coming in, etc. There's no handwaving allowed, in spite of a lot of software being designed just that way.
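
The top-half/bottom-half split described above can be sketched in ordinary user-space code: an "interrupt handler" that only queues the incoming byte, and a deferred worker that does the slow processing. This is a toy illustration only -- the names isr and bottom_half are made up here, not a real driver interface.

```python
"""Toy sketch of the top-half / bottom-half split: the 'ISR' only moves a
byte into a queue and returns; a worker thread (the 'bottom half') does the
slow per-byte work later, outside interrupt context."""
import queue
import threading
import time

rx_fifo = queue.Queue()

def isr(byte):
    """Top half: do the bare minimum -- stash the byte and return at once."""
    rx_fifo.put(byte)

def bottom_half():
    """Bottom half: drain the queue and do the expensive processing."""
    while True:
        byte = rx_fifo.get()
        if byte is None:          # sentinel: shut down the worker
            break
        time.sleep(0.001)         # stand-in for slow per-byte processing
        print(f"processed byte {byte}")

worker = threading.Thread(target=bottom_half)
worker.start()
for b in b"hello":
    isr(b)                        # cheap; never blocks on the slow work
rx_fifo.put(None)
worker.join()
```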

Re:16550A (0)

Anonymous Coward | more than 2 years ago | (#39504743)

During the floppy disk operation "format track", you must suspend all other interrupts besides the real-time clock. That code is critical.

Re:16550A (1)

tibit (1762298) | more than 2 years ago | (#39505393)

Not when you are talking to a PD765 or Intel 82072/7, like you would on PCs. Those run their own microcode. You prepare a DMA buffer with sector IDs for each sector in the track, then you fire off a command, and the controller does its thing in the background. It will interrupt when the track has already been formatted. When formatting, the fastest you can go is two revolutions: one for formatting, another one for the seek. Those floppy controllers don't allow you to start formatting mid-track IIRC, although feel free to give references stating otherwise if I'm wrong.

Re:16550A (0)

Anonymous Coward | more than 2 years ago | (#39504845)

True if you were just in terminal mode, browsing a BBS, but that's why Ward Christensen of IBM developed X-Modem, so that I could download all the shit I wanted from the Lunatic Fringe while still saving to floppy :-)

Re:16550A (-1, Offtopic)

Anonymous Coward | more than 2 years ago | (#39497881)

Thank you for that.

Can you also please share your penis size, and any other pieces of information that will help us understand how awesome and superior you are?

Re:16550A (0, Flamebait)

Anonymous Coward | more than 2 years ago | (#39497939)

It wouldn't take much superiority to be superior to you, asshole.

Re:16550A (0, Troll)

Anonymous Coward | more than 2 years ago | (#39498709)

Nor to you, asshole.

(fractal progression to an infinity of -1s begins here)

Re:16550A (1)

tibit (1762298) | more than 2 years ago | (#39499349)

I'm merely stating facts. FIFOs in UARTs and other I/O devices help with performance: you only pay one interrupt entry/exit overhead per FIFO access, not per byte. A lot of software is poorly written, that's a fact, so just because things wouldn't work right with MS-DOS and the BIOS as supplied doesn't mean the hardware was entirely to blame.
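
Some rough arithmetic puts numbers on the interrupt-overhead point above. It assumes 8N1 framing (10 bits on the wire per byte) and the 16550A's common 14-byte receive trigger level -- ordinary assumptions, not figures from the thread.

```python
"""Back-of-the-envelope interrupt rates at 57600 baud, assuming 8N1 framing
(10 bits per byte on the wire) and a 14-byte 16550A receive trigger level."""
baud = 57600
bits_per_byte = 10                  # start bit + 8 data bits + stop bit
bytes_per_sec = baud / bits_per_byte

irq_8250   = bytes_per_sec          # 8250: one interrupt for every byte
irq_16550a = bytes_per_sec / 14     # 16550A: one interrupt per ~14 bytes

print(f"{bytes_per_sec:.0f} bytes/s on the wire")
print(f"8250:   ~{irq_8250:.0f} interrupts/s")
print(f"16550A: ~{irq_16550a:.0f} interrupts/s")
```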

Re:16550A (1)

Ed Avis (5917) | more than 2 years ago | (#39507419)

Floppy disk formatting requires very little CPU resources.

Unless you use the amazing 2MGUI, which lets you cram more data onto floppy disks, formatting HD 3.5-inch floppies to over two megabytes. It drives the floppy directly somehow, chewing up CPU time - which means it works under DOS and WinDOS only.

(There are other programs, such as the companion 2M, which don't require CPU time, but 2MGUI is the record holder for highest capacity.)

Re:16550A (1)

tibit (1762298) | more than 2 years ago | (#39508935)

There is no such thing as driving a floppy "directly somehow". All it can do is put bytes into a DMA buffer, then put bytes into the command register area, and let the controller do its thing. 2M was written for DOS; it needs to talk directly to the floppy controller, which is why you need to run it under DOS, not Windows. I doubt it chews up CPU time for anything but polling -- it has nothing better to do anyway, as DOS is a single-tasking OS. While you're formatting a floppy it wouldn't be very useful to let the command processor continue; of course the formatter could be a TSR, but then it'd need very extensive hooks into the OS to prevent any access to the floppy while it's being formatted.

It's fairly trivial to get the same effect on Linux, which has a flexible enough floppy driver IIRC. All you need is to set up the sector IDs in a certain order (for speed), and to use larger sectors and smaller gaps. All this is supported by the floppy controller, and the floppy controller does all the heavy lifting. You can get rid of sector gaps if you enforce rewriting of the entire track at once; that's reasonable anyway if you have decent block buffers and an elevator between the user and the device -- like you do on Linux!

Re:16550A (0)

Anonymous Coward | more than 2 years ago | (#39498071)

or hide a virus in it to activate it again when Virus scanners passed by...... Ahhh.. good times....

Re:What is bufferbloat? (4, Interesting)

tayhimself (791184) | more than 2 years ago | (#39497435)

The ACM did a great series on bufferbloat:
http://queue.acm.org/detail.cfm?id=2071893 [acm.org] and http://www.bufferbloat.net/projects/bloat/ [bufferbloat.net]

Re:What is bufferbloat? (0)

Anonymous Coward | more than 2 years ago | (#39497533)

It's also been a pet peeve [cringely.com] of Robert X. Cringely for a while now. In his 2011 tech predictions he said bufferbloat would come to be perceived as a growing problem and steps would have to be taken to deal with it.

Re:What is bufferbloat? (1)

Lennie (16154) | more than 2 years ago | (#39501131)

One way to combat the problems of bufferbloat is for the most-used websites to add support for SPDY. Using one TCP connection per website, instead of 6 connections per domain with domain sharding across 6 domains, helps reduce the problem. Obviously that doesn't solve P2P.

The Apache module mod_spdy is in beta, and the nginx developers mentioned on Twitter that they expect to have something in May.

Firefox 11 and Chrome already support it (they use the same SSL/TLS library, so it was probably easier to port to Firefox than to any other browser; the library was originally developed by Netscape, I believe). But it is disabled by default in Firefox 11 as it is the first release with SPDY. That will probably change in Firefox 12 or Firefox 13.

That would put more than 50% of browser and server market share behind SPDY.

The recent OpenSSL stable release, 1.0.1, also supports NPN (Next Protocol Negotiation), which those servers also need.

Re:What is bufferbloat? (1)

Zebedeu (739988) | more than 2 years ago | (#39506451)

But it is disabled by default in Firefox 11 as it is the first release with SPDY. That will probably change in Firefox 12 or Firefox 13.

So, next week then.

Hm (2)

Mitchell314 (1576581) | more than 2 years ago | (#39497305)

Has there been widespread empirical analysis of bufferbloat? Particularly by device manufacturers?

Re:Hm (4, Insightful)

Anonymous Coward | more than 2 years ago | (#39497469)

Has there been widespread empirical analysis of bufferbloat?

No - it is a meme started by one guy ( he of X11 protocol fame ) and there is still quite a sceptical audience.

If your TCP flow-control packets are subject to QoS prioritisation ( as they should be ) then bufferbloat is pretty much moot.

Re:Hm (5, Insightful)

nosh (213252) | more than 2 years ago | (#39497593)

People might be sceptical about how big the problem is, but the analysis itself and the diagnosis are sound. Most people are only surprised they did not think of it before.

The math is simple: if you have a buffer that is never empty, every packet has to wait in the buffer. If you have a buffer that is full all the time, it serves no purpose but to delay every packet. And given that RAM got so cheap that buffers in devices grew much faster than bandwidth, you now often have buffers big enough to hold several seconds' worth of packets. Such a buffer running in always-full mode means high latency for no gain.

All the additional factors -- TCP going haywire when latency is too high, and no longer being able to work out the optimal sending rate if no packets ever get dropped -- only make the situation worse, but the basic point is simple: a buffer that is always full is a buffer with only downsides, and the more so the bigger it is....
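
To put the "several seconds' worth of packets" claim in concrete terms, the standing delay of a buffer that is kept full is just its size divided by the link rate. The buffer sizes and link speeds below are illustrative examples, not figures from the comment.

```python
"""Worst-case queueing delay of a buffer that is kept full, for a few
illustrative buffer sizes and uplink speeds (example numbers only)."""
def standing_delay_ms(buffer_bytes, link_bits_per_sec):
    return buffer_bytes * 8 / link_bits_per_sec * 1000

for buf_kb in (32, 256, 1024):
    for mbps in (1, 10, 100):
        delay = standing_delay_ms(buf_kb * 1024, mbps * 1_000_000)
        print(f"{buf_kb:5d} KB buffer on a {mbps:3d} Mbit/s link: {delay:8.1f} ms")
```

A 256 KB buffer in front of a 1 Mbit/s uplink, for instance, works out to roughly two seconds of delay once it stays full.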

Re:Hm (1)

YesIAmAScript (886271) | more than 2 years ago | (#39498273)

Why would my buffer be never empty? Just because you have more buffers doesn't mean you process anything slower than you have to. It just means if you can't get around to something immediately, you can catch up later.

The problems with bufferbloat seem to me to be, at the least, greatly exaggerated, and poor examples like yours are just one reason why.

Re:Hm (5, Informative)

RulerOf (975607) | more than 2 years ago | (#39498833)

Why would my buffer be never empty? Just because you have more buffers doesn't mean you process anything slower than you have to. It just means if you can't get around to something immediately, you can catch up later.

That's exactly the problem. TCP relies on packets being dropped in order to manage connections. When buffers are instead allowed to fill up, delaying packets instead of outright dropping them, the application relying on those packets experiences extremely high latency instead of being rate-limited to fit inside of the available bandwidth.

The problem has come to pass because of how counterintuitive this really is. It's a GOOD THING to discard data you can't transfer RIGHT NOW, rather than wait around and send it later.

I suppose one of the only analogs I can think of might be the Blackbird stealth plane. Leaks like a sieve on the ground, spitting fuel all over the place, because at altitude the seals expand so much that they'd pop if it hadn't been designed to leak on the ground. Using gigantic packet buffers would be like "fixing" a Blackbird so that it didn't leak on the runway.

Re:Hm (5, Funny)

Kidbro (80868) | more than 2 years ago | (#39501585)

A stealth plane analogy. I didn't see that coming!

(very informative post, btw - thank you:)

Re:Hm (1)

gottabeme (590848) | more than 2 years ago | (#39505253)

The SR-71 is not a stealth aircraft.

Re:Hm (1)

julesh (229690) | more than 2 years ago | (#39506737)

The SR-71 is not a stealth aircraft.

http://en.wikipedia.org/wiki/Lockheed_SR-71_Blackbird#Stealth_and_threat_avoidance [wikipedia.org]

It was an early, not entirely successful, attempt at one. It does have a radar cross section significantly smaller than its actual size, which I think qualifies it for the title, even if other, more recent designs are much better at it.

Re:Hm (1)

gottabeme (590848) | more than 2 years ago | (#39512969)

I suppose it's ultimately a matter of opinion, but I don't think it qualifies. The SR-71 was not intended to sneak past radars undetected. And when it's cruising at Mach 3 at 80,000 feet, it will have a large RCS to radars on the ground.

The SR-71's flyovers were not supposed to be secret from the nations they flew over--that's why it was designed to outrun SAMs.

In contrast, the F-117 and B-2 are explicitly intended to fly past radars undetected. That's stealth.

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39498853)

Because your data will come in at maximum speed. At first, the maximum speed will be whatever it takes to fill the buffer. After that, it'll be whatever speed the data is being consumed at on the other side of the buffer. This isn't an endpoint problem as much as a bridge/switch problem. A bridge or switch will usually have a faster data rate on the receive side than on the transmit side. Your cable modem, your wifi hotspot, and pretty much every link after something like a Google will have an in rate faster than the out rate. You, the piddly cable modem running at 20 Mbps or, worse yet, a laptop sitting on some slow wifi, will be the limiting rate for the entire chain back to, say, a Google.

If you can keep up with the rate that google is sending data, then great! Otherwise, there will be a buffer, somewhere in the chain, that is full. Just hope that it's not too huge.

QOS helps.

See the comment after this one...it's full of interesting.

Re:Hm (4, Informative)

Anonymous Coward | more than 2 years ago | (#39499473)

Let's first assume we do not have TCP. Assume there is one slowest link, like the cable from your house to the internet, while both the internet and the network within your house can instantly handle any load that cable can generate.

Let's take a look at the case where one person in the house (or one of that person's programs) has a long download running (several hours, say). If nothing else is happening, you want to utilize the full cable. Let's further assume that the sending server somehow gets the speed exactly right, so it sends exactly as much as the cable to your house can handle. So far, so good.

Now some other program, or someone else, looks at some sites or downloads a small file that would take one second over the cable (if it were the only traffic). This causes some additional traffic, so you get more data than your cable can handle. Your ISP has a buffer at its side of the cable, so the packets end up in the buffer. But the side you are downloading from still sends as much as the cable can carry, so if the buffer in front of the cable is big enough, you now have exactly one second's worth of packets sitting in it. The download is still running, nothing else is running, and the buffer keeps exactly one second of data in it. Everything still arrives, but everything arrives one second later. There is no advantage to the buffer here; everything is still delayed by a full second. If you had instead dropped that one second of data, the server would have had to retransmit it, arriving a second late. So without the buffer, essentially everything would still arrive at the same time, but any other requests going over the same line would get through immediately and not a full second later.

In reality, nothing sends exactly the amount of data your cable can handle. But it will try (everything, of course, tries to send as fast as it can). If senders manage to somehow measure the problem and slow down so the buffer empties again, the buffer makes sense. If they can keep producing enough data that your buffer never empties, your buffer is too big and is only creating problems.

For bigger problems, now enter TCP (or anything else that wants to introduce reliability to the internet). Some packets might get lost, so you need to retransmit them when they do. For this you wait a bit for the packet, and if it does not arrive, you ask for it to be resent. You cannot wait very long, since the user does not like waiting if it was part of something interactive. So if a packet arrives much too late, the computer will already have sent a request to have it resent. If you have buffers big enough to hold, say, a whole second of data, and the buffers are there in both directions, then the buffers may already contain multiple requests to resend the same packet, and thus multiple copies of the packet sent out (remember, a request to resend a packet might get lost, too). So while adequately sized buffers avoid packets being resent, oversized buffers cause unnecessary resends.

Now enter the specifics of TCP. Sending as fast as possible is solved in TCP by getting faster and faster until you are too fast (detected by too many packets getting lost) and then slowing down again. If there are buffers around that can absorb a large amount of data, one side can send too fast for quite a long time while everything still arrives (sometimes a bit later, but it does). So it gets faster and faster and faster. The latency gets bigger and bigger. Finally the buffer is full, so packets need to be dropped. But you usually drop the packets arriving last, so once this moment arrives, it can still be a long time before the other side realizes something is missing (say a whole second, which is half an eternity for a positronic android ^H^H^H^H^H a computer), and the sending side was still speeding up the whole time. Now all the TCP connections running over that buffer collapse, all of them falling back to a very slow speed, and you have lost a whole lot of packets -- much more than if you had no buffer at all (some of them even multiple times).
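
A tiny discrete-time sketch of the scenario above: a bulk flow has already built a one-second standing queue in front of a 1 Mbit/s bottleneck, and then a single small interactive packet is enqueued behind it. All numbers are illustrative; the point is only that the small request inherits the full standing delay.

```python
"""Toy FIFO drain: one second of bulk data is already queued in front of a
1 Mbit/s bottleneck when a small interactive packet arrives behind it."""
from collections import deque

LINK_BPS = 1_000_000               # bottleneck rate
TICK = 0.001                       # simulate in 1 ms steps
BYTES_PER_TICK = LINK_BPS * TICK / 8

fifo = deque()
standing_bytes = LINK_BPS / 8      # one second's worth of data
while sum(size for _, size in fifo) < standing_bytes:
    fifo.append(("bulk", 1500))    # 1500-byte bulk packets

fifo.append(("interactive", 100))  # our small request, enqueued last

t, budget = 0.0, 0.0
while fifo:
    budget += BYTES_PER_TICK
    while fifo and fifo[0][1] <= budget:
        kind, size = fifo.popleft()
        budget -= size
        if kind == "interactive":
            print(f"interactive packet got through after ~{t * 1000:.0f} ms")
    t += TICK
```

The interactive packet leaves the queue only after the whole standing queue ahead of it has drained -- about a second later.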

Re:Hm (1)

19thNervousBreakdown (768619) | more than 2 years ago | (#39500963)

We did think of this before, though. I remember when I was setting up my first FreeBSD firewall around 2001, and I was annoyed that I couldn't figure out QoS well enough to mitigate it. I noticed that RTT went to shit when the link was saturated, did some quick googling, and read about it on another website, where it was already old news in an outdated HOWTO.

Honestly, the idea that this is some sort of amazing thing some guy noticed is silly--everybody noticed it, and the only exceptional thing is how exceptionally stupid the people who put those massive buffers in had to be if they didn't anticipate the issues. They probably did, and just didn't care because they were designing to a crappy metric, and then it became standard practice. The problems that would be caused should have been obvious to anybody who was designing network stuff, they were obvious to me, a kid who was just figuring out TCP, and they were obvious to whoever it was that was writing the HOWTO I read.

Re:Hm (5, Interesting)

Anonymous Coward | more than 2 years ago | (#39497825)

If your TCP flow-control packets are subject to QoS prioritisation ( as they should be ) then bufferbloat is pretty much moot.

TCP flow-control makes the assumption that dropped packets and retransmission requests are a sufficient method of feedback. When it doesn't receive them, it assumes everything upstream is going perfectly and keeps sending data over (in whatever amounts your QoS setup dictates, but that's not the point).

Except everything upstream may not be going well. Other devices along the way may be masking problems -- instead of saying "I didn't get that, resend", they use their local buffers and tell downstream nothing. So TCP flow-control is being kept in the dark. Eventually a buffer runs out, and that's when that device starts asking for a huge pile of fresh data at once... which is not how it was supposed to work. So the speed of the connection keeps fluctuating up and down even though the pipes seem clear and underused.

The immediate workaround is to trim buffers down everywhere. Small buffers are good... but they shouldn't grow so much as to take on a life of their own, so to speak.

One thing done in Linux 3.3 to start addressing this properly is the ability to express buffer size in bytes (Byte Queue Limits (BQL)). Historically, queues were limited by packet counts only, because they were designed in an era where packets were small... but nowadays they get big too. It gives us better control and somewhat alleviates the problem.

The even better solution is to make buffers work together with TCP flow-control. That would be Active Queue Management (AQM), which is still being developed. It will basically be an algorithm that decides how to adapt the use of buffers to traffic bursts. But a good enough algorithm has not been found yet (there's one, but it's not very good). When it is found it will need testing, then wide-scale deployment and so on. That might still take years.
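
The core idea behind the BQL change mentioned above is easy to sketch: admit packets into the transmit queue while the queued bytes (not the queued packet count) stay under a limit. The class and the limits below are illustrative, not the kernel's actual interface.

```python
"""Sketch of the byte-limited-queue idea behind BQL: a 64 KB byte limit
admits very different packet counts depending on packet size, whereas a
packet-count limit would not adapt at all."""
from collections import deque

class ByteLimitedQueue:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.q = deque()
        self.queued = 0

    def enqueue(self, pkt_bytes):
        if self.queued + pkt_bytes > self.limit:
            return False           # refuse early, so TCP sees the signal soon
        self.q.append(pkt_bytes)
        self.queued += pkt_bytes
        return True

    def dequeue(self):
        pkt = self.q.popleft()
        self.queued -= pkt
        return pkt

big = ByteLimitedQueue(limit_bytes=64 * 1024)
print("1500-byte packets accepted:", sum(big.enqueue(1500) for _ in range(1000)))
small = ByteLimitedQueue(limit_bytes=64 * 1024)
print("64-byte packets accepted:  ", sum(small.enqueue(64) for _ in range(1000)))
```

A fixed limit of, say, 1000 packets would hold 1.5 MB of full-size frames but only 64 KB of small ones; limiting by bytes keeps the worst-case queueing delay roughly constant either way.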

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39498011)

If only we could scrap the whole mess and design a solution for the problems we face currently, instead of continuing to use the solution to the problems they faced 40 years ago. Wishful thinking, I know...

Re:Hm (1)

John Courtland (585609) | more than 2 years ago | (#39498383)

I think it's incredibly naïve to believe that we can, in one atomic action, rip out and replace tcp/ip (or whatever other technology) with something that is "better" for whatever value of the word "better" you assign it to have. An incredible amount of work and research has gone into making things work the way that they do, and not only do they work pretty well, but upgrading them to fix issues like this buffer bloat thing is not some Manhattan Project-esque undertaking, like reengineering the internet would be.

Re:Hm (1)

skids (119237) | more than 2 years ago | (#39499207)

Already designed. It's called ATM AAL5 with early cell discard over an ATM ABR traffic contract. Rejected by the marketplace (minus some niches where it's still used).
Didn't help that the vendors were sluggish implementing the ABR feature, and that the industry didn't realize they needed a loss-leader business strategy to take on IP/Ethernet, so they didn't aggressively pursue low-cost hardware or bridge-to solutions like CIF (f.k.a. FATE) until it was too late.

(And there have been other attempts as well.)

Really, the paucity of queueing algorithms available in modern-day "leading edge" routing platforms is rather shameful.

Re:Hm (1)

dkf (304284) | more than 2 years ago | (#39500897)

If only we could scrap the whole mess and design a solution for the problems we face currently, instead of continuing to use the solution to the problems they faced 40 years ago. Wishful thinking, I know...

There have been many competitors to TCP/IP, but they've all fallen by the wayside because TCP/IP worked better in practice. (I remember OSI networking, but not fondly.) The key is that the internet scales better than the others, and that's made it possible for far more people to be connected to it, which in turn makes it by far the most attractive network to work with in the first place. The killer app of the internet was DNS, and especially its implementation as BIND...

Re:Hm (1)

Anonymous Coward | more than 2 years ago | (#39498043)

The even better solution is to make buffers work together with TCP flow-control. That would be Active Queue Management (AQM), which is still being developed.

Forgive the pessimism, but the reaction which suggests itself is, What Could Possibly Go Wrong by adding yet another layer of cruft?

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39498695)

The even better solution is to make buffers work together with TCP flow-control. That would be Active Queue Management (AQM), which is still being developed.

Forgive the pessimism, but the reaction which suggests itself is, What Could Possibly Go Wrong by adding yet another layer of cruft?

In my day, we just shoved packets through a pipe without any protocols...

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39499727)

In my day, we just shoved data through a pipe without any protocols...

There, fixed that for you.

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39506733)

[...]What Could Possibly Go Wrong by adding yet another layer of cruft?

Well, if we want buffers we'll have to come up with a solution to manage them better. It's either that or going back to tiny buffers like they had in OSs 10-20 years old and phone modems.

But it's not exactly a layer of "cruft". It's a subtle problem that cropped up due to the specific ways we use the Internet today, and it has to be addressed at some point.

I have my own scepticism, but for different reasons. I look at IPv6, a solution to a much more obvious problem, and how we all waited until the very last minute to implement it.

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39500107)

"TCP flow-control makes the assumption that dropped packets and retransmission requests are a sufficient method of feedback. "

Nope, reordered packets give the same signals, and that can be used for flow control without sacrificing bandwidth utilization. In combination with QoS, you have your solution. No big deal really.

Re:Hm (0)

Anonymous Coward | more than 2 years ago | (#39506833)

I'm afraid you give "QoS" too much credit. Traffic shaping is limited to a single node, and only to outgoing traffic. It cannot predict incoming traffic, and outgoing traffic (and shaping) depend on feedback, which it does not get -- no reordering, no retransmission, nothing. It just sees data going out and assumes all is well. Basically, QoS is completely out of the loop.

To address bufferbloat you must have buffers all along the data path dynamically (and smartly) adapt to traffic, and let the upstream nodes know about it too. Currently there's no provision for this; all nodes attempt to cope "blindly", with fixed, predetermined buffer sizes, and to make things worse there's no cooperation -- they don't tell anybody how they're doing.

Similar database buffer bloat (4, Interesting)

Envy Life (993972) | more than 2 years ago | (#39498535)

There is a similar, and well known situation that comes up in database optimization. For example, the Oracle database has over the years optimized its internal disk cache based on its own LRU algorithms, and performance tuning involves a combination of finding the right cache size (there is a point where too much causes performance issues), and manually pinning objects to the cache. If the database is back-ended by a SAN with its own cache and LRU algorithms, you wind up with the same data needlessly cached in multiple places and performance statistics reported incorrectly.

As a result I've run across recommendations from Oracle and other tuning experts to disable the SAN cache completely in favor of the database disk cache. That, or perhaps keep the SAN write cache and disable the read cache, because the fact is that Oracle knows better than the SAN the best way to cache data for the application. Add in caching at the application server level, which involves much of the same data, and we have the same information needlessly cached at many tiers.

Then, of course, every vendor at every tier will tell you that you should keep their cache enabled, because caching is good and of course it doesn't conflict with other caching, but the reality is that caching is not 100% free... there is overhead to manage the LRU chains, do garbage collection, etc. So in the end you wind up dealing with a database buffer bloat issue very similar to Cringely's network buffer bloat. Let's not discount the fact that many server-disk communications are migrating toward communications protocols similar to networks (NAS, iSCSI, etc.). Buffer bloat is not a big deal at home or even on a mid-sized corporate intranet, but for super-high-speed communications like on-demand video, and mission-critical multi-terabyte databases, these things matter.
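
The duplicate-caching point is easy to demonstrate with a toy model: stack two LRU caches (think "database cache" over "SAN cache"), drive them with the same hot working set, and count how many blocks end up held in both tiers. The sizes and the workload here are invented purely for illustration.

```python
"""Toy two-tier LRU: on a database-cache miss the block is read through the
SAN, so both tiers cache it. Capacities and workload are illustrative only."""
from collections import OrderedDict
import random

class LRU:
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()
    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)
            return True
        return False
    def put(self, key):
        self.data[key] = True
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used

db_cache, san_cache = LRU(1000), LRU(1000)
random.seed(1)
for _ in range(100_000):
    block = random.randint(0, 1999)         # 2000-block hot set
    if not db_cache.get(block):             # DB miss: the read hits the SAN,
        san_cache.put(block)                # which caches the block...
        db_cache.put(block)                 # ...and so does the database.

overlap = len(db_cache.data.keys() & san_cache.data.keys())
print(f"blocks held in BOTH caches: {overlap} of {len(db_cache.data)}")
```

The printed overlap shows how much of the lower tier is spent re-caching blocks the upper tier already holds -- memory the comment argues you might as well hand back to the database.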

Re:Similar database buffer bloat (0)

Anonymous Coward | more than 2 years ago | (#39508801)

There is a similar, and well known situation that comes up in database optimization. For example, the Oracle database has over the years optimized its internal disk cache based on its own LRU algorithms, and performance tuning involves a combination of finding the right cache size (there is a point where too much causes performance issues), and manually pinning objects to the cache. If the database is back-ended by a SAN with its own cache and LRU algorithms, you wind up with the same data needlessly cached in multiple places and performance statistics reported incorrectly.

For caches, you want every object that is located in a lower level cache to also be in any higher level cache in the hierarchy. This is the cache inclusion principle and it is what prevents you from having inconsistent data in different caches. If you didn't have the same data in the different levels of cache, then your caches would be broken.

   

Re:Hm (-1)

Anonymous Coward | more than 2 years ago | (#39499397)

If your TCP flow-control packets are subject to QoS prioritisation ( as they should be ) then bufferbloat is pretty much moot.

You clearly have no idea what the hell you just said. You just defined *part* of the problem. But I guess it's easier to pretend you know everything than to actually contribute something to the world.

I swear the people of slashdot get dumber every day. Sorry, but at BEST, your post is a troll.

Re:Hm (1)

Bengie (1121981) | more than 2 years ago | (#39512689)

"If your TCP flow-control packets are subject to QoS prioritisation ( as they should be ) then bufferbloat is pretty much moot."

Are you saying backbone routers should implement QoS?

What about how TCP naturally harmonizes when too many connections start to build?

QoS doesn't solve the latency issue, it just pushes the latency down onto the "lower priority" streams. It still doesn't solve the issue where thousands of TCP connections harmonize and ramp up all at the same time and fill a buffer until packet loss occurs for all streams; then suddenly all of the TCP connections collapse and the link goes from over-utilized to under-utilized.

QoS is just a band-aid.
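
A minimal sketch of the "band-aid" point: with strict-priority scheduling, a high-priority class that arrives at (or above) the link rate simply transfers all of the queueing delay onto the low-priority class. The two classes and the tick-based model below are illustrative only.

```python
"""Strict-priority toy: the high class is always served first, so a single
low-priority packet can sit in the queue indefinitely while the link is
'protected' by QoS. Classes and rates are illustrative."""
from collections import deque

high, low = deque(), deque()
low.append("bulk-packet")            # one low-priority packet waiting

sent_high = 0
for tick in range(1000):
    high.append(f"voip-{tick}")      # high class arrives at >= link rate
    if high:                         # strict priority: always drain high first
        high.popleft()
        sent_high += 1
    elif low:
        low.popleft()

print(f"high-priority packets sent: {sent_high}")
print(f"low-priority packet still waiting after 1000 ticks: {bool(low)}")
```

The latency hasn't gone away; it has just been moved onto whatever traffic was classified as less important.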

Yes... (1)

nweaver (113078) | more than 2 years ago | (#39497761)

Yes there has.

Unfortunately, the analysis is "it's almost all bad". We have seen with Netalyzr some network kit that had properly sized buffers, sized in terms of delay rather than capacity, but the hardware in question (an old Linksys cable modem) was obsolete, and when I bought one and plugged it into my connection, I landed in the cable company's walled garden of 'your cable modem is too obsolete to be used'.

We would encourage all device manufacturers to test their devices with Netalyzr; it can find a lot of bugs, and we would be glad to assist in the testing process.

Re:Hm (1)

Anonymous Coward | more than 2 years ago | (#39497765)

No, because buffer bloat is nonsense; buffers exist because they are necessary for reliable operation. If buffer use averages more than one packet, it is because input bandwidth is greater than output bandwidth; the alternative is data loss.

The buffer bloat guy wants every bit echoed immediately; then the problem will be the propagation of fragments of garbage.

Buffers on the endpoints vs. in the network (4, Informative)

tepples (727027) | more than 2 years ago | (#39498085)

the alternative is data loss

TCP was designed to work around this by putting predictably sized retransmit buffers on the endpoints, and then the endpoints would scale their transmission rate based on the rate of packet loss that the host on the other end reports. Bufferbloat happens when unpredictably sized buffers in the network interfere with this automatic rate control.

Re:Buffers on the endpoints vs. in the network (2)

Lennie (16154) | more than 2 years ago | (#39501173)

Not only in the network, but also in your OS networking stack and in network card and wifi drivers.

That is a large part of what they are fixing in Linux.

It is obvious why wifi drivers might have large buffers: retransmission is much more common than on the wire.

Re:Buffers on the endpoints vs. in the network (0)

Anonymous Coward | more than 2 years ago | (#39501371)

TCP needs fixing. Packet loss these days is more likely the result of a bad wireless link.

ECN is the way forward. Buffers are fine.

Re:Hm (2)

Dog-Cow (21281) | more than 2 years ago | (#39499259)

You are the most ignorant poster on slashdot for the week. Congratulations.

(Oh, you're also willfully so, which makes you the single most stupid poster, as well. Congratulations.)

Unstable (0)

jcreus (2547928) | more than 2 years ago | (#39497321)

3.3 is odd (thus unstable?); does anyone recommend actually installing it (in my case, on Ubuntu)? Are there considerable advantages versus its drawbacks/instability? Thanks.

Re:Unstable (1)

jcreus (2547928) | more than 2 years ago | (#39497347)

My fault, the version numbering system changed since 2.6.x (Wikipedia before you post!). Anyway, I think the question is still valid.

Re:Unstable (0)

Anonymous Coward | more than 2 years ago | (#39497363)

I don't think this old numbering scheme (odd/unstable vs. even/stable) is still in use nowadays.

Yes, I'm getting off your lawn

Re:Unstable (0)

Anonymous Coward | more than 2 years ago | (#39497403)

3.3 has introduced some weirdness for me. Maybe it's not all the kernel's fault, but the update was what brought these out.

https://bugzilla.redhat.com/show_bug.cgi?id=806538
https://bugzilla.redhat.com/show_bug.cgi?id=806544
https://bugzilla.xfce.org/show_bug.cgi?id=8596
https://bugzilla.xfce.org/show_bug.cgi?id=8598

Re:Unstable (0)

Anonymous Coward | more than 2 years ago | (#39499883)

Since 2.6, that's been dropped. Development now occurs in the stable branch, and the version numbers don't serve much of a purpose other than to show that 3.2 is older than 3.3. I'd go with whatever your distro's stable branch considers stable.

Doesn't v3.3 have to first be installed on ... (3, Insightful)

Nutria (679911) | more than 2 years ago | (#39497455)

... routers and gateways to have any effect?

I state the obvious because who's already installing it on any but home routers so soon after release?

Re:Doesn't v3.3 have to first be installed on ... (1)

Anonymous Coward | more than 2 years ago | (#39497597)

Yes; however, at least newer devices going forward can have it built in. So it is not going to be fixed 'overnight'. It will take years to fix.

Re:Doesn't v3.3 have to first be installed on ... (0)

Anonymous Coward | more than 2 years ago | (#39498461)

I think the point is that this patch doesn't really have any relevance to the core routers where it is really a problem.

Re:Doesn't v3.3 have to first be installed on ... (1)

slydder (549704) | more than 2 years ago | (#39498543)

No. 3.3 adds the ability to control YOUR buffer size based on packet size. This is meant to ensure that your buffer doesn't become larger than it needs to be. It would be nice to see this upstream as well, but that will take time. And as for your in-house router, you should be running something where you control the kernel anyway. At least I do.

Re:Doesn't v3.3 have to first be installed on ... (0)

Anonymous Coward | more than 2 years ago | (#39501655)

SFQ and SFQRED + BQL make desktops -- particularly VoIP -- less jittery.

They make servers less likely to overflow the downstream buffers, too.

That appears to be the question asked by the poster.

Yes, routers have the biggest problems, but the technologies in 3.3 have to run on them too, and there seem to be benefits from this stuff on normal machines.

Haha what? (2)

HarrySquatter (1698416) | more than 2 years ago | (#39497579)

Umm, it was only released 9 days ago. Do you really think every server, router, gateway, etc. is upgraded through magic days after a new kernel version is released? Considering most devices will probably never be updated at all, don't you think it's a bit early to be asking this?

Re:Haha what? (0)

Anonymous Coward | more than 2 years ago | (#39499053)

Umm, it was only released 9 days ago. Do you really think every server, router, gateway, etc. is upgraded through magic days after a new kernel version is released?

Yes, I do. Why do you ask?

Re:Haha what? (0)

Anonymous Coward | more than 2 years ago | (#39503457)

Don't forget that 'kernel' automatically means every related OS will gain an update -- because, they fixed 'The Kernel'.

WARNING: links to Cringely article (1)

Rogerborg (306625) | more than 2 years ago | (#39497971)

Like the apocryphal monkey throwing darts at the stocks page, Cringely does get things right occasionally, but not because he actually understands or is capable of explaining them.

It can't happen on my system because of Firefox (0)

Anonymous Coward | more than 2 years ago | (#39497999)

It steals all of the memory.

Seriously. My 10.0.2 Firefox on Debian Squeeze often grows to over 1GB RSS and 1.5GB VSZ in a day or so. And then it becomes extremely sluggish. Closing windows or tabs does not help. It will run my system out of memory. Where are the built-in memory usage stats, especially for extensions?

Why isn't 2GB of physical memory sufficient for a laptop running the latest Firefox?

I run no Flash. Addons include only adblock+, noscript and ghostery.

I had this problem too (1)

roguegramma (982660) | more than 2 years ago | (#39498097)

Then I discovered it was mostly Firebug, with the network log turned on, that ate the memory with every Ajax request made by setInterval.

Re:It can't happen on my system because of Firefox (0)

Anonymous Coward | more than 2 years ago | (#39498237)

about:memory

Re:It can't happen on my system because of Firefox (1)

JTD121 (950855) | more than 2 years ago | (#39498249)

about:memory

Though the implementation they have now isn't granular enough to set loose specific sets of memory from a plug-in here, or a process there. Just a 'minimize memory' button at the bottom.

As for the OP, I'm gonna go with 'no'.

Re:It can't happen on my system because of Firefox (0)

Anonymous Coward | more than 2 years ago | (#39498807)

about:memory

Thanks a bunch! It's been a challenge coming up with ways to track memory use and I'd missed that in my searches. I've been logging and graphing VSZ and RSS over time. Perhaps I could log those as well.

Re:It can't happen on my system because of Firefox (1)

Anonymous Coward | more than 2 years ago | (#39498809)

See? Any internet related technology article can be used to troll Firefox.

You're wrong, however. The buffers being bloated aren't available to Firefox the way you think they are. We're talking about buffers on network cards not buffers in main memory where Firefox supposedly kills your kittens.

Look, if Firefox hurts you so badly that it's created a compulsive behavior to troll even unrelated articles, you should 1. stop using it, and 2. get help for your compulsive disorder. It's unhealthy for you to carry such hate around with you everywhere you go. Just move on. Get help if you have trouble moving on. There's nothing wrong with that; most people suffer from some form of psychological distress. Some deal with it in ways that don't affect their ability to function in society, while others need help.

Re:It can't happen on my system because of Firefox (1)

Lennie (16154) | more than 2 years ago | (#39501413)

Firefox 11 pretty much fixes most of the outstanding bugs; there are a few more in Firefox 12. They're now busy with the top 100 addons, and over 50% of the leaky addons have been fixed.

I've never understood this problem. (4, Insightful)

LikwidCirkel (1542097) | more than 2 years ago | (#39498041)

It seems to me that people blame cheap memory and making larger buffers possible for this problem, but no - if there is a problem, it's from bad programming.

Buffering serves a purpose where the rate of receiving data is potentially faster than the rate of sending data in unpredictable conditions. A proper event driven system should always be draining the buffer whenever there is data in it that can possibly be transmitted.

Simply increasing the size of a buffer should absolutely not increase the time that data waits in that buffer.

A large buffer serves to minimize potential dropped packets when there is a large burst of incoming data or the transmitter is slow for some reason.

If a buffer actually adds delay to the system because it's always full beyond the ideal, one of two things is done totally wrong:
a) Data is not being transmitted (draining the buffer) when it should be for some stupid reason.
b) The characteristics of the data (average rate, burstiness, etc.) were not properly analyzed, and the system with the buffer does not meet its requirements to handle such data.

In the end, it's about bad design and bad programming. It is not about "bigger buffers" slowing things down.

Re:I've never understood this problem. (1)

Anonymous Coward | more than 2 years ago | (#39498647)

It seems to me that people blame cheap memory and making larger buffers possible for this problem, but no - if there is a problem, it's from bad programming.

Buffering serves a purpose where the rate of receiving data is potentially faster than the rate of sending data in unpredictable conditions. A proper event driven system should always be draining the buffer whenever there is data in it that can possibly be transmitted.

Simply increasing the size of a buffer should absolutely not increase the time that data waits in that buffer.

A large buffer serves to minimize potential dropped packets when there is a large burst of incoming data or the transmitter is slow for some reason.

If a buffer actually adds delay to the system because it's always full beyond the ideal, one of two things is done totally wrong:

a) Data is not being transmitted (draining the buffer) when it should be for some stupid reason.

b) The characteristics of the data (average rate, burstiness, etc.) were not properly analyzed, and the system with the buffer does not meet its requirements to handle such data.

In the end, it's about bad design and bad programming. It is not about "bigger buffers" slowing things down.

The issue is that TCP/IP and similar protocols are designed assuming that when the (small) buffers are full, then the packets get lost/rejected and are resent. With large buffers, this assumption is no longer valid. With only one sender and receiver, this appears to make no difference; however, when multiple connections are ongoing with different volumes and latency requirements, the situation is more complex than your mental model would suggest.

You missed the feedback loop (TCP flow contol) (5, Informative)

Richard_J_N (631241) | more than 2 years ago | (#39498719)

Unfortunately, I think you haven't quite got this right.

The problem isn't buffering at the *ends* of the link (the two applications talking to one another), rather, it's buffering in the middle of the link.

TCP flow control works by getting (timely notification of) dropped packets when the network begins to saturate. Once the network reaches about 95% of full capacity, it's important to drop some packets so that *all* users of the link back off and slow down a bit.

The easiest way to imagine this is by considering a group of people all setting off in cars along a particular journey. Not all roads have the same capacity, and perhaps there is a narrow bridge part way along.
So the road designer thinks: that bridge is a choke point, but the flow isn't perfectly smooth. So I'll build a car-park just before the bridge: then we can receive inbound traffic as fast as it can arrive, and always run the bridge at maximum flow. (The same thing happens elsewhere: we get lots of carparks acting as stop-start FIFO buffers).

What now happens is that everybody ends up sitting in a car-park every single time they hit a buffer. It makes the end-to-end latency much much larger.

What should happen (and TCP flow-control will autodetect if it gets dropped packet notifications promptly) is that people know that the bridge is saturated, and fewer people set off on their journey every hour. The link never saturates, buffers don't fill, and nobody has to wait.

Bufferbloat is exactly like this: we try to be greedy and squeeze every last baud out of a connection: what happens is that latency goes way too high, and ultimately we waste packets on retransmits (because some packets arrive so late that they are given up for lost). So we end up much much worse off.
A side consequence of this is that the traffic jams can sometimes oscillate wildly in unpredictable manners.

If you've ever seen your mobile phone take 15 seconds to make a simple request for a search result, despite having a good signal, you've observed buffer bloat.
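
A toy version of the feedback loop described above: TCP's additive-increase/multiplicative-decrease only backs off when a drop actually arrives, so the bigger the "car park", the further the sender overshoots before it finds out. The capacities and the 40-RTT horizon below are illustrative numbers, not anything measured.

```python
"""Toy AIMD trace: the sender speeds up each RTT until the buffer in front
of the bottleneck finally overflows and a drop tells it to back off.
All numbers are illustrative."""
link_capacity = 20        # packets per RTT the "bridge" can carry
buffer_size   = 15        # extra packets the roadside "car park" absorbs

cwnd, queue = 1.0, 0.0
for rtt in range(1, 41):
    queue = max(0.0, queue + cwnd - link_capacity)
    if queue > buffer_size:            # buffer finally overflows -> drop
        cwnd = max(1.0, cwnd / 2)      # multiplicative decrease
        queue = float(buffer_size)
        event = "drop!"
    else:
        cwnd += 1.0                    # additive increase
        event = ""
    print(f"rtt {rtt:2d}: cwnd={cwnd:5.1f}  standing queue={queue:5.1f}  {event}")
```

Raising buffer_size lets cwnd overshoot further, and the standing queue (which is pure latency) sits higher for longer before the drop ever arrives -- which is the bloat.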

Re:You missed the feedback loop (TCP flow contol) (0)

Anonymous Coward | more than 2 years ago | (#39514575)

I believe you're confusing flow control with congestion control.

Flow control is to prevent overwhelming the receiver (sliding windows, etc). Congestion control is to prevent overwhelming the network (AIMD, etc).

Re:I've never understood this problem. (0)

Anonymous Coward | more than 2 years ago | (#39498959)

A typical LAN has 100 Mbps links or more; a typical WAN has much less. The one in the middle WILL buffer, bad programming or not. TCP can help a little, but it requires that connections last a bit more than mere seconds to reach a steady state -- which, in the age of the Web and email, almost never happens.

Re:I've never understood this problem. (0)

Anonymous Coward | more than 2 years ago | (#39499561)

You are right: the problem is not overly big buffers directly, it is bad algorithms. But given bad algorithms (partly because people are lazy, partly because writing algorithms that behave well in most cases is hard), larger buffers make the situation worse. And once those bad algorithms and large buffers push latencies high enough, TCP gets confused, making the problem even worse.

Re:I've never understood this problem. (-1)

Anonymous Coward | more than 2 years ago | (#39500015)

The problem is, there are so many people, like you, who clearly have absolutely no idea what the hell they are talking about. By definition, the more data you have queued, given a constant consumption rate, the longer it will take to service. You don't even have basic programming knowledge and yet you think you understand very complex networking issues? Holy shit you are an arrogant idiot.

Why are people on slashdot so damn stupid these days?

The fact you then pin this on "bad programming" while ignoring lots of good research which all says you're an idiot, while providing no information other than your idiocy as a counterpoint, wonderfully proves just how big of an idiot you really are. Bluntly, you are a moron. Sit down. Shut up. And read some of the posts from people like me who actually do know what the hell we're talking about. You make everyone dumber for contributing. Hooray for stupidity.

Re:I've never understood this problem. (1)

RVT (13770) | more than 2 years ago | (#39501313)

Oh come on! The guy says he never understood the problem and then goes on to prove it to us.
For that he gets modded 'Insightful'?

That's low even for /.

oversimplified PR noise ignores decade of research (4, Interesting)

carton (105671) | more than 2 years ago | (#39498427)

The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity. It reminds me of my dad's story about his buddy who tried to make his car go faster by cutting a hole in the firewall underneath the gas pedal so he could push it down further.

There's lots of research on this dating back to the 90's, starting with CBQ and RED. The existing research is underdeployed, and merely shortening the buffers is definitely the wrong move. We should use an adaptive algorithm like BLUE or DBL, which are descendants of RED. These don't have constants that need tuning, like queue length (FIFO/bufferbloat) or drop probability (RED), and they're meant to handle TCP and non-TCP (RTP/UDP) flows differently. Linux does support these in 'tc', but (1) we need to do it by default, not after painful amounts of undocumented configuration, and (2) to do them at >1 Gbit/s we ideally need NIC support. FWIH Cisco supports DBL in cat45k sup4 and newer but I'm not positive, and they leave it off by default.
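
For readers who haven't met RED: the classic algorithm keeps an exponentially weighted moving average of the queue length and drops (or ECN-marks) arrivals with a probability that rises between two thresholds. The sketch below is simplified (real RED also spaces drops out by counting packets since the last one), and the constants are exactly the hand-tuned knobs being argued about here; the values are illustrative only.

```python
"""Simplified RED drop decision: EWMA of queue length, linear drop
probability between MIN_TH and MAX_TH. Constants are illustrative."""
import random

MIN_TH, MAX_TH = 5, 15     # average-queue thresholds, in packets
MAX_P = 0.1                # drop probability reached at MAX_TH
W_Q = 0.02                 # EWMA weight (real deployments use smaller values)

avg = 0.0

def red_should_drop(current_queue_len):
    """Return True if the arriving packet should be dropped/marked."""
    global avg
    avg = (1 - W_Q) * avg + W_Q * current_queue_len
    if avg < MIN_TH:
        return False
    if avg >= MAX_TH:
        return True
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

# Quick demo: a queue that ramps from 0 to 25 packets over 1000 arrivals.
drops = sum(red_should_drop(i // 40) for i in range(1000))
print("packets dropped/marked out of 1000:", drops)
```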

For file sharing, HFSC is probably more appropriate. It's the descendant of CBQ, and is supported in 'tc'. But to do any queueing on cable Internet, Linux needs to be running, with 'tc', *on the cable modem*. With DSL you can somewhat fake it because you know what speed the uplink is, so you can simulate the ATM bottleneck inside the kernel and then emit prescheduled packets to the DSL modem over Ethernet. The result is that no buffer accumulates in the DSL modem, and packets get laid out onto the ATM wire with tiny gaps between them -- this is what I do, and it basically works. With cable you don't know the conditions of the wire, so this trick is impossible. Also, end users can only effectively schedule their upstream bandwidth, so ISPs need to somehow give you control of the downstream -- *configurable* control through reflected upstream TOS/DSCP bits or something, to mark your file-sharing traffic differently, since obviously we can't trust them to do it.
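
The "simulate the bottleneck inside the kernel" trick boils down to shaping upstream traffic to just under the modem's rate, so the queue builds on the host (where a smarter scheduler can reorder it) instead of inside the modem. The usual primitive is a token bucket; the sketch below is a user-space illustration of that idea with made-up rates, not the actual tc/HFSC configuration being described.

```python
"""Token-bucket pacing sketch: release packets at ~90% of a 1 Mbit/s uplink
so the modem's own buffer never fills. Rates and sizes are illustrative."""
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate, self.burst = rate_bytes_per_sec, burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def wait_for(self, pkt_bytes):
        """Block until enough tokens have accumulated to send pkt_bytes."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes
                return
            time.sleep((pkt_bytes - self.tokens) / self.rate)

bucket = TokenBucket(rate_bytes_per_sec=0.9 * 1_000_000 / 8, burst_bytes=3000)
start = time.monotonic()
for i in range(5):
    bucket.wait_for(1500)          # "send" a full-size packet
    print(f"packet {i} released at t={time.monotonic() - start:.3f}s")
```

Because the host releases data slightly slower than the modem can drain it, the dumb FIFO in the modem stays empty and latency-sensitive packets never queue behind bulk traffic there.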

Buffer bloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old and is allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact he's gotten this far shows our peer review process is broken.

Re:oversimplified PR noise ignores decade of resea (5, Interesting)

jg (16880) | more than 2 years ago | (#39498867)

You are correct that replacing one bad constant with another is a problem, though I certainly argue many of our existing constants are egregiously bad, and substituting a less bad one makes the problem less severe: that is what the cable industry is doing this year in a DOCSIS change that I hope starts to see the light of day later this year. That can take bloat in cable systems down by about an order of magnitude, from typically > 1 second to on the order of 100-200 ms; but that's not really good enough for VOIP to work as well as it should. The enemy of the good is the perfect: I'm certainly going to encourage obvious mitigations such as the DOCSIS changes while trying to encourage real long-term solutions, which involve both re-engineering of systems and algorithmic fixes. There are other places where similar "no brainer" changes can help the situation.

I'm very aware of the research that is over a decade old, and of the fact that what exists is either *not available* where it is now needed (e.g. any of our broadband gear, our OSes, etc.) or *doesn't work* in today's network environment. I was very surprised to be told that even where AQM was available, it was often/usually not enabled, for reasons that are now pretty clear: classic RED and derivatives (the most commonly available) require manual tuning, and if untuned, can hurt you. Like you, I had *thought* this problem was a *solved* problem in the 1990's; it isn't....

RED and related algorithms are a dead end: see my blog entry on the topic: http://gettys.wordpress.com/2010/12/17/red-in-a-different-light/ and in particular the "RED in a different light" paper referenced there (which was never formally published, for reasons I cover in the blog posting). So thinking we can just apply what we have today is *not correct*; when Van Jacobson tells me RED (which he originally designed with Sally Floyd) won't hack it, I tend to believe him.... We have an unsolved research problem at the core of this headache.

If you were tracking kernel changes, you'd see "interesting" recent patches to RED and other queuing mechanisms in Linux; the fact that bugs are being found in these algorithms in this day and age shows you just how little such mechanisms have actually been used: in short, what we have had in Linux has often been broken, showing little active use.

We have several problems here:
      1) basic mistakes in buffering, where semi-infinite statically sized buffers have been inserted in lots of hardware/software. BQL goes a long way toward addressing some of this in Linux (the device driver/ring buffer bufferbloat that is present in Linux and other operating systems).
      2) variable bandwidth is now commonplace, in both wireless and wired technologies. Ethernet scales from 10 Mbps to 10 or 40 Gbps.... Yet we've typically had static buffering, sized for the "worst case". So even stupid things like cutting the buffers proportionately to the bandwidth you are operating at can help a lot (similar to the DOCSIS change), though with BQL we're now in a better place than before.
      3) the need for an AQM that actually *works* and never hurts you. RED's requirement for tuning is a fatal flaw; and we need an AQM that adapts dynamically over orders of magnitude of bandwidth *variation* on timescales of tens of milliseconds, a problem not present when RED was designed or when most of the AQM research of the 1990's was done. Wireless was a gleam in people's eyes in that era.

I'm now aware of at least two different attempts at fully adaptive AQM algorithms; I've seen simulation results for one of them, and they look very promising. But simulations are ultimately only a guide (and sometimes a source of real insight): running code is the next step, along with comparison against existing AQMs in real systems. Neither of these AQMs has been published yet, though I'm hoping to see both published soon and their implementations follow immediately thereafter.

So no, existing AQM algorithms won't hack it; the size of this swamp is staggering.
                                                                                                                  - Jim

Re:wifi forward error correction (3, Interesting)

Richard_J_N (631241) | more than 2 years ago | (#39499427)

There is one other problem: TCP assumes that dropped packets mean the link is saturated, and backs off the transmit rate. But wireless isn't like that: packets are frequently lost to noise (especially near the edge of the range). TCP responds by backing off (it thinks the link is congested) when it should actually be trying harder to overcome the noise. So we get really, really poor performance (*).

In this case, I think the kernel should somehow realise that there is "10 Mbit/s of bandwidth, with a 25% probability of any given packet getting through". It should do forward error correction, pre-emptively sending every packet 4x. Of course there is a huge difference between the case of lots of users on the same wireless AP all trying to share bandwidth (everyone needs to slow down), and one user competing with lots of background noise (the computer should be more aggressive). TCP flow control seems unable to distinguish them.

(*) I've recently experienced this with wifi, where the connection was almost completely idle (I was the only one trying to use it), but I was near the edge of the AP's range. Getting onto the network via DHCP was so slow that most of the time it failed: by the time DHCP got the final ACK, NetworkManager had seen a 30-second wait and brought the interface down! But if I could get DHCP to succeed, the network was usable (albeit very slow).
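A user-space caricature of the "send every packet 4x" idea above, just to make the trade-off concrete; the destination address, port, and repeat count are made-up, and real link-layer FEC uses coding rather than blind repeats:

<ecode>
/* Send each UDP datagram REPEATS times so that at least one copy is
 * likely to survive random loss. Purely illustrative. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

#define REPEATS 4

static int send_redundant(int fd, const void *buf, size_t len,
                          const struct sockaddr_in *dst)
{
    /* With a 25% chance that any one copy arrives, the chance that all
     * four are lost is 0.75^4, about 0.32: roughly two thirds of
     * datagrams now get through instead of one quarter. */
    for (int i = 0; i < REPEATS; i++)
        if (sendto(fd, buf, len, 0,
                   (const struct sockaddr *)dst, sizeof(*dst)) < 0)
            return -1;
    return 0;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(9999) };   /* example port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);           /* example host */

    const char msg[] = "hello";
    send_redundant(fd, msg, sizeof(msg), &dst);
    close(fd);
    return 0;
}
</ecode>

Even in this toy form the cost is visible: the receiver has to discard duplicates, and three quarters of the airtime is wasted whenever the link happens to be clean, which is one reason this kind of repair belongs in the link layer (which can see the noise) rather than in TCP.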

Re:wifi forward error correction (1)

Follis (702842) | more than 2 years ago | (#39501021)

DHCP isn't sent via TCP. It uses UDP broadcast.

Re:wifi forward error correction (1)

Richard_J_N (631241) | more than 2 years ago | (#39501487)

Yes... which is why DHCP shows the problem even more severely. DHCP needs 4 consecutive packets to get through OK (the DISCOVER/OFFER/REQUEST/ACK exchange), and when the environment is noisy, that doesn't happen. But the same happens to TCP, mitigated (slightly) by TCP's faster retransmit timeout.

My point still stands:
    Symptom: packet loss.
    Common cause: link saturation.
    Remedy: back off slightly, and hope everyone else also notices.

    Symptom: packet loss (indistinguishable from the above)
    Less common cause: RF interference because the AP is near the edge of range.
    Remedy: try really hard, and flood the link with repeats to get at least some packets through.

In the wireless example, we still have a dedicated 10 Mbit link; it's just really unreliable. Being profligate with packets might get me 25% of 8 Mbit/s (about 2 Mbit/s); being conservative, TCP would back off to perhaps 90% of 0.01 Mbit/s (about 0.009 Mbit/s).

Re:wifi forward error correction (1)

Junta (36770) | more than 2 years ago | (#39503805)

DHCP needs 4 consecutive packets to get through OK,

Eh? It needs 4 packets to get through, but it doesn't require them all to be consecutive. Lose a DHCP REQUEST and the client will retransmit it without going back to DISCOVER...

Re:wifi forward error correction (2)

DamnStupidElf (649844) | more than 2 years ago | (#39501447)

TCP has SACK to cope with moderate link-layer packet loss, and beyond a certain point the loss is the link layer's fault and up to the link layer to solve via its own retransmission or forward-error-correction methods.

Re:wifi forward error correction (1)

snookums (48954) | more than 2 years ago | (#39502951)

There is one other problem: TCP assumes that dropped packets mean the link is saturated, and backs off the transmit rate. But wireless isn't like that: packets are frequently lost to noise (especially near the edge of the range). TCP responds by backing off (it thinks the link is congested) when it should actually be trying harder to overcome the noise. So we get really, really poor performance (*).

In this case, I think the kernel should somehow realise that there is "10 Mbit/s of bandwidth, with a 25% probability of any given packet getting through". It should do forward error correction, pre-emptively sending every packet 4x. Of course there is a huge difference between the case of lots of users on the same wireless AP all trying to share bandwidth (everyone needs to slow down), and one user competing with lots of background noise (the computer should be more aggressive). TCP flow control seems unable to distinguish them.

Shouldn't this be handled at the datalink level by the wireless hardware? If there are transmission errors due to noise, more bits should be dedicated to ECC codes. Reliability is maintained at the expense of (usable) bandwidth, and the higher layers of the stack just see a regular link with reduced capacity.

Re:wifi forward error correction (1)

Richard_J_N (631241) | more than 2 years ago | (#39503171)

Shouldn't this be handled at the datalink level by the wireless hardware? If there are transmission errors due to noise, more bits should be dedicated to ECC codes. Reliability is maintained at the expense of (usable) bandwidth, and the higher layers of the stack just see a regular link with reduced capacity.

Yes, it certainly should be. But it often isn't.

Incidentally, plain ECC won't help here: appending 1 KB of ECC to a 1 KB packet does nothing against a 1 ms burst of interference that obliterates the whole packet. The redundancy has to be spread (interleaved) over more airtime than the burst to do any good.
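A toy illustration of that point, with deliberately tiny, made-up sizes: interleave the bytes of several packets on the air and a burst clips a couple of bytes from each of them, an amount per-packet ECC can repair, instead of wiping one packet out entirely.

<ecode>
/* Block interleaving demo: DEPTH packets are transmitted column by
 * column, so a 16-byte burst damages at most 2 bytes of any packet. */
#include <stdio.h>
#include <string.h>

#define DEPTH 8    /* packets interleaved together      */
#define PLEN  16   /* bytes per packet (tiny, for demo) */

int main(void)
{
    char in[DEPTH][PLEN], out[DEPTH][PLEN], stream[DEPTH * PLEN];

    for (int p = 0; p < DEPTH; p++)
        memset(in[p], 'A' + p, PLEN);

    /* Interleave: byte 0 of every packet, then byte 1, and so on. */
    for (int b = 0; b < PLEN; b++)
        for (int p = 0; p < DEPTH; p++)
            stream[b * DEPTH + p] = in[p][b];

    memset(stream + 40, '?', 16);      /* a 16-byte burst on the air */

    /* De-interleave: each packet loses only bytes 5 and 6. */
    for (int b = 0; b < PLEN; b++)
        for (int p = 0; p < DEPTH; p++)
            out[p][b] = stream[b * DEPTH + p];

    for (int p = 0; p < DEPTH; p++)
        printf("%.*s\n", PLEN, out[p]);
    return 0;
}
</ecode>

The deeper the interleaving, the longer the burst it can absorb, at the cost of extra latency, which is its own bufferbloat-flavoured trade-off.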

Re:wifi forward error correction (1)

hechacker1 (1358761) | more than 2 years ago | (#39504289)

The solution for wireless could be a TCP congestion-control change, such as Westwood+, which estimates the available bandwidth from the ACK stream rather than reacting only to dropped packets.
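For what it's worth, Linux already lets the congestion-control algorithm be chosen per socket, so an application on a lossy wireless path can ask for Westwood+ itself. A minimal sketch, assuming the tcp_westwood module is loaded (and, for unprivileged users, that "westwood" appears in /proc/sys/net/ipv4/tcp_allowed_congestion_control):

<ecode>
/* Select Westwood+ on one TCP socket via the TCP_CONGESTION option,
 * then read the setting back to confirm it took effect. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    const char *cc = "westwood";   /* Linux's name for its Westwood+ */

    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
        perror("TCP_CONGESTION");  /* module missing or not allowed  */

    char buf[16] = { 0 };
    socklen_t len = sizeof(buf);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
        printf("congestion control: %s\n", buf);

    close(fd);
    return 0;
}
</ecode>

The system-wide default can be switched with the net.ipv4.tcp_congestion_control sysctl instead, which is the more usual deployment.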

But even better is a simple proxy setup. The proxy at the AP handles the request on behalf of the client, so retransmissions only have to cross the short local wireless hop rather than the whole path.

It's mostly a cost issue, since only recent APs are powerful enough to run a local caching proxy.

Re:oversimplified PR noise ignores decade of resea (0)

Anonymous Coward | more than 2 years ago | (#39498877)

RTFA. Fourth link.

They (the modern supposed bufferbloat conspiracy) talk about exactly the things you're raving about.

Re:oversimplified PR noise ignores decade of resea (0)

Anonymous Coward | more than 2 years ago | (#39498933)

I don't have mod points, but wanted to let you know that I found your post very interesting. Thanks.

Re:oversimplified PR noise ignores decade of resea (3, Informative)

nweaver (113078) | more than 2 years ago | (#39499911)

Buffer bloat infuriates me because it's blitheringly ignorant of implemented research more than a decade old and is allowing people to feel like they're doing something about the problem when really they're just swapping one bad constant for another. It's the wrong prescription. The fact he's gotten this far shows our peer review process is broken.

Actually, this focus is very much driven by a technical approach. We know it is a problem in the real world thanks to widespread, empirical measurements. Basically, for most users, the Internet can't "walk and chew gum": interactive tasks or bulk data transfers work just fine on their own, but combining bulk data transfer with interactive activity results in a needless world of hurt.

And the proper solution is to apply the solutions known in the research community for a decade-plus; the problem is getting AQM deployed to the millions of existing bottlenecks, or else using 'ugly hack' approaches like RAQM, where you divorce the point of control from the buffer itself.

Heck, even a simple change to FIFO design, "drop incoming packets when the oldest packet in the queue is >X ms old" [1], that is, sizing buffers in delay rather than capacity, is effectively good enough for most purposes (sketched in code below): I'd rather have a good AQM algorithm in my cable modem, but without that, a buffer sized in delay gets us 90% of the way there.

[1] X should be "measured RTT to the remote server", but in a pinch a 100-200ms number will do in most cases.
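A sketch of that rule as an enqueue check: timestamp each packet on arrival and refuse new ones once the head-of-line packet has aged past the delay target. This is an illustration of the policy only, not code from any modem; the 100 ms constant is the fallback figure from the footnote.

<ecode>
/* "Size the buffer in delay, not bytes": tail-drop whenever the oldest
 * queued packet has waited longer than MAX_DELAY_MS.
 * (Dequeue/transmit side omitted for brevity.) */
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define MAX_DELAY_MS 100     /* X from the post, fallback value    */
#define QUEUE_SLOTS  1024    /* hard cap as a backstop, rarely hit */

struct pkt { void *data; size_t len; struct timespec enq; };

static struct pkt queue[QUEUE_SLOTS];
static size_t head, tail, count;

static long ms_since(const struct timespec *then, const struct timespec *now)
{
    return (now->tv_sec - then->tv_sec) * 1000
         + (now->tv_nsec - then->tv_nsec) / 1000000;
}

/* Returns false (drop the arrival) when the queue is already
 * MAX_DELAY_MS deep in time, or completely full. */
bool enqueue(void *data, size_t len)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    if (count == QUEUE_SLOTS)
        return false;
    if (count > 0 && ms_since(&queue[head].enq, &now) > MAX_DELAY_MS)
        return false;

    queue[tail] = (struct pkt){ data, len, now };
    tail = (tail + 1) % QUEUE_SLOTS;
    count++;
    return true;
}

int main(void)
{
    char payload[64];
    return enqueue(payload, sizeof(payload)) ? 0 : 1;
}
</ecode>

Unlike a byte or packet limit, this behaves the same at 1 Mbps as at 100 Mbps: the standing queue is capped in time, which is what latency-sensitive traffic actually cares about.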

Re:oversimplified PR noise ignores decade of resea (1)

Bengie (1121981) | more than 2 years ago | (#39513255)

"The bufferbloat "movement" infuriates me because it's light on science and heavy on publicity."
The articles I've read on it have been VERY heavy on science.

"merely shortening the buffers is definitely the wrong move"
Who is saying this? The issue I have read about is the HUGE difference in performance between different links. If you have a 10 Gbit card behind a 1 Mbit link, the buffer sizes that make sense are grossly different. To fix TCP, we can't look only at packet loss; we need to look at latency.

The problem is "how" you discover the base latency of a path without the routers telling you, and how that algorithm interacts with many streams. There are many cases TCP needs to cover. There are decent algorithms that are better than current TCP, but no one wants to mass-deploy those changes before they become standard.
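The usual answer, sketched below with made-up numbers, is to treat the minimum RTT seen over a recent window as the path's base latency and read anything above it as queueing delay; the window length and the sample trace are illustrative assumptions:

<ecode>
/* Track a windowed minimum RTT as the "base latency" estimate. */
#include <stdio.h>

#define WINDOW 64                    /* keep the last 64 RTT samples */

static double samples[WINDOW];
static int n;

void rtt_sample(double rtt_ms)
{
    samples[n++ % WINDOW] = rtt_ms;
}

double base_rtt(void)                /* minimum over the window */
{
    int count = n < WINDOW ? n : WINDOW;
    double min = samples[0];
    for (int i = 1; i < count; i++)
        if (samples[i] < min)
            min = samples[i];
    return min;
}

int main(void)
{
    double trace[] = { 22.1, 25.4, 40.8, 21.9, 90.3, 23.0 };
    for (int i = 0; i < 6; i++)
        rtt_sample(trace[i]);

    /* 21.9 ms is taken as the path's propagation delay; the 90.3 ms
     * sample is read as ~68 ms of queueing, not a longer path. */
    printf("base RTT %.1f ms, current queueing %.1f ms\n",
           base_rtt(), trace[5] - base_rtt());
    return 0;
}
</ecode>

How well that estimate holds up when many such streams share one queue, and when routes change underneath you, is exactly the hard part being pointed at here.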

"With cable you don't know the conditions of the wire so this trick is impossible."
You haven't worked with DOCSIS 3.0 + channel bonding + CDMA. I get less latency and less jitter on my DOCSIS 3 connection to my ISP than on my mom's fiber connection. How your ISP implements your connection makes a HUGE difference.

Your main arguments are great; they're actually the SAME arguments these "bufferbloat" evangelists are preaching. So you too are part of the "bufferbloat movement" you talked down in your first sentence.
