
Got (Buffer) Bloat?

Roblimo posted more than 3 years ago | from the good-buffering-is-better-than-licking-a-cake-batter-bowl dept.

Networking

mtaht writes, "After a very intense month of development, the Bufferbloat project has announced the debloat-testing git kernel tree, featuring (at the suggestion of Van Jacobson) a wireless network latency-smashing algorithm called eBDP, the SFB and CHOKe packet schedulers, and a slew of driver-level fixes that reduce network latency across the Linux kernel by over two orders of magnitude. Got Bloat?"


121 comments

first posty??? (-1)

Anonymous Coward | more than 3 years ago | (#35322524)

have a frosty piss all y'all!!

what it is (1, Insightful)

leaen (987954) | more than 3 years ago | (#35322566)

It would be good if the summary explained what a program nobody has heard of before actually does.

Re:what it is (5, Informative)

firstnevyn (97192) | more than 3 years ago | (#35322612)

It's about the downside of memory becoming cheap: it causes latency problems for congestion control mechanisms that rely on the endpoints being able to inform the sender when it's sending too fast.

Jim Gettys' research blog entry [wordpress.com] explains the problem in detail.

Re:what it is (3, Informative)

Anonymous Coward | more than 3 years ago | (#35323006)

oblig. car analogy, by Eric Raymond no less:
    https://lists.bufferbloat.net/pipermail/bloat/2011-February/000050.html [bufferbloat.net]

== Packets on the Highway ==

To fix bufferbloat, you first have to understand it. Start by
imagining cars traveling down an imaginary road. They're trying to get
from one end to the other as fast as possible, so they travel nearly
bumper to bumper at the road's highest safe speed.
[snipped]

Re:what it is (0)

Anonymous Coward | more than 3 years ago | (#35322626)

...but what is it?

What bufferbloat is (5, Informative)

Sits (117492) | more than 3 years ago | (#35322636)

My understanding may not be correct but:

Bufferbloat (I first came across the term in this blog post by Jim Gettys [wordpress.com]) is the nickname given to the high latency that can occur on modern network connections due to large buffers in the network. An example is the way a network game on one computer starts to stutter when another computer starts using a protocol like bittorrent to transfer files over the same network connection.

The large buffers seem to have arisen from a desire to maximise download throughput regardless of network conditions. This can give rise to the situation where small urgent packets are delayed because big packets (which perhaps should not have been sent) are queued up in front of them. The system sending the big packets is not told to stop sending them so quickly, because its packets are still being delivered...
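
A rough sense of the scale involved (a hypothetical worked example, not figures from the post): the extra delay an urgent packet sees is roughly the number of queued bytes ahead of it divided by the link rate.

    # Hypothetical numbers: a small urgent packet arriving behind a full
    # 256 KB buffer that drains onto a 1 Mbit/s uplink.
    buffered_bytes = 256 * 1024      # assumed buffer occupancy
    uplink_bps = 1_000_000           # assumed uplink rate
    delay_s = buffered_bytes * 8 / uplink_bps
    print(f"queueing delay ahead of the urgent packet: {delay_s:.1f} s")  # ~2.1 s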

The linked article suggests that the Linux kernel source has been modified so that people who know how to compile their own kernels can test ideas for reducing the bufferbloat effect on their hardware and report back their results.

Does this help explain things a bit?

Re:What bufferbloat is (4, Interesting)

MichaelSmith (789609) | more than 3 years ago | (#35322668)

high latency that can occur in modern network connections due to large buffers in the network

Nobody ever explained this to me but I was using ping to measure latency on a network where I was actually most interested in ssh. Ping times went something like 10ms, 50ms, 90ms, 130ms... up to about 500ms, then started again at 10ms, 50ms and so on. Maybe some of my pings shared a buffer with a large, periodic data transfer and when that transfer filled a buffer somewhere my latency dropped.

I am pretty sure the people actually operating the WAN in question had no idea what was going on either.
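
That sawtooth is exactly the shape you'd expect if a bulk sender were steadily filling a finite buffer that the pings also had to pass through: when the buffer overflows, drops force the sender to back off and the cycle restarts. A toy simulation (all figures invented for illustration) reproduces the pattern:

    # Toy model of the sawtooth ping pattern described above. A bulk flow
    # offers more traffic than the bottleneck link can carry, so the queue
    # (and thus queueing delay) ramps up until the buffer overflows, the
    # sender backs off, and the cycle starts over. All numbers are invented.
    link_bps = 2_000_000             # bottleneck link rate
    offered_bps = 2_800_000          # bulk sender's rate while ramped up
    buffer_bytes = 125_000           # bottleneck buffer size
    base_rtt_ms = 10

    queue = 0.0
    for ms in range(0, 3000, 100):   # sample a "ping" every 100 ms
        queue += (offered_bps - link_bps) / 8 * 0.1   # net bytes queued per 100 ms
        if queue > buffer_bytes:     # overflow: drops make the sender back off
            queue = 0.0
        ping_ms = base_rtt_ms + queue * 8 / link_bps * 1000
        print(f"t={ms:4d} ms  ping ~{ping_ms:4.0f} ms")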

Re:What bufferbloat is (0)

Anonymous Coward | more than 3 years ago | (#35324712)

It's called Traffic Shaping and Scheduling. In Linux, look up the `tc` command.

Regarding buffers, I certainly did notice that buffers are increasing in Linux. Recently I was able to `scp` a 1.5MB file "instantly", yet my upload speed is only 40kB/s. Yes, the file was stuck in a buffer for half a minute before the process could complete. In the past, the buffer would fill much more quickly.
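
The "half a minute" checks out against the numbers given (taking the comment's figures at face value):

    # Time for a 1.5 MB file parked in a local buffer to drain over a
    # 40 kB/s uplink, using the figures from the comment above.
    file_bytes = 1.5 * 1024 * 1024
    upload_bytes_per_s = 40 * 1024
    print(f"drain time: {file_bytes / upload_bytes_per_s:.0f} s")  # ~38 s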

Re:What bufferbloat is (5, Informative)

Sits (117492) | more than 3 years ago | (#35322688)

I should have also linked to a definition of bufferbloat by Jim Gettys [wordpress.com]. For the curious, here's a page of links to bufferbloat resources [bufferbloat.net] and a 5 minute animation that shows the impact of large buffers on network communication (.avi) [bufferbloat.net].

Re:What bufferbloat is (1)

mehemiah (971799) | more than 3 years ago | (#35323164)

thank you

Re:What bufferbloat is (1)

mtaht (603670) | more than 3 years ago | (#35323526)

oh, great... point to a HUGE animation on the same website that is being slashdotted... thx...

Better link to the animation (1)

Sits (117492) | more than 3 years ago | (#35324642)

Sorry Dave. A Coral cache version of the animation [nyud.net].

Re:Better link to the animation (0)

Anonymous Coward | more than 3 years ago | (#35326184)

thx for finding a safer place for that bit!

I note we (with a thoroughly debloated set of servers) survived the day without a hitch.

Proof that the concept works... although the slashdotting was kind of in slow motion... I look forward to seeing how many hits we withstood.

Re:What bufferbloat is (4, Informative)

Lennie (16154) | more than 3 years ago | (#35322704)

The code changes to the Linux kernel also reduce the size and ill effects of buffers inside the kernel and drivers.

Re:What bufferbloat is (1)

Rising Ape (1620461) | more than 3 years ago | (#35322966)

My home internet connections have previously suffered from enormous buffers in the DSLAM - setting off a big download could cause ping times to increase to about 2 seconds, rendering other interactive use of the connection impossible.

Still, either this has been fixed now or more modern versions of TCP are more sophisticated, because it doesn't seem to happen any more - at least not to the same degree.

Re:What bufferbloat is (0)

Anonymous Coward | more than 3 years ago | (#35323424)

This is very interesting. When I first got my ADSL, the ISP would 'cap' the download rate on the line at a rate at, or very slightly below, the sync speed of the modem.

So, the modem would sync at 8Mb/s, but the actual throughput (even allowing for the overheads I knew about) would be slightly below that rate. However, the cap would occasionally go out of sync with the line rate. E.g. if the modem retrained at 7 Mb/s, it would immediately force the cap down to 7 Mb/s but if the modem subsequently retrained at 8 Mb/s, there would be a delay of a few hours before the cap was reset to 8 Mb/s.

No one, not even friends who were engineers at ISPs, could explain the reason for it, other than that it was for 'performance' reasons. So I just regarded it as some sort of mystery setting. Now it all becomes clear - this was a workaround for the problem of the huge buffers in the DSLAM. If the DSLAM didn't throttle before it buffered, you'd end up with huge amounts of data building up in the buffer whenever the data rate was higher than the ADSL sync rate, with this problem of uncontrolled latency. And the reason for the lag time before raising the cap was to ensure that the line was sufficiently stable at the new rate to avoid major packet loss.

Re:What bufferbloat is (3, Informative)

Predius (560344) | more than 3 years ago | (#35323660)

I think what you were seeing was more due to ATM overhead than the DSLAM trying to be cute with throttling. Because ADSL encapsulates everything in ATM, even small IP/Ethernet frames get broken up into lots of ATM cells, which can add upwards of 20% overhead. So an ADSL line trained at 8Mb/s will never provide 8Mb/s of usable throughput to the end user. Some ISPs actually advertise targeted throughput instead of train rate and set the train rate a certain percentage above the target throughput to compensate. Others just advertise train rates and have disclaimers in the fine print.

I've had my hands inside most Gen 1, 1.5 and 2nd Gen DSLAMs and never seen any with automatic throttling like you described.

(Gen 1 being units that just function at the ATM layer requiring an external system to bridge to Ethernet or IP. Gen 1.5's being upgraded Gen 1s with crude bridging, and Gen 2 being units that were designed to terminate connections directly from the ground up.)

Re:What bufferbloat is (0)

Anonymous Coward | more than 3 years ago | (#35324024)

I can assure you that the DSLAM did throttle - or at least it did on my telco's network.

When I originally got the line, I had an 8 Mb/s sync rate but I was capped at 2Mb/s download. Ruler flat cap. My ISP couldn't help, even a letter to the head of customer services, complaining how I'd called, and e-mailed a total of 57 times, asking for the cap to be removed, failed to remove the cap. They wouldn't even agree to terminate my 12 month contract early and take my business elsewhere.

It turns out, when I spoke to an engineer from another ISP (who resold my ISP's services), that the DSLAMs automatically throttle to prevent 'performance problems' and that the throttle is dynamically matched to your sync rate. However, at that time there was a known bug in my ISP's DSLAM firmware that caused the cap to get 'stuck' at 2 Mb/s. He said this was such a frequent support call at his ISP that first-line support had an application allowing the support operative to manually adjust the DSLAM cap (he called it the 'RAS profile') on that line.

Leaving aside the rather poor technical support (I was having to talk to tech support at another ISP about my own ISP's problems), I did eventually find a way to fix it.

I plugged my DSL modem into the 'filtered' port of a DSL filter. It synced at 128 kbps (lol) and I repeatedly plugged/unplugged the modem to simulate severe line noise for a couple of hours. This was sufficient to trigger the DSLAM firmware to adjust the cap.

I was capped at 128 kbps for nearly 10 days (due to the severity of the conditions I simulated), even though the modem continued to sync at 8 Mbps once I connected it correctly. After 10 days, the cap returned to 8 Mbps and I got the service I paid for. I wrote to the head of customer service again, explaining how I fixed the problem. I never got a reply.

However, a couple of years later, when I was helping a family member who used that same ISP, I noticed that their 'speedtest' page now reported the DSLAM cap rate as well as the DSLAM sync rate.

Re:What bufferbloat is (1)

OeLeWaPpErKe (412765) | more than 3 years ago | (#35325730)

To be fair, it was probably the PPPoE terminator that throttled your connection. It's the only device in the network positioned to throttle efficiently on a per-customer basis, for both the upstream and the downstream direction, and it has the information required to do this "dynamic" throttling you talk about.

The issue technical support faces, which leads to bufferbloat, is equally simple: "but your ad said 10 Mbit download speed and I'm getting 9.8 according to speedtest.net". The number of customers who scream bloody murder and "demand their rights" is ridiculous.

You should check one of the discussions on so-called "net neutrality" to see how flexible slashdotters are on the subject of ISPs delivering slightly less bandwidth than advertised (which will always be the case without huge buffers). Or, God forbid, an ISP that demands Netflix pay for a customer line instead of a peering line when it is in fact a customer, and not a peer. This results in a *slightly* longer path to Netflix, and thus apparently quite a few heathens had to be rounded up and thoroughly burned, despite the fact that this obviously did not involve giving preferential treatment to anyone. Quite the reverse, in fact.

I don't see this getting fixed any time soon.

Re:what it is (4, Informative)

shish (588640) | more than 3 years ago | (#35322692)

Most traffic throttling algorithms are based on the idea that the router will say "hey, slow down" if a client overloads it -- but when the router has lots of RAM, there is a tendency for it to just keep accepting and accepting, with the client happily pushing data at full speed, while the router is queuing up the data and only moving it upstream very slowly. Because the queues end up being huge, traffic going through that router gets lagged.

Re:what it is (4, Insightful)

TheLink (130905) | more than 3 years ago | (#35322796)

IMO it's fine for buffers to be very big.

What routers should do is keep track of how long packets have been in the router (in milliseconds or even microseconds) and use that with QoS stuff (and maybe some heuristics) to figure out which packets to send first, or to drop.

For example, "bulk/throughput" packets might be kept around for hundreds of milliseconds, but while latency sensitive packets get priority they are dropped if they cannot be sent within tens of milliseconds (then the sender will faster realize that it should slow down).

Re:what it is (3, Insightful)

Brian Feldman (350) | more than 3 years ago | (#35323132)

That's a much more complex solution than "don't buffer so much damn stuff for no good reason."

Re:what it is (1)

maswan (106561) | more than 3 years ago | (#35323704)

But you do need big buffers to be able to do fast single TCP transfers! You need at least rtt * bandwidth of buffer in any place that has a faster uplink than downlink, like distribution switches for instance. And that's several megabytes per port in today's gigabit Ethernet world. Otherwise you're going to get bad to horrible throughput for high-latency transfers.
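
The arithmetic behind "several megabytes" (assumed figures, since the comment doesn't fix an RTT):

    # Bandwidth-delay product: the buffer a single TCP flow needs at the
    # bottleneck to keep the pipe full. Assuming a gigabit port and a
    # 30 ms round-trip time.
    rtt_s = 0.030
    bandwidth_bps = 1_000_000_000
    bdp_bytes = bandwidth_bps * rtt_s / 8
    print(f"BDP: {bdp_bytes / 1e6:.2f} MB")   # ~3.75 MB per flow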

Now, big buffers also need decent buffer management (just trivial RED is orders of magnitude better than "let's just fill the buffer up and then drop everything"), but going to small buffers isn't helping.

This is one of the hardest things to get in a distribution switch: decently sized buffers. Most switches have horribly small buffers, or no documentation at all on sizes. Usually you have to go up to the chassis-based ones to get something not horribly small. And if you want intelligent queue management, so you can have both throughput and low interactive latency, well, I've heard Juniper makes one of those. Unfortunately at quite a bit higher price point than the cheap ProCurve/Netgear stuff.

Re:what it is (1)

mtaht (603670) | more than 3 years ago | (#35324450)

You are conflating, to some extent, two things that a lot of people get mixed up.

1) On the TCP/IP sending and receiving side of the hosts, you already have very big, dynamic buffers in the stack for managing BDP. In this case, without very smart queue management, the TX_QUEUE and the DMA TX ring are completely "extra", and mess up the BDP calculation. There are no "extra buffers" in the TCP equations for the host side.

2) On switches and routers, large receive buffers are OK, for BURSTS, with queue management, flow control, and other AQM features. TX, not so much, and in the general case small buffers are simply better, as you need to signal end-to-end that there is indeed congestion before catastrophe happens. There are other ways to lower the impact of flooding an individual host from a big fat server, for example multiple forms of weighted fair queueing.

Re:what it is (1)

amorsen (7485) | more than 3 years ago | (#35326088)

The problem with switches is that most switches have not merely small buffers, which would be ok, but microscopic ones. E.g. Cisco 3560G loses traffic on a gigabit port when faced with 50Mbps of bursty traffic in total coming from two ports. 10ms of buffer at 1Gbps is ~1MB, and most switches have nothing near that per port.

Who pointed it out? (1)

jimrthy (893116) | more than 3 years ago | (#35325044)

Have you done the research to see just who you're disagreeing with about this?

And why they engineered TCP the way they did?

I won't pretend that I've walked through the experiments to try to verify their conclusions. I'm not even sure I know enough to interpret them. But... the people shouting the warnings aren't your average Chicken Littles.

Re:Who pointed it out? (1)

BitZtream (692029) | more than 3 years ago | (#35325950)

Have you considered they still could be wrong?

Personally, I've never seen a buffer in software designed the way they described. I've never heard of hardware acting that way, but as you said, they certainly know more than I.

I stopped reading when they said 'it waits for the buffer to fill up until sending', which is true on a per-packet level for a lot of things at an endpoint. In transit, everything I've ever dealt with will forward packets without waiting UNTIL the output link becomes too congested to do so; THEN buffering starts happening. So the issue doesn't come into play until you are near saturation. When you hit that point, you're going to want buffering, or the latency will be way worse when you start having to wait for retransmission.

Re:Who pointed it out? (1)

mtaht (603670) | more than 3 years ago | (#35326262)

I think you are referring to an early draft of an article by Eric Raymond that was wildly incorrect on this point; it was thoroughly discussed on the bloat mailing list. See the incredibly long thread starting at: https://lists.bufferbloat.net/pipermail/bloat/2011-February/000050.html [bufferbloat.net] There is a newer, fully corrected piece coming out soon. In the meantime, consider the words of Van Jacobson, Vint Cerf, and others. Carefully. https://lists.bufferbloat.net/pipermail/bloat/2011-February/000108.html [bufferbloat.net] http://www.bufferbloat.net/projects/bloat/wiki/Quotes [bufferbloat.net]

Re:Who pointed it out? (1)

jimrthy (893116) | more than 3 years ago | (#35327428)

Of course they could be wrong!

My only point here was: are you looking at their credentials?

As I understand it, you just re-stated the problem - which is why they engineered TCP to work the way it does in the first place.

Re:what it is (1)

shentino (1139071) | more than 3 years ago | (#35323990)

Your alternative isn't as simple as you'd like when you have many self-interested clients playing a zero-sum competition over the router's bandwidth.

Re:what it is (0)

Anonymous Coward | more than 3 years ago | (#35323330)

I think you grossly underestimate the impact and increased hardware costs on routers as a result of your proposition.

Re:what it is (0)

Anonymous Coward | more than 3 years ago | (#35324208)

I think you grossly overestimate your reading comprehension skills.

Re:what it is (1)

TheLink (130905) | more than 3 years ago | (#35324556)

1) I didn't post any estimate on that.
2) Where you see a problem, others may see a genuine business opportunity (I claim my proposal will give a better user experience than smaller buffers and/or stuff like RED).

Re:what it is (1)

Anonymous Coward | more than 3 years ago | (#35323360)

http://en.wikipedia.org/wiki/Active_queue_management [wikipedia.org] Implemented in most of the data-driven parts of e.g. 3G and 4G networks. Another aspect: http://www.akamai.com/ericsson [akamai.com]

Re:what it is (1)

jimrthy (893116) | more than 3 years ago | (#35325066)

AQM is one of the first steps in fixing the problem. It's still just a start. This is a big hairy monster with sharp, pointy teeth.

Re:what it is (0)

Anonymous Coward | more than 3 years ago | (#35323736)

So you want to trade latency for CPU time?

Re:what it is (1)

mtaht (603670) | more than 3 years ago | (#35326408)

Which would you rather have? CPU time or personal time?

Keeping big buffers but managing them better (2)

Geoff-with-a-G (762688) | more than 3 years ago | (#35323840)

That is indeed part of the solution Jim Gettys suggests - Active Queue Management, e.g. Random Early Detection.
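
For reference, the core of classic RED fits in a few lines. This is a minimal sketch with invented thresholds; real implementations work on an exponentially weighted moving average of the queue length and also scale the drop probability by the time since the last drop:

    import random

    # Classic RED drop decision: below MIN_TH accept everything; above
    # MAX_TH drop everything; in between, drop with a probability that
    # rises linearly, so senders are told to back off *before* the
    # buffer is actually full. Thresholds here are invented.
    MIN_TH, MAX_TH, MAX_P = 50, 150, 0.1   # packets, packets, probability

    def red_should_drop(avg_queue_len: float) -> bool:
        if avg_queue_len < MIN_TH:
            return False
        if avg_queue_len >= MAX_TH:
            return True
        frac = (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < frac * MAX_P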

The first problem is that a ton of transit systems on the Internet (like indeed a ton of systems everywhere) are effectively running the default behaviors in this respect, with no special tuning. That means FIFO with whatever queue size is available.

The second is that even if all the ISP operators decided to fix this, "QoS stuff" has the potential to run afoul of Network Neutrality. The current thinking is that they shouldn't be discriminating between "bulk/throughput" packets and others. Some would suggest discrimination between traffic types is okay so long as you don't discriminate between traffic sources (i.e. prioritize all VoIP, but don't let Comcast give Comcast VoIP preference over Vonage VoIP), but implementation would be tricky - it's all too easy to implement a "vendor neutral" policy that coincidentally doesn't seem to identify Vonage's traffic quite right.

The simplest and most neutral solution to all this is simply to decrease the buffer size in those big default FIFO queues. Even the bulk/throughput packets won't really suffer from that - TCP is specifically designed to have packets dropped in a timely manner, rather than held in a queue for a long time. One of the problem behaviors Jim identifies: suppose your real RTT to, say, a server in California is 40 msec, but there's 4 seconds' worth of delay in the buffers. TCP will send a window full of data (let's say 64K) and then wait for a reply for 40, 80, maybe 120 msec. Not getting one, it sends the whole window again. And again. And again. Finally an ACK squeezes through, and the process begins again. Instead of shrinking its transmission size, it does the opposite: it sends multiple copies of every packet, making the congestion worse.

Re:Keeping big buffers but managing them better (1)

TheLink (130905) | more than 3 years ago | (#35324532)

RED is random.

I'm proposing they use an AQM algorithm that isn't that stupid/random but rather based on the QoS AND _age_of_packet_. The latter I believe is important.

One can determine the QoS by fields in the packet header and/or guessing.

Guessing isn't necessarily that difficult or error prone - latency sensitive stuff uses mostly small packets (because bigger packets = higher latency). And high throughput stuff uses mostly big "max size" packets.

With my proposal if say a 1Mbps ADSL user gets a quick burst of multiple HTTP downloads (a single page often involves multiple concurrent HTTP downloads) the router can queue them up in its buffers, but this doesn't have to interfere with the user's latency sensitive game connection, nor does it have to significantly lower HTTP throughput.

Whereas if you have small buffers, and the bursts overflow them, you get packet drops, which reduce throughput and can still interfere with latency-sensitive packets (because they get dropped if the buffers happen to be full).

Many games (e.g. WoW) use TCP (they require lossless comms), and a missing packet means an effective delay on the order of RTT + timeouts, since they have to detect that the packet got lost and then resend it. If your RTT/"ping" is high, a missing packet hurts you a lot.

For example if you have a 1Mbps connection and your game ping is 200 milliseconds (server is far away), if your 256 byte latency sensitive packet is just delayed for 12 milliseconds (the time it takes to send a 1500 byte bulk http packet down a 1Mbps link - because it was in the process of being sent, it has to be sent before the low latency packet) it doesn't matter that much. But if the packet is dropped just because the buffers are full you're going to have to wait a few hundred milliseconds to get its replacement.
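
The two delays being compared, using the figures from the paragraph above:

    # Delay from waiting behind one bulk packet vs. recovering a drop.
    link_bps = 1_000_000                 # 1 Mbps ADSL link
    bulk_packet_bits = 1500 * 8          # one max-size packet in flight
    wait_ms = bulk_packet_bits / link_bps * 1000
    print(f"wait behind one bulk packet: {wait_ms:.0f} ms")   # 12 ms
    print("recovering a dropped packet: >= 200 ms (the RTT) plus timeouts")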

Most devices need to put the packet in a queue before they can make fancy decisions about it. If the queue is full, they drop the packet. With my proposal the queues can be big, so latency-sensitive packets don't have to be dropped just because of bursts.

Re:Keeping big buffers but managing them better (1)

mtaht (603670) | more than 3 years ago | (#35326430)

Please feel free to write some code. Writing a new qdisc for Linux and BSD is not very hard.

Writing a good qdisc, insanely so.

That said, I tend to feel that time-stamping more packets and doing more guessing may make sense, as do the concepts in TCP Vegas.

But: Before starting, read these:

http://pollere.net/Pdfdocs/bcit_6.2001.pdf [pollere.net] Kathleen Nichols - who proved to Van Jacobson that RED was wrong -

And Van Jacobson: http://pollere.net/Pdfdocs/QrantJul06.pdf [pollere.net]

Re:what it is (1)

A beautiful mind (821714) | more than 3 years ago | (#35324212)

What you propose is just dancing around the problem. The slow hop has finite throughput. You either tell the sender to send only as much as the pipe can transfer, or you force the limit on the sender by queueing things which in turn increases latency, which in turn decreases the transfer's bandwidth.

Re:what it is (0)

Anonymous Coward | more than 3 years ago | (#35324928)

If you cannot drain the queues faster than the clients fill them, you are creating lag. So instead of seeing what is really going on, you just see 'slowness' once in a while.

So if, say, I have a router that holds 512 MB of buffer, can input 2 MB a second, and can output 1 MB per second with no QoS (which most things have), you can introduce up to 512 seconds of lag for a packet. Instead of the router saying 'hey, slow down' to everyone, it will only say it to some. This is not totally out of the realm of possibility.

Now, as a consumer this is a good thing, as I can get HUGE downloads very quickly. But for a producer of packets this is a bad thing, as it means my packets will not show up for a while.

The buffer is hiding the actual condition of the network from the producers. The TCP throttle algorithms do not work if you do that, as they do not know they should slow down.

Another possibility would be to send a throttle signal when stuff ends up in the queue and it is some % full. This way you can keep the queues and use them effectively, yet everyone is not swamping you.

Re:what it is (1)

Idbar (1034346) | more than 3 years ago | (#35323880)

Yes, perhaps you have already read Jain's "Myths About Congestion Management in High-Speed Networks", a paper from the early 90s arguing that increased transfer speeds and cheap memory would not ease the need for better congestion control mechanisms. The problem here is which one is best, how to pick it, and, mainly, that routers also need to play a part in the solution.

In particular, with carriers throwing bandwidth at the core, this should be an interesting project for DD-WRT, since gateway routers are the ones hit hardest of all. As access capacities increase, the next hop will be impacted too, and so on.

Those using pings to see the latency don't seem to take into account that when a long TCP flow is hit by a packet loss it throttles down, so latency AND losses also reduce the capacity available to TCP clients.

Now, many Active Queue Management (AQM) mechanisms, such as Random Early Detection (RED), have been proposed, and most of them work well with TCP. But Linux recently moved to CUBIC; is there enough information about the impact of those mechanisms on CUBIC?

Re:what it is (3, Informative)

mtaht (603670) | more than 3 years ago | (#35326380)

re:"Interesting problem for dd-wrt"

Agreed.
We are throwing effort at both the mainline kernel and OpenWrt.
OpenWrt is foundational for DD-WRT and several other (commercial) distributions of Linux on the router. I have a large set of debloated routers already; I'm just awaiting further work on the eBDP algorithm to make them better....

http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_Bloated_LAGN_vs_debloated_WNDR5700 [bufferbloat.net]

re: "using pings"
httping is a much saner approach than ping in many cases. Get it from:

http://www.vanheusden.com/httping/ [vanheusden.com]

re: RED & AQM

SFB and CHOKe are in the debloat-testing kernel, as is eBDP.
RED 93 isn't going to work. nRED may. Experimentation and scripts are highly desired. See the bloat and bloat-devel mailing lists for discussions.

Also:

http://www.bufferbloat.net/projects/bloat/wiki/Dogfood_Principle [bufferbloat.net]

Also:

I've seen some VERY interesting behavior with tcp vegas over bloated connections.

http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_TCP_cubic_vs_TCP_vegas [bufferbloat.net]

Re:what it is (1)

shentino (1139071) | more than 3 years ago | (#35323972)

1. Does said router properly respect QoS when deciding what data gets "rushed"?

2. Does said client have to pay a premium when sending out packets with elevated QoS?

Re:what it is (1)

ischorr (657205) | more than 3 years ago | (#35325862)

It's a myth that routers have the ability to tell a client to slow down, at least in the majority of environments (particularly ones with Ethernet segments, but other network types too).

Ethernet Flow Control has very limited utility here. You'll see it kick in in a rare few congestion cases - like if a switch backplane becomes overloaded - but it is used in a very limited number of situations and definitely will not be used end-to-end on a network for a router or host to tell another host (client) to slow down.

Instead, when buffers fill, packets will be dropped. Yes, congestion control and FLOW CONTROL are handled using *packet drops* as the signalling method. It's assumed that TCP will use one of a number of drop detection methods and congestion control algorithms to recover, and that does work better than one might think, but it's still ridiculously inefficient (especially with some types of drop patterns). I'll say it again, because I've been working on this pretty much daily for 10+ years and it still amazes me - you have to DROP PACKETS for any kind of end-to-end flow control to kick in.
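
A compressed illustration of that signalling loop (simplified to textbook AIMD; real TCP stacks are far more elaborate): the sender only learns anything when a drop finally happens.

    # TCP-style additive-increase/multiplicative-decrease, simplified:
    # the congestion window grows by one packet per RTT until a drop is
    # detected, then halves. The drop IS the flow-control signal.
    cwnd = 10.0                          # congestion window, in packets
    loss_pattern = [False] * 8 + [True] + [False] * 5
    for rtt, loss in enumerate(loss_pattern):
        cwnd = cwnd / 2 if loss else cwnd + 1
        print(f"rtt {rtt:2d}: {'LOSS ' if loss else ''}cwnd = {cwnd:.1f}")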

In Ethernet land, there are some technologies created by folks like Cisco that are collectively being called "lossless Ethernet" that will definitely help here, and provide end-to-end flow control. But I'll pretty much guarantee you don't have hardware that supports it at the moment.

Re:what it is (2)

ischorr (657205) | more than 3 years ago | (#35325894)

And you know, in my experience that's where one of the real problems - and one of the most commonly undiagnosed problems - exists. In nearly 99% of networks I've looked at where buffer overflows were occurring and drops were happening, network admins were not only unaware of the severity of the packet drops and the impact this was having on their *critical* workloads, they had no idea how to even look for it.

Re:what it is (3, Informative)

drinkypoo (153816) | more than 3 years ago | (#35322850)

Oh come on, you only have to follow four links to get to the definition [wordpress.com]. What are you, lazy?

Seriously, this is a true failure of web design. You click from the summary, then you go to the wiki, then you go to the FAQ, and the FAQ doesn't even tell you; it references a blog post.

Re:what it is (2)

mtaht (603670) | more than 3 years ago | (#35323220)

The web site has only been up for a month, and I agree with you very much that it is hard to get to the core ideas of bufferbloat from the get-go. We are incorporating information from dozens of very large blog posts and hundreds of comments, and have been very busy (among other things) getting hardware running and kernel patches done. bufferbloat.net IS a wiki, however, and registration is open to all. If you can help improve the quality, PLEASE join and do so.

Re:what it is (1)

jimrthy (893116) | more than 3 years ago | (#35325098)

If you're one of the ones behind bufferbloat.net (or even just one of the contributors), I want to say "Thank you."

Re:what it is (1)

mtaht (603670) | more than 3 years ago | (#35326294)

Well... yes, I'm one of the first 144 on bufferbloat.net. But jg defined bufferbloat so well that the packet traces I'd seen on my wisp6 design in South America suddenly made complete and total sense when I first saw his definition back in November. I'd had no idea that what I was dealing with was actually a worldwide problem. But thank you for the thanks. I blush.

Re:what it is (0)

Anonymous Coward | more than 3 years ago | (#35323076)

How is babby formed?

Solution: Use a proper protocol (aka ISO) (1)

Anonymous Coward | more than 3 years ago | (#35322642)

Instead of using TCP/IP (bastardized version of ISO), people should start using real OSI implementations such as the ISO protocol, with 20 byte addresses and QoS level settings for each of the 7 OSI layers.

Once upon a time it was an issue of the cost of hardware logic, and IP was the cheaper alternative; today the difference is nil, and the benefits of ISO are orders of magnitude greater than those of IP.

Bufferbloat, IP address exhaustion, etc. are just a few of the reasons why we should drop IP altogether.

Re:Solution: Use a proper protocol (aka ISO) (2)

MichaelSmith (789609) | more than 3 years ago | (#35322678)

The difference is that you can write an smtp server by reading in strings line by line and treating them as commands, then watch the logs and kludge it until it seems to interoperate well enough. With the OSI way of doing things you have to wear a blue tie for a start then you have to print out all the interface definition documents and spread them out on your desk and write the software to the interface.

You correctly point out that IP is cheaper, but that means all the people who work with it will be cheaper too and the product which is slightly cheaper will always win.

Re:Solution: Use a proper protocol (aka ISO) (3, Funny)

firstnevyn (97192) | more than 3 years ago | (#35322686)

The difference is that you can write an smtp server by reading in strings line by line and treating them as commands, then watch the logs and kludge it until it seems to interoperate well enough. With the OSI way of doing things you have to wear a blue tie for a start then you have to print out all the interface definition documents and spread them out on your desk and write the software to the interface.

man.. I want your desk if you can spread out all the iso interface definition documents on it and be able to read them

Re:Solution: Use a proper protocol (aka ISO) (1)

PPH (736903) | more than 3 years ago | (#35325368)

I'm not sure if this has anything to do with it (or we were just victims of slick salespeople):

Back in the early days of networking, when I was at Boeing, we (engineering) were starting to write some client-server stuff. Every time our IT folks approached us with ISO/OSI networking products as recommendations, there always seemed to be licensing fees attached. Per seat, per process, per user, per CPU, per whatever. While the software gurus were negotiating licenses and contracts, we just said "Screw it. Give me TCP/IP, a socket and get out of my way."

There may have been some unencumbered implementations out there. But we never saw them. If it wasn't carried in by a vendor in an expensive suit, it got no respect from Boeing Computer Services*.

*Same thing happened to Linux for a while. Everyone in the computing department wanted 'managing $100 million product portfolios' on their resume, not burning a bunch of free Slackware distro CDs.

Re:Solution: Use a proper protocol (aka ISO) (1)

Anonymous Coward | more than 3 years ago | (#35322698)

Erm, that doesn't seem to be the problem here IMHO. As per RTFA, the problem is a maladaptive response to packet loss: throwing cheap memory into dumb buffers that effectively break the whole packet-loss concept. Packet loss is not the enemy of throughput; it is the 'big idea' behind maintaining it. But sitting in a bloated buffer is the enemy of throughput. Seeing the Internet as a series of sealed pipes is the enemy of throughput. Missing the point completely and connecting huge dumb buffers into your OS, your router, your exchange, *everything*, is the enemy of throughput.

Re:Solution: Use a proper protocol (aka ISO) (1)

jimrthy (893116) | more than 3 years ago | (#35325118)

Why is this only rated a 1?

This may be the best summary of the problem that I've seen yet.

That is a battle which was lost 20 years ago (3, Interesting)

Colin Smith (2679) | more than 3 years ago | (#35322730)

A lot of our problems today would not be here if we had:

OSI stack instead of TCP/IP.
DCE & DFS instead of passwd/whatever + the bastard abomination which is NFS.

Meh. People are lazy and cheap. Free with the network effect always wins. The Lowest Common Denominator. It's going to take another 15 years before we are near where we were 15 years ago. But this time it will be in Java!
 

Re:That is a battle which was lost 20 years ago (0)

Anonymous Coward | more than 3 years ago | (#35324442)

What did TP4 say that would affect this behaviour?

Re:That is a battle which was lost 20 years ago (2)

thanasakis (225405) | more than 3 years ago | (#35325200)

OSI stack instead of TCP/IP

Can you please elaborate?

Re:That is a battle which was lost 20 years ago (1)

russotto (537200) | more than 3 years ago | (#35325758)

A lot of our problems today would not be here if we had:
OSI stack instead of TCP/IP.
DCE & DFS instead of passwd/whatever + the bastard abomination which is NFS.

Meh. People are lazy and cheap. Free with the network effect always wins. The Lowest Common Denominator. It's going to take another 15 years before we are near where we were 15 years ago. But this time it will be in Java!

Did you ever use those things? I've never used the OSI stack (though I have had the misfortune of looking at some of the specs), but DCE and DFS had terrible performance 15 years ago, and were a bear to set up. Having never worked with the originals (Kerberos and the Andrew File System), I don't know if this was a problem added in the "standardization" or if it came with the territory.

Re:That is a battle which was lost 20 years ago (1)

BitZtream (692029) | more than 3 years ago | (#35326046)

Having never worked with the originals (Kerberos and Andrew File System), I don't know if this was a problem added in the "standardization" or if it came with the territory.

I can't speak about historical implementations, but the current implementations of Kerberos used by Microsoft and the FreeBSD project (and, I assume, most modern implementations elsewhere) can be configured for a system with a 5-line config file that could be generated from the output of a `hostname -f` call, if the client is otherwise configured properly (has its domain name set properly). It does require a proper DNS setup, which can be obnoxious if you try to configure it by hand, but there again, it's an implementation issue. Microsoft's implementation (part of ActiveDirectory) pretty much just works out of the box and requires almost no effort to use.

Of course, you still have to sync user IDs :) Which is also simple with nss_ldap and ActiveDirectory + Services for Unix, but that's pretty trivial and could be done with a generic script that would work for almost everyone in the world out of the box.

There may be decent UNIX implementations of Kerberos and LDAP out there as well, but Microsoft has at least one implementation where actually deploying it doesn't suck for no apparent reason.

I have never dealt with AFS. Making everything on my network use Kerberos was pretty easy, but until it's the standard way of authenticating, I wouldn't expect anyone other than MS to put real effort into making it easy for an idiot... or just not a pain in the ass.

Re:Solution: Use a proper protocol (aka ISO) (0)

Anonymous Coward | more than 3 years ago | (#35322740)

No amount of standardized QoS is going to help when you can't trust the endpoints not to cheat. IP has an 8-bit field you can use for QoS too, but you can't tell realtime from bulk traffic to assign QoS classes.

Re:Solution: Use a proper protocol (aka ISO) (1)

dgatwood (11270) | more than 3 years ago | (#35323348)

Sure it will. You simply cap the customer at the bandwidth he/she is paying for and do QoS within the customer's allocation. Then you don't oversell the bandwidth. :-) Yeah, right. That'll happen.

Alternatively (and more realistically), you design things such that high priority packets must be sent isochronously (that is, every nth time slot) plus or minus a little jitter. A video codec will have no problem delivering frames at a constant data rate. Random bulk communications will not be able to do so, and those sockets will, after some threshold for maximum failures is exceeded, be automatically de-prioritized down to the level of bulk traffic.
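
A sketch of the isochrony test being proposed (all thresholds invented; this is the commenter's design idea, not an existing protocol):

    # A flow keeps elevated priority only while its inter-packet gaps stay
    # close to its declared period; too many misses demote it to bulk.
    PERIOD_S = 0.020                 # declared inter-packet period
    JITTER_S = 0.005                 # tolerated deviation
    MAX_MISSES = 10                  # failures allowed before demotion

    class FlowState:
        def __init__(self):
            self.last_arrival = None
            self.misses = 0
            self.priority = "isochronous"

        def on_packet(self, now: float):
            if self.last_arrival is not None:
                gap = now - self.last_arrival
                if abs(gap - PERIOD_S) > JITTER_S:
                    self.misses += 1
                    if self.misses > MAX_MISSES:
                        self.priority = "bulk"   # de-prioritized
            self.last_arrival = now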

You further penalize the endpoint by cutting it down to 56k modem speeds for a month if you see it making too many short-lived (<5s) connections to multiple IPs and declaring them isoch. By definition, any such connections are an abuse of the standard.

That should cover about 95% of the likely abuses.

Re:Solution: Use a proper protocol (aka ISO) (3, Interesting)

tepples (727027) | more than 3 years ago | (#35323538)

You further penalize the endpoint by cutting it down to 56k modem speeds for a month if you see it making too many short-lived (<5s) connections to multiple IPs and declaring them isoch. By definition, any such connections are an abuse of the standard.

But isn't that exactly how channel surfing would work on IP-based television?

Re:Solution: Use a proper protocol (aka ISO) (0)

Anonymous Coward | more than 3 years ago | (#35326324)

As he said, 95 percent of abuses. 95 percent of all internet traffic is watching videos after all! (I think 'reality porn' now outweighs 'skin porn' though. :D).

Re:Solution: Use a proper protocol (aka ISO) (0)

Anonymous Coward | more than 3 years ago | (#35323114)

Sure, I see all the major providers and ISPs out there, who have nothing to actually gain from this, jumping right on that bandwagon... Let me know when Comcast supports it, or when Apple implements it on the iPhone (where they still refuse to even add a simple IPv6 stack).

Re:Solution: Use a proper protocol (aka ISO) (1)

Mr. Slippery (47854) | more than 3 years ago | (#35323442)

Once upon a time it was an issue of cost of h/w logic

No, it was an issue of the ISO specs being bloated and incomprehensible. The human cost had much more to do with their failure than the hardware cost.

Two orders of magnitude! ? (0)

Anonymous Coward | more than 3 years ago | (#35322700)

Does that mean the network becomes a hundred times faster? Oh well, at least I can dream.

Re:Two orders of magnitude! ? (3, Interesting)

Lennie (16154) | more than 3 years ago | (#35322720)

It is about reducing latency. So it helps prevent problems with VoIP failing when there is a lot of other data flowing over the same connection(s).

Some buffers are really large and it turns out some introduce latency of several seconds (!).

Re:Two orders of magnitude! ? (1)

mtaht (603670) | more than 3 years ago | (#35323430)

Not just VoIP. Think: DNS, DHCP, ND, NTP & ARP.

Re:Two orders of magnitude! ? (4, Interesting)

Kjella (173770) | more than 3 years ago | (#35322776)

Most network throughput is at least 80-90% efficient already, so it won't get much faster. It will make it more responsive though, which is good if you're browsing the web, playing an online game or something else interactive.

I assume this is under load though, because on ping there's not much to be saved. On local sites I have 8-12 ms ping, on slashdot I have 140-150 ms. Since the theoretical round trip in a straight line at light speed is some 110 ms, there's not even room for a 50 ms drop. A lot of weirdness can happen under load though if stuff gets buffered up various places.

Re:Two orders of magnitude! ? (5, Informative)

mtaht (603670) | more than 3 years ago | (#35323412)

The core work where we saw latency under load drop by two orders of magnitude was in the wireless driver stack on Linux. Examples were the iwl driver (130ms to ~2) and the ath9k driver (> 200ms to ~d) - and these numbers were for GOOD connections, at high wifi rates. You can get three orders of magnitude improvement if you are on a slow wifi connection.

There's a new rate-sensitive algorithm for wireless (eBDP) that we are trying in this kernel tree. It's not fully baked yet. 802.11n wireless packet aggregation is HARD.

That said, there's bloat in all the other wired drivers too. We are doing far too much uncontrolled buffering in the kernel - specifically the DMA TX ring on many devices - for slower networks. As one example, a GigE interface connected to a 3Mbit cable modem does bad, subtle things to the stack.

Re:Two orders of magnitude! ? (1)

jimrthy (893116) | more than 3 years ago | (#35325154)

And that's exactly what this is about. That "weirdness [that] can happen under load..."

there are no situations (-1)

Anonymous Coward | more than 3 years ago | (#35322734)

everything is as presented by our loyal loving caretakers. just worry about your money. you still have the right to remain silent. that's it? that thing about starving babies whose limbs have been blown off having trouble feeding themselves? they'll adjust? they don't have any food anyway.

Do they have a solution (-1)

Anonymous Coward | more than 3 years ago | (#35322824)

for bloated developers too?

hired goons' loyalty to corepirate nazis failing? (-1)

Anonymous Coward | more than 3 years ago | (#35322826)

looks like there's some value to these creeps knowing that there's so many of us, that their days are #ed at best. that doesn't account for suicidal psychos, but we've noted that domestic hired goons are usually spineless hourly employees who 'work' so long as there's no chance of getting hurt/caught. like US?

Latency again (3, Insightful)

Twinbee (767046) | more than 3 years ago | (#35323258)

I've seen it time and time again: people just generally don't care about latency, or in many cases even deny it exists (bufferbloat is certainly one cause of latency).

Everything from changing channels with your TV remote, to number entry on a mobile phone, to the frame delays you get from LCD monitors, to the soundcard delay, to the GUI widgets you click on... it's all over the place, and it can wreck the experience, or degrade it somewhat according to how big the delay is. Just because latency is harder to measure doesn't mean it isn't very important, especially when it builds up with lots of other 'tiny' delays to make one big delay.

Re:Latency again (1)

mtaht (603670) | more than 3 years ago | (#35323366)

Latency issues were driving me insane upon my return to the USA. http://nex-6.taht.net/posts/Beating_the_speed_of_light_on_the_web/ [taht.net] The huge bandwidths advertised here, versus the actual "speed", reminded me of a scene in The Marching Morons [wikipedia.org], where someone steps into his hot-looking sports car, smoke and sounds come out like he's doing 100 mph, the speedo says the same, and then he looks outside the car to see the countryside slooooowly going by. Many Americans have confused "Bandwidth" with "Speed". Bandwidth = **capacity**, not speed. Latency - the time it takes to click on something and get a result - is far more important to your perception of speed. Which would you rather drive down the information superhighway? The Titanic? Or a Tesla?

Re:Latency again (0)

Anonymous Coward | more than 3 years ago | (#35323566)

Which would you rather drive down the information superhighway? The Titanic? or a Tesla?

It depends on what I'm doing. If I'm gaming or surfing I want to drive the tesla. If I'm doing uploads/downloads, I want the titanic. If people are surfing my website, tesla, if they're downloading some of my code modules, the titanic. If I've just started watching a movie on netflix, then tesla, if i want to be able to free up my pipe for torrents again while I watch my movie, the titanic (ie: buffer the film on the local device).

Can't I have a Tesla with trunk space [thetruthaboutcars.com]?

Re:Latency again (1)

timeOday (582209) | more than 3 years ago | (#35323672)

The bulk of Internet traffic in the US these days is streaming video. For that you need big bandwidth and big buffers, not low latency.

That said, I wish we could settle on an Internet-wide QoS implementation and get both. Some packets have a legitimate need to cut in line. It would be workable if ISPs advertised both 'total' bandwidth and a smaller amount of 'turbo' bandwidth, or whatever stupid name they want to use for it, which is the fraction of your bandwidth that is not over-subscribed. By setting the QoS bits you could prioritize part of your traffic.

Re:Latency again (3, Insightful)

mtaht (603670) | more than 3 years ago | (#35323796)

"The bulk of Internet traffic in the US these days is streaming video. For that you need big bandwidth and big buffers, not low latency. " Emphatically not true. ESPECIALLY for streaming video, you need a functioning feedback mechanism (tcp acks or ECN or some other mechanism) to slow down the video periodically, so that it *doesn't* overflow what buffers you have, and catastrophically drop all the packets in the queue, resulting in stuttering video.

Re:Latency again (3, Informative)

rcpitt (711863) | more than 3 years ago | (#35324766)

I deal with streaming video daily - from a producer, distributor and support point of view.

"Why does the web site load so slowly?" is the classic question - caused in many cases by the "eagleholic" having 4 live eagle nest video streams running in one window while trying to post observations and screencaps to the web site in another.

Believe me, there is ample reason to deal with the problem, as most of today's home networks are used for more than just one thing at a time. Mom is watching video, sis is uploading pictures of her party to Facebook, son is playing online games and dad is trying to listen to streaming audio - and NOTHING is working correctly, despite the fact that this is a trivial load for even a T1 (1.544Mbps), let alone today's high-speed cable (30Mbps down and 5Mbps up). We used to run 30+ modems and web sites and email and all manner of stuff over bonded 56K ISDN lines, for pity's sake - and we got better latency than the links today.

What's the problem? The latency for the "twitch" game packets has gone from 10ms to 4000ms or more; the isochronous audio stream is jerky because it's bandwidth-starved; the upload takes forever because the ACKs from FB can't get through the incoming video dump from YouTube (with its fast-start window pushed from the default 3 to 11 or 12); and by the time the video is half over, the link to YouTube has dropped, because it took 30 seconds or more for the buffer to drain after the first push and the link had timed out.

That's the problem - you need low latency for some things at the same time you need high throughput for others. It is possible, and can be done - and IS done, if things are tuned correctly. But correctly tuning the use of buffers is an art today, not a science, and the ever-changing (by 3-4 orders of magnitude) needs of today's end-point routers have pushed the limits of the AQM (active queue management) algorithms currently available, even if they're turned on (which in most cases they're not, it seems).

Re:Latency again (1)

PPH (736903) | more than 3 years ago | (#35323402)

Yeah. But if it messes with my first post status, I want it fixed!

Re:Latency again (1)

rcpitt (711863) | more than 3 years ago | (#35324596)

And that is exactly the problem with QoS that is under the control of someone who has a stake in the outcome.

"I want everything louder than everything else" (Meat Loaf) epitomizes the net today - we have Google screwing with the fast start window and Microsoft pretty much ignoring it and setting it as large as possible in some cases (they do other things right though it seems)

The buffer bloat problem is one born of history and ignorance:

History - it used to be that we could not put enough buffer RAM into the device because it was too expensive - so we designed our algorithms to use all that was available "because there's never enough."

Ignorance - we now have a generation of network "engineers" who have grown up not having to deal with really congested networks (until very recently) and simply don't bother to turn on things like RED (Random Early Detection) in their router products or ECN (Explicit Congestion Notification) on their servers and links - or ensure that ECN is actually passed through and not zero'd. Now they don't even recognize the problem and we have to teach them (and get them to un-learn their bad habits - like "any packet loss is bad")

5 months ago, my prototype of our old company's first-generation embedded Linux router software, running on an old '486 chassis, finally died. When I replaced it with a recent D-Link, my connection from my home office to the world went from quite acceptable to almost useless whenever I was up- or downloading anything larger than a couple of hundred K. The buffer on the router fills up and the latency goes from 10-20ms to 4000ms (4 seconds) - and my streaming radio stops, my xload monitors and other remote monitors on servers stop, and my nagios system checks go off, thinking that the remote systems are down.

This is unacceptable - and it is a LOCAL problem. In systems where the ISP's equipment is to blame, there's little I can do but rate-limit my connection to something under the threshold of pain.

Even turning on the router's "QOS" setting (switch, not knob - no control over parameters) that should give me "better gaming results" does not eliminate the problem or do much at all.

Bufferbloat is real - but the good news is that it can be fixed if we start asking the right questions of our suppliers and get them to admit there is a problem. The Bufferbloat community is in the process of putting together test facilities to help you, the ISPs, and the manufacturers get definitive information on the problem; things like a mixed-mode latency/throughput test that measures 2 or 3 different stream types at the same time instead of just raw "bits through the pipe from source to destination".

I expect that even if ISO had won the war (and I was there in the trenches at the time the war was being fought) we'd have come to this point at some time - but IMHO that would have been some time in the next century, as the ISO "standards" regime (and cost) was a huge damper on development and deployment. We would not have had the digital revolution at all if it were ISO-anchored.

Re:Latency again (1)

Idbar (1034346) | more than 3 years ago | (#35324026)

The problem is not just latency. It's latency AND packet losses, which can dramatically reduce the available capacity for TCP flows, particularly if the router is not well designed and there's no algorithm in place to compensate for the suboptimal design.

A poorly designed router can sabotage the performance of TCP, causing overall slowness in your connections - particularly those 10Mbps you're paying for and want to work properly.

Re:Latency again (1)

rcpitt (711863) | more than 3 years ago | (#35324820)

"Any packet loss is bad" - that's the mantra I get from network engineers - and then the idiots don't turn on ECN (Explicit Congestion Notification) or run some bad-ass piece of crap that resets the ECN that is already on the packets they're transiting - or their routers don't respect the notifications or...

(Reasonable) packet loss or ECN - pick one - and then tell your up and downstream neighbors why you picked it (hopefully ECN will find its way into near 100% deployment ASAP) and why they should respect it and follow on.

Then, when the Bufferbloat gurus get the testing systems working, test and report, so we can do our jobs and let the world know good/bad setups and such.

Lead, Follow, or get the hell out of the way

Every packet is sacred (4, Funny)

RevWaldo (1186281) | more than 3 years ago | (#35324246)

Every packet is sacred.
Every packet is great.
If a packet is wasted,
TCP gets quite irate.

Let the heathen drop theirs
When their RAM is spent.
TCP shall make them pay for
Each packet that can't be sent.

Every packet is wanted.
To this we are sworn.
From real-time data from CERN
To the filthiest of porn.

Every packet is sacred.
Every packet is great.
If a packet is wasted,
TCP gets quite irate.


Re:Every packet is sacred (1)

mtaht (603670) | more than 3 years ago | (#35324320)

Oh, that was a wonderful parody of an already wonderful parody - thanks for that. I'd probably modify it a little for accuracy; would you let me paste a copy over to our humor page? http://www.bufferbloat.net/projects/bloat/wiki/Humor [bufferbloat.net] The bufferbloat problem is so big - hundreds of millions of devices today and millions more in the pipeline - that if we didn't laugh sometimes, we'd explode.

Re:Every packet is sacred (1)

RevWaldo (1186281) | more than 3 years ago | (#35324512)

Glad to be of assistance, however small. Cheers!

Re:Every packet is sacred (0)

Anonymous Coward | more than 3 years ago | (#35324470)

(From Ross Olson [rossolson.com] )

Every Packet’s Sacred

With apologies to Messrs. Palin & Jones, Cerf & Metcalf

There are POTS in the world.
There are Tokens.
There are Arcnets and fibre channels, and then
There are those that send in Morse, but
I’ve never been one of them.

I’m an Ethernet Router,
And have been since before I was spec’d,
And the one thing they say about Routers is:
The data they pass on is perf’ct.

You don’t have to be hundred meg’ T.
You don’t have to have a VPN.
You don’t have to have DHCP on. You’re
A Router the moment you're plugged in!

Because
Every packet’s sacred.
Every packet’s great.
If a packet is wasted,
Vint gets quite irate.

Let the Belkins ‘jack theirs -
and have a pop-up spawn.
Metcalf’ll make them pay for
Each packet that’s not passed on.

Every packet’s wanted.
Every packet’s good.
Every packet’s needed
In your neighbourhood.

RIAA, Studios, Lawyers,
Want to block yours any time,
But Vint loves those who treat their
Packets as sublime.

Every packet’s sacred.
Every packet’s great.
If a packet is wasted,
Vint gets quite irate.

Every packet’s useful.
Every packet’s fine.
the Net sends ev’rybody’s.
Mine!
And mine!
And mine!

Let the Censors spill theirs
O’er “beaver”, “breast”, and “cocaine”.
Metcalf’ll strike them down for
Each packet that’s dropped in vain.

Every packet’s sacred.
Every packet’s good.
Every packet’s needed
In your neighbourhood.

Every packet’s sacred.
Every packet’s great.
If a packet is wasted,
Vint gets quite iraaaaaate!

Re:Every packet is sacred (1)

CAIMLAS (41445) | more than 3 years ago | (#35325864)

Best. Comment. Ever. That's going on my door at work - thanks.

Re:Every packet is sacred (0)

Anonymous Coward | more than 3 years ago | (#35326028)

Actually, the TCP algorithm produces this latency because it expects packets to be dropped rather than buffered. The lack of dropped packets prevents TCP from throttling back, and it instead keeps blasting the poor router with packets. This prevents anything real-time from getting through properly, because there's a huge buffer of queued packets in the way of the more important stuff. TCP would be happy if all these routers weren't using XBOX HEUG buffers.

Most delays are due to the ethernet packet buffers (1)

uksv29 (167362) | more than 3 years ago | (#35324690)

Most delays are due to users connecting to their ADSL modem via Ethernet and not managing traffic properly.

On a congested link this can cause large delays: Ethernet devices normally get a 1000-packet transmit queue in the Linux kernel, and the ADSL modem has a similar buffer of its own. You only need a couple of heavy connections that want to go faster than the ADSL will support and those buffers fill up very fast. You can easily end up with latencies measured in seconds if you have a lot of connections running (say, bittorrent).
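To put numbers on that: 1000 packets at 1500 bytes each is about 12 Mbit of queue, which at a 512 kbit/s upstream works out to roughly 23 seconds of potential delay. One quick mitigation is simply shrinking the interface's transmit queue (the value here is only illustrative - too small and you'll sacrifice throughput):

    # shrink the default 1000-packet transmit queue on the interface facing the modem
    ip link set dev eth0 txqueuelen 16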

There are several solutions to this, but the best in my experience is to change the queueing discipline to SFQ and rate-limit using HTB. This has been in the kernel for years and works extremely well. You need to limit the traffic upstream and downstream to slightly less (about 5% less) than the ADSL link speed; this ensures that the modem itself never queues traffic. Uplink you can use all sorts of fancy queueing, but downlink all you can really do is police traffic, unless you install the IMQ patch to the kernel.
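The core of the recipe looks something like this - a sketch only, with rates assuming a hypothetical 2048/512 kbit line, shaved by about 5% in each direction:

    #!/bin/sh
    DEV=ppp0
    UP=486kbit      # ~95% of a 512 kbit upstream
    DOWN=1946kbit   # ~95% of a 2048 kbit downstream

    # egress: HTB rate limit with an SFQ leaf so no single flow hogs the queue
    tc qdisc add dev $DEV root handle 1: htb default 10
    tc class add dev $DEV parent 1: classid 1:10 htb rate $UP ceil $UP
    tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10

    # ingress: without IMQ all we can do is police - drop what arrives too fast
    tc qdisc add dev $DEV handle ffff: ingress
    tc filter add dev $DEV parent ffff: protocol ip u32 match u32 0 0 \
        police rate $DOWN burst 10k drop flowid :1

Shaping below the link rate moves the queue out of the modem (where we can't control it) and into the Linux box (where SFQ can keep it fair and short).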

I have a script which I got from somewhere a while ago - I don't remember where, though. I've put it at http://ams1.x31.com/~andy/ppp0-ratelimit.sh if anyone wants to look at it. It expects to work on ppp0 but can be adapted as required.

I've played around a lot more with Linux kernel disciplines recently and they have produced surprising performance on congested links. One link runs mail, remote access and Internet access over a 1Mbit symmetric link for about 60 users. In the morning it hits 95% link capacity at the start of work and stays there until everyone goes home, but ssh sessions remain fully interactive without noticeable lag the whole time. Yes, web browsing is a little slow, but it is the same for everyone, and one user can't flood the link and upset everyone else.

Linux QoS is the future; pity about the documentation.

Buffer bloat is (not) an illusion... (2)

WaffleMonster (969671) | more than 3 years ago | (#35325306)

For home users with a Linux router: set up an HTB queue with a maximum egress rate to the modem a little less than your sustained upstream rate. At least this worked for me... I've never had problems with a saturated upstream causing huge lag since doing this.

After reading this guy's bufferbloat rant I largely agree with him, with some exceptions:

1. What do multiple TCP sessions have to do with circumvention of congestion avoidance? TCP congestion avoidance needs to work with lots and lots of TCP sessions at once, not just one or two. HTTP 1.1 sessions need NOT be short-lived. I don't see why a large number of TCP sessions can't all be subject to congestion avoidance, each responding individually to the conditions it sees. How does this work to effectively bypass congestion avoidance? I've seen this talking point in a few places but no one has ever explained WHY this is so. I can see an argument based on a static suboptimal initial congestion window, but HTTP 1.1 supports pipelining...

2. Two connections per server is not sufficient for browsers. TCP is a stream protocol with head-of-line blocking... High-latency links will never use the available bandwidth properly unless they either use lots of sessions or start with massive windows, which is not good for congestion. There is also the problem of ordering dependencies among resource requests within the web content. Without lots of concurrent fetches the user waits longer for page loads. The presentation sounded to me like someone either not understanding necessary details of TCP and higher-layer considerations, or trying to have their cake and eat it too.

Lastly, we don't need to replace HTTP - we need to replace TCP... HTTP over SCTP would be a much more significant improvement than any reasonable change to HTTP. No matter what you do to HTTP, you still have to live with the underlying transport's limitations!

Re:Buffer bloat is (not) an illusion... (1)

mtaht (603670) | more than 3 years ago | (#35326534)

Sort of in answer to both of your questions, the bufferbloat.net servers are configured as follows:

http://www.bufferbloat.net/projects/bloat/wiki/Dogfood_Principle [bufferbloat.net]

We tried at every point to make sure HTTP 1.1 actually got used.

We survived today's slashdotting. Handily.

That said, your points are well made. SPDY is part of the Chromium browser and looks to have some potential.

In my case, I like the idea of smarter - and eventually SCTP-enabled - proxies, especially on wireless hops. See the thread at:

https://lists.bufferbloat.net/pipermail/bloat/2011-February/000068.html [bufferbloat.net]

Re:Buffer bloat is (not) an illusion... (2)

jg (16880) | more than 3 years ago | (#35327318)

Re: 1. I've always thought that the congestion window to the same endpoint should be shared, but that's not the way TCP implementations work, and wishing they worked that way won't make the problem go away. And, as I've shown, bufferbloat is not a TCP phenomenon in any case.

Re: 2. HTTP is a lousy protocol in and of itself, and having to run it on top of TCP makes things yet harder. It is the fact that HTTP is so ugly that makes so much else difficult. And I disagree with your claim that high-latency links won't use the bandwidth; in fact, lots of sessions just makes things harder. You can read our HTTP/1.1 SIGCOMM performance paper.

I'll be writing more on this topic this week.

And we do need to replace HTTP, and something other than TCP would be highly desirable. Personally, I'm much more fond of CCNx than any IP-based transport.
