
Google, Microsoft Cheat On Slow-Start — Should You?

Soulskill posted more than 3 years ago | from the reply-hazy-ask-again dept.

The Internet 123

kdawson writes "Software developer and blogger Ben Strong did a little exploring to find out how Google achieves its admirably fast load times. What he discovered is that Google, and to a much greater extent Microsoft, are cheating on the 'slow-start' requirement of RFC-3390. His research indicates that discussion of this practice on the Net is at an early, and somewhat theoretical, stage. Strong concludes with this question: 'What should I do in my app (and what should you do in yours)? Join the arms race or sit on the sidelines and let Google have all the page-load glory?'"


I do. (4, Funny)

cgomezr (1074699) | more than 3 years ago | (#34351692)

Without cheating, I wouldn't get the first post.

Re:I do. (2, Funny)

Tridus (79566) | more than 3 years ago | (#34351738)

Is this possibly the first ever on-topic "first!" post?

Re:I do. (3, Informative)

Jello B. (950817) | more than 3 years ago | (#34351914)

not even close, sorry

Re:I do. (-1, Troll)

Anonymous Coward | more than 3 years ago | (#34352120)

Sorry, but you're wrong.

Re:I do. (0)

Anonymous Coward | more than 3 years ago | (#34352340)

They try to make "first!" relevant, but it's still a useless "first!" post.

Re:I do. (3, Informative)

Tubal-Cain (1289912) | more than 3 years ago | (#34354980)

Not even the first one this week [slashdot.org] .

No Cheating is the Third Rule (4, Insightful)

Oxford_Comma_Lover (1679530) | more than 3 years ago | (#34351990)

The Third rule of network design, for a moral being, is to consider the moral, ethical, and legal consequences of any atypical changes you make to your behavior.

Why the Third rule?

Because the first rule is to figure out what on earth is going on--not just in theory, but in fact. Code for the OSI model is ugly, perhaps by necessity (it has to be very fast), but it's code that is very, very easy to get wrong. It involves a lot of interacting pieces working on different levels of abstraction with other players that you don't have code control over.

The second rule is to realize when the first rule means that you shouldn't touch the stuff. Google and Microsoft have the engineering competence to mess with it--MSFT even should be messing with it, in terms of looking for ways to improve their behavior in a community-friendly way. Because they write the code that handles a huge portion of connections, and let's face it, TCP/IP just isn't designed for lots of things: AJAX or broadband, for example.

The third rule is to consider the moral and ethical and legal consequences of changes.

Only after at least these three steps should someone make changes that involve connections that go beyond the computers they control.

Re:No Cheating is the Third Rule (2, Informative)

pthisis (27352) | more than 3 years ago | (#34352088)

Because the first rule is to figure out what on earth is going on--not just in theory, but in fact. Code for the OSI model is ugly, perhaps by necessity (it has to be very fast), but it's code that is very, very easy to get wrong. It involves a lot of interacting pieces working on different levels of abstraction with other players that you don't have code control over.

TCP/IP predates the OSI model and conflicts with it in some areas; discussion of the complexities of code targeting OSI isn't directly applicable to TCP/IP implementations, though many similarities exist.

Indeed, the fact that TCP/IP has fewer layers is often cited as one reason that it succeeded (coding an implementation of TCP/IP therefore being less complex than coding a fully abstracted 7-layer OSI implementation).

Re:No Cheating is the Third Rule (2, Insightful)

teridon (139550) | more than 3 years ago | (#34356118)

Isn't it time /. got a "-1 Reply Abuse" mod? The parent reply has nothing to do with the GP. It's on topic, and maybe it deserves the "Insightful" mod -- but it's replying to the top post just to appear at the top of the page. STOP THE MADNESS!

Re:No Cheating is the Third Rule (0)

Anonymous Coward | more than 3 years ago | (#34357188)

if your reply isn't in the top 3 threads of the page, there's no point posting it, as it won't get read. If you don't use reply abuse, there's no point posting. Reply abuse FTW.

Re:No Cheating is the Third Rule (0)

wampus (1932) | more than 3 years ago | (#34358088)

And if your ad isn't making noise and getting in the way, it won't get noticed. Go kill yourself.

Misread the RFC (5, Informative)

Spazmania (174582) | more than 3 years ago | (#34351708)

RFC 3390 uses the "MUST" terminology exactly one place: when describing behavior after a packet is lost during the syn/synack. It doesn't use the phrase "MUST NOT" anywhere.

In every other respect slow-start is recommended but optional. Google is in no way breaching the standard by not using it.

Re:Misread the RFC (4, Informative)

H0p313ss (811249) | more than 3 years ago | (#34351764)

RFC 3390 uses the "MUST" terminology exactly one place: when describing behavior after a packet is lost during the syn/synack. It doesn't use the phrase "MUST NOT" anywhere.

In every other respect slow-start is recommended but optional. Google is in no way breaching the standard by not using it.

I just logged in to say exactly the same thing. Not implementing an optional variant is not cheating. Nothing to see, move along.

Re:Misread the RFC (4, Insightful)

Lunix Nutcase (1092239) | more than 3 years ago | (#34351768)

No, this was just kdawson trying to fill his FUD quota for the day. He's a little behind.

Re:Misread the RFC (5, Insightful)

da cog (531643) | more than 3 years ago | (#34351816)

Yes, and for a post complaining about cheating I am mildly annoyed that he himself cheated his way around my "filter all posts made by editor kdawson" setting by submitting his story as a normal user and then getting another editor to post it.

Re:Misread the RFC (3, Insightful)

Lunix Nutcase (1092239) | more than 3 years ago | (#34351840)

He probably knows he's being filtered by more and more people.

Re:Misread the RFC (1, Funny)

Anonymous Coward | more than 3 years ago | (#34352796)

So are you.

Re:Misread the RFC (0)

Anonymous Coward | more than 3 years ago | (#34356772)

I think most of us have built-in filtering, akin to selective perception. Maybe you should complain about the goon who forces you to read every article on Slashdot instead?

Re:Misread the RFC (2, Funny)

greed (112493) | more than 3 years ago | (#34352850)

Ah. I was a bit surprised to see this is a kdawson story for exactly that reason. Thanks.

Where's my bigger hammer?

Re:Misread the RFC (0)

Anonymous Coward | more than 3 years ago | (#34353450)

Not implementing an optional variant is not cheating. Nothing to see, move along.

Re:Misread the RFC (3, Interesting)

u38cg (607297) | more than 3 years ago | (#34356474)

I thought he'd been sacked. I don't have him filtered (I like them where I can see them) and I haven't seen his stories for ages, or indeed anyone complaining about them :)

Re:Misread the RFC (1)

Skal Tura (595728) | more than 3 years ago | (#34358058)

Maybe he cheated his way back in as well? ;)

Re:Misread the RFC (5, Informative)

fluffy99 (870997) | more than 3 years ago | (#34353150)

Not sure why you got modded informative, since the original poster and your "me-too" are both wrong. RFC 3390 is an extension of RFC 2581. RFC 3390 says you MAY use an IW of up to 4 segments. If you don't use this option, you fall under RFC 2581, which says the IW MUST be less than or equal to 2 segments.

http://www.rfc-editor.org/rfc/rfc3390.txt [rfc-editor.org]
http://www.rfc-editor.org/rfc/rfc2581.txt [rfc-editor.org]

Re:Misread the RFC (1)

Skal Tura (595728) | more than 3 years ago | (#34358046)

Then how can the average server admin take advantage of this? ;)

Re:Misread the RFC (2, Informative)

Anonymous Coward | more than 3 years ago | (#34351792)

RFC 3390 defines the upper bound for the initial window to be min (4*MSS, max (2*MSS, 4380 bytes)), so it doesn't need to use "MUST NOT" to forbid larger initial window sizes.
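Plugging typical MSS values into that bound shows why three segments is the usual ceiling (a quick sketch in Python; the function name is just for illustration):

```python
def rfc3390_initial_window(mss: int) -> int:
    """Upper bound on TCP's initial congestion window in bytes,
    per RFC 3390: min(4*MSS, max(2*MSS, 4380 bytes))."""
    return min(4 * mss, max(2 * mss, 4380))

# Typical Ethernet MSS of 1460 bytes: the cap is 4380 bytes, i.e. 3 segments.
print(rfc3390_initial_window(1460))  # 4380
# Small MSS of 536 bytes: 4 full segments (2144 bytes) are allowed.
print(rfc3390_initial_window(536))   # 2144
```

The 4380-byte constant is exactly 3 × 1460, so on a standard Ethernet path the "up to 4 segments" allowance works out to 3 full-size segments.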

Re:Misread the RFC (4, Insightful)

Spazmania (174582) | more than 3 years ago | (#34351872)

IETF uses the capitalized MUST/MUST NOT terminology for a reason. It's used anywhere an implementer could reasonably do something else but for some reason isn't allowed to. Where it isn't present, it isn't required. If the authors omitted that terminology even after referencing RFC 2119 in a standards track modification to such a widely used protocol, they did so because the entire modification is optional.

Re:Misread the RFC (3, Informative)

Anonymous Coward | more than 3 years ago | (#34351898)

Indeed the modification is optional. It explicitly says so in the RFC. However, without the modification an even smaller initial window is set by the previous definition, which comes with all the MUSTs and MUST NOTs you can throw at an implementer.

Re:Misread the RFC (1)

Spazmania (174582) | more than 3 years ago | (#34352124)

Reference please? I'm afraid I'm not up on the sequence of TCP RFCs so I don't know where to find the "previous definition."

Re:Misread the RFC (3, Informative)

Anonymous Coward | more than 3 years ago | (#34352190)

Do you always have other people do your homework?

From RFC3390 (that's the one we're discussing):
"This document obsoletes [RFC2414] and updates [RFC2581] and specifies
  an increase in the permitted upper bound for TCP's initial window
  from one or two segment(s) to between two and four segments."

I'd start with the one which RFC3390 updates.

Re:Misread the RFC (2, Informative)

fluffy99 (870997) | more than 3 years ago | (#34353172)

Learn how to use Google man!

http://www.rfc-editor.org/rfc/rfc3390.txt [rfc-editor.org]
http://www.rfc-editor.org/rfc/rfc2581.txt [rfc-editor.org]

Re:Misread the RFC (3, Insightful)

Spazmania (174582) | more than 3 years ago | (#34353218)

What, are you stupid?

"Document A doesn't say what you claim."

"Yeah, but there's a previous document which does."

"What previous document is that?"

"Hur, learn to use google dude."

Re:Misread the RFC (1)

gringer (252588) | more than 3 years ago | (#34355128)

Learn how to use Google man!

Maybe they tried, but their router rejected the connection from Google because it was sending too many packets in the initial window.

Re:Misread the RFC (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34353916)

IETF uses the capitalized MUST/MUST NOT terminology for a reason. It's used anywhere an implementer could reasonably do something else but for some reason isn't allowed to. Where it isn't present, it isn't required

This is complete nonsense.

The sign of a good RFC writer is not littering a document with MUST *** terminology. After a certain threshold it gets really old and implementors begin to ignore you.

If there is a magic value defined in an RFC, or an algorithm used in a certain way, more often than not it will NOT say that you MUST call the algorithm in this order with these special parameters. If, however, you don't follow the specification, you should not expect your implementation to work at all.

Recommendations often have very significant side effects if they are not followed. No wording in an RFC should ever be construed as a substitute for using one's brain and understanding the underlying basis upon which the specification was arrived at.

If an outfit like Google has a better way of characterizing the link, for example by keeping track of metrics obtained from recent connection histories, then good for them.

If however they are just turning off congestion control because it makes "their" site faster, with the justification that it "usually" isn't necessary, then fuck them.

It seems to me the single worst thing one could do in a congested environment is add more connections with no realtime requirements in a non-congestion-avoidant manner.

Until I see simulation results to the contrary (which is Google's burden to supply), I will just assume any instance of ignorant circumvention of slow-start is Google being Evil.

Re:Misread the RFC (1)

petermgreen (876956) | more than 3 years ago | (#34351884)

This increased initial window is optional: a TCP MAY start with a larger initial window. However, we expect that most general-purpose TCP implementations would choose to use the larger initial congestion window given in equation (1) above.

Re:Misread the RFC (3, Informative)

Anonymous Coward | more than 3 years ago | (#34352012)

Yes, that means you're free to use the (smaller) limit from the older RFC or the new (larger) one from RFC3390. The authors expect most implementations to use the new one, which would allow Google to send 3 packets without waiting for ACKs. Google sends up to 9.

Re:Misread the RFC (1, Informative)

Anonymous Coward | more than 3 years ago | (#34351890)

From the RFC.
      This document obsoletes [RFC2414] and updates [RFC2581] and specifies
      an increase in the permitted upper bound for TCP's initial window
      from one or two segment(s) to between two and four segments.

So it was officially increased in 2002.
Maybe back then an initial window of 2-4 segments seemed reasonable.
Maybe the official standard is due for an update.
For some reason I am not indignant about this news.

Re:Misread the RFC (3, Insightful)

Spazmania (174582) | more than 3 years ago | (#34352452)

Kay, so I've poked through the RFCs a bit...

TCP first defined in RFC 793. No slow start; implementations generally send segments up to the window size negotiated in SYN exchange which is generally the smaller of the speakers' two buffers.

Slow start first referenced in RFC 1122 (Internet host requirements) as: ''Recent work by Jacobson [ACM SIGCOMM-88] on Internet congestion and TCP retransmission stability has produced a transmission algorithm combining "slow start" with "congestion avoidance". A TCP MUST implement this algorithm.''

At this point in the process there does not appear to be an RFC specifying TCP slow start, making this statement, in a document that is not itself about TCP per se, very dubious.

A decade later, RFC 2001 says: "Modern implementations of TCP contain four intertwined algorithms that have never been fully documented as Internet standards: slow start, congestion avoidance, fast retransmit, and fast recovery." The word "must" is subsequently used in connection with congestion avoidance but is not used in connection with slow start.

RFC2414 then revisits the question of TCP's initial window size selection referencing RFC 2001 but again declines to state that TCP "must" start with a small window.

RFC 2581 finally sets an unambiguous slow start requirement: The slow start and congestion avoidance algorithms MUST be used by a TCP sender [...] IW, the initial value of cwnd, MUST be less than or equal to 2*SMSS bytes and MUST NOT be more than 2 segments.

However, even as it does so, it goes on to comment that, "We note that a non-standard, experimental TCP extension allows that a TCP MAY use a larger initial window [...] We do NOT allow this change as part of the standard defined by this document. However, we include discussion [...] in the remainder of this document as a guideline for those experimenting with the change, rather than conforming to the present standards for TCP congestion control."

In other words, even though out of the box TCPs MUST implement slow start, it's understood that other behaviors are in use and are expected to continue.

Finally, RFC 3390 allows the out-of-the-box behavior of TCP to use a larger initial window than 2581.

Conclusion: Google still isn't cheating.
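The progression traced above can be reduced to numbers (a sketch assuming a 1460-byte MSS; the 9-segment figure is the behavior the article reports observing from Google, not an RFC limit):

```python
MSS = 1460  # typical Ethernet TCP maximum segment size, in bytes

# Initial window (IW) ceilings as the standards evolved:
iw_2581 = 2 * MSS                           # RFC 2581: MUST NOT exceed 2 segments
iw_3390 = min(4 * MSS, max(2 * MSS, 4380))  # RFC 3390: optional larger bound
iw_goog = 9 * MSS                           # reportedly observed from Google servers

for label, iw in [("RFC 2581", iw_2581), ("RFC 3390", iw_3390), ("observed", iw_goog)]:
    print(f"{label}: {iw} bytes ({iw // MSS} segments)")
```

At this MSS, the jump from the RFC 3390 ceiling to the observed behavior is a factor of three, which is the gap the article is calling "cheating."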

Re:Misread the RFC (0)

Anonymous Coward | more than 3 years ago | (#34352588)

It looks like your earlier insistence on a MUST requirement for there to be a violation, and your later realization that there is indeed a MUST requirement to start slow, have caused you some cognitive dissonance. Google is not "experimenting" with a TCP implementation which violates the set standards for congestion control. Google (and Microsoft even more so) have deployed a standards-violating TCP implementation in a production environment. If that isn't cheating then we might as well call the TCP RFCs optional altogether.

Re:Misread the RFC (1)

Spazmania (174582) | more than 3 years ago | (#34352728)

Had you just quoted a standards document that says "Here's how it's supposed to be done, and now we'll offer some suggestions for all of you who decide not to do it this way," I'm not so sure I'd be so quick to accuse you of being the source of any cognitive dissonance.

Re:Misread the RFC (2, Interesting)

Nick Ives (317) | more than 3 years ago | (#34353436)

We do NOT allow this change as part of the standard defined by this document.

Seems fairly unambiguous to me.

People have been gaming slow-start for yonks; I remember when you could ACK flood a server to increase your download speed. Server admins hated it because it slowed the site down for everyone else.

Re:Misread the RFC (1)

presidenteloco (659168) | more than 3 years ago | (#34352724)

Are you sure that the experimental extension being referred to in RFC 2581 is not the one that was later formalized as RFC 3390, whose larger limits are still apparently being violated by Microsoft, Google, and several others?

Also, the RFC 2581 standard noted that "a NON-STANDARD EXPERIMENTAL TCP extension allows...bigger"

Experimentation might be permissible, but vast-scale operational use of the non-standard extension by Google & Microsoft cannot be described as experimentation, and therefore is clearly not contemplated, nor countenanced, by RFC 2581.

Unless you believe that Japanese "scientific whaling" is actually thousands and thousands of experiments on whales, that is.

Re:Misread the RFC (2, Funny)

halivar (535827) | more than 3 years ago | (#34351958)

IOW, RTFRFC.

Re:Misread the RFC (1)

ChipmunkDJE (1231596) | more than 3 years ago | (#34352062)

So then I guess everybody should just skip slow-start then? If Google and Microsoft can and are having tremendous results, why shouldn't everybody? Heck, why is slow-start even still around then? It should be tossed to the wayside like a Colecovision if it's optional and gets in the way of your performance...

Re:Misread the RFC (1)

Spazmania (174582) | more than 3 years ago | (#34353246)

Heck, why is slow-start even still around then?

Now that, my friend, is a VERY good question.

Re:Misread the RFC (1)

samson13 (1311981) | more than 3 years ago | (#34354034)

So then I guess everybody should just skip slow-start then? If Google and Microsoft can and are having tremendous results, why shouldn't everybody? Heck, why is slow-start even still around then? It should be tossed to the wayside like a Colecovision if it's optional and gets in the way of your performance...

Slow start probably should be skipped for most well-tuned websites. Most HTTP connections are short-lived enough to never ramp up to the available bandwidth or saturate queues, so why use an algorithm designed to keep queues small while trying to efficiently use bandwidth?

I think the slow start concept would still be useful for bulk transfer services.. If you are serving a couple of gig ISO images then you probably don't care about a bit of round trip time latency if it means you don't clobber router queues downstream. I could imagine congestion collapse would be more likely with this load.

Bittorrent should probably use slow start. Often the competition for Bittorrent connections is other connections for the same torrent. If we start too fast we could impact too many of these connections, causing them to back off and hurting overall performance.

I'd guess that the magic numbers that were picked for slow start when the RFC was written are no longer applicable. RTT is shorter, queues are probably longer (near the edges anyway) but the queues are probably shorter in terms of time. i.e. less consequence for a dropped packet, less likely to fill a queue and less of a performance hit if we do fill the queue..

Google's choice of initial window size would be well considered. If google's tuning impacted network performance then it would be causing packet loss to their own connections causing the latency to go up due to retries..

Similarly, Microsoft's initial window size seems a bit ridiculous, so I'd bet it is either:
1) A mistake that is causing overall lower performance for their users.
2) Coarse tuning that helps for the front page (so helps in general) but causes lower performance for bigger pages.
3) They are doing some sort of window size caching and that number was cached from previous connections.
I did note that there were no retransmissions in the MS flow, so the last doesn't seem like a bad guess. They don't support SACK (WTF), so that would slow things down if they lost packets.

Re:Misread the RFC (1)

msauve (701917) | more than 3 years ago | (#34352118)

Even simpler. The very first line of the abstract says "This document specifies an optional standard..." The whole thing is a "MAY."

Re:Misread the RFC (1)

hasdikarlsam (414514) | more than 3 years ago | (#34353540)

As people keep saying, you *may* use this optional standard... as a replacement for an older standard where startup is *even slower*.

Odd how that goes.

Re:Misread the RFC (2, Informative)

James Youngman (3732) | more than 3 years ago | (#34353458)

I would have got first post... (1, Funny)

Anonymous Coward | more than 3 years ago | (#34351712)

...if it wasn't for slow start. Damn you, cwnd!

lol kdawson (5, Interesting)

Lunix Nutcase (1092239) | more than 3 years ago | (#34351728)

So kdawson couldn't post this FUD himself? He needed Soulskill to do it for him?

Re:lol kdawson (1)

canajin56 (660655) | more than 3 years ago | (#34351902)

Obviously, his answer to the question of "should you cheat, too?" is "yes", and he started by cheating his way around my preferences that exclude all kdawson articles ;)

Re:lol kdawson (0)

Anonymous Coward | more than 3 years ago | (#34352206)

I don't think this qualifies as FUD, just poppycock.

Re:lol kdawson (3, Interesting)

Morty (32057) | more than 3 years ago | (#34352676)

So kdawson couldn't post this FUD himself? He needed Soulskill to do it for him?

Considering that people cannot be objective about their own posts, I applaud kdawson for *not* posting this. Letting it go through someone else's editorial review is the right thing to do.

Re:lol kdawson (1)

The End Of Days (1243248) | more than 3 years ago | (#34354824)

Editorial review? Your uid is small enough that you should know better. Maybe you're just super subtle.

Re:lol kdawson (1)

Morty (32057) | more than 3 years ago | (#34355356)

I know it sometimes doesn't seem that way, but slashdot does have standards. The editors have written policies on how they do what they do, and they try to follow them. While slashdot editors often fail to live up to the standards that they strive for -- they tend to publish duplicate stories, press releases, trolls, advertisements, and blatant spelling errors -- they do tend to avoid the more egregious violations. The mistakes are more along the lines of sloppiness than malice. Presumably that's why we're all still here, no?

Re:lol kdawson (0)

Anonymous Coward | more than 3 years ago | (#34356316)

No, not really here because of the editorial greatness. Here more for the comments. Even with a bad article ( this one is actually pretty good), you often get a decent discussion of an intriguing subject. Again, this is more in-spite of editors than due to them. Although, this isn't really a slam on them, the better they do the job the better, but you can't deny that its the community of half sane commentators that gives slashdot its value. Any oen who complains about kdawson or the puntuaction/ speeling of sumbisions can leave or get there posts modded into oblivion.

TFA is really interesting! (3, Interesting)

courteaudotbiz (1191083) | more than 3 years ago | (#34351756)

Great, yet simple research! It's funny to see how the web servers act exactly like their parent companies do in real life:
  • Google: Trying to be the first, tries to make a standard with some promising trick;
  • Microsoft: Bypassing all rules to be the first;
  • All the others: pretty average (I'd have expected Facebook to be more innovative on this side. Wait until they discover that this trick exists...)

Before writing a post like this... (1)

Blakey Rat (99501) | more than 3 years ago | (#34355764)

Before writing a post like this, you might want to wait a few minutes for the inevitable corrections to the inevitably wrong Slashdot story to come in. A good 50% of the stories on this site are misleading, and probably 25% of those are blatant lies.

Here's a pro-tip: if it says kdawson either as the editor *or* the submitter, it's complete bullshit. I don't think he's ever gotten a story entirely right in his whole career.

Somebody call the waaaaambulance (2, Insightful)

js3 (319268) | more than 3 years ago | (#34351766)

When the competition starts crying, you know someone is doing something right. Is it just me, or has there been a lot of crying lately?

Re:Somebody call the waaaaambulance (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34351862)

To understand the relevance of this: The slow-start protocol/algorithm is meant to avoid a situation where many packets are put on the wire which will never be received due to congestion somewhere along the path. Such packets create unnecessary network load (they're transported all the way to the choke point and then they're discarded, so they have to be retransmitted.) The referenced RFC is from 2002, so one might argue that there isn't a problem if the burst of packets remains small. After all, there are other protocols which don't even use congestion control (particularly real-time applications like VoIP and other UDP based protocols) or cause bursts of initial traffic by concurrently starting many TCP connections (Bittorrent and other peer to peer networks). However, using an overly large initial window size is indeed a violation of a very central RFC, so it should not be done.
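To see what the algorithm buys (and costs), here is a toy model of idealized slow start, where the congestion window doubles once per round trip; real stacks grow cwnd per ACK and react to loss, so this is only a sketch:

```python
def rtts_to_send(total_segments: int, iw: int) -> int:
    """Round trips needed to deliver total_segments, with the congestion
    window starting at iw segments and doubling each RTT (no loss)."""
    cwnd, sent, rtts = iw, 0, 0
    while sent < total_segments:
        sent += cwnd   # one window's worth of segments per round trip
        cwnd *= 2      # idealized slow start: window doubles each RTT
        rtts += 1
    return rtts

# A ~45 KB page is about 31 segments of 1460 bytes.
print(rtts_to_send(31, 3))  # RFC 3390-style start: 4 RTTs
print(rtts_to_send(31, 9))  # the larger start attributed to Google: 3 RTTs
```

On a short transfer like a search results page, shaving one round trip this way is a noticeable latency win, which is exactly the incentive the article describes.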

Seems to me... (3, Insightful)

91degrees (207121) | more than 3 years ago | (#34351842)

This is reliable. It is compatible with the spec (otherwise it wouldn't be reliable), and it's faster.

I don't think it matters whether Google "cheats" or not. I and they both want me to get the data as quickly as possible. Strict adherence to the guidelines doesn't matter to either of us and doesn't affect anyone else.

Re:Seems to me... (1)

Mongoose Disciple (722373) | more than 3 years ago | (#34351854)

This.

In the real world, expect almost everyone to prioritize "what works best" over "what the standard says," except insofar as satisfying the latter is necessary to achieve the former.

Tragedy of the Commons (1)

presidenteloco (659168) | more than 3 years ago | (#34352082)

You do realize that if servers on the Internet start ignoring Internet standards (RFCs) as a matter of usual practice, there is a very good chance the net will, if not grind to a halt, develop instability, unreliability, poor performance, isolated unreachable islands, etc.

This is a clear case of the tragedy of the commons. Only general adherence to RFCs and effective shunning mechanisms have prevented the tragedy from occurring so far.

Re:Seems to me... (2, Interesting)

mysidia (191772) | more than 3 years ago | (#34351886)

Strict adherence to the guidelines doesn't matter to either of us and doesn't affect anyone else.

The goal of slow start is to achieve minimal loss and fairness among all flows.

Fairness does affect other people. Not using slow start is much more aggressive and can stomp on other people's data flows, particularly when a shared WAN is involved, even flows that might be much more important than your casual Google search.

But this may be a bigger concern for large ISPs that oversubscribe by having hundreds of thousands of customers, and only enough bandwidth to deliver the promised data rate for a few thousand.

Re:Seems to me... (0)

Anonymous Coward | more than 3 years ago | (#34353042)

particularly when a shared WAN is involved

I thought Google *is* the shared WAN!

Re:Seems to me... (0)

Anonymous Coward | more than 3 years ago | (#34354162)

Hi,

Where you said...

But this may be a bigger concern for large ISPs that oversubscribe by having hundreds of thousands of customers, and only enough bandwidth to deliver the promised data rate for a few thousand.

I think you might have meant......

But this may be a bigger concern ALL large ISPs BECAUSE THEY oversubscribe by having hundreds of thousands of customers, and only enough bandwidth to deliver the promised data rate for a few thousand.

Re:Seems to me... (0)

Anonymous Coward | more than 3 years ago | (#34355970)

Spelling does affect other people.

Re:Seems to me... (3, Insightful)

WolfWithoutAClause (162946) | more than 3 years ago | (#34352132)

It's going to be OK, provided it's only a small amount of traffic involved. But if everyone starts sending a lot of traffic like this... boom!

In a sense Google are just saying that their search results are high priority traffic, and they've optimised it like that. Which is probably fair enough.

But if you did that to anything that creates huge numbers of connections very rapidly and then sends a lot of data, perhaps using it for peer-peer networks, the network would start to suffer collapse.

Editors shouldn't be allowed to post stories. (5, Insightful)

BitZtream (692029) | more than 3 years ago | (#34351864)

I intentionally removed kdawson and timothy from the front page on slashdot just so I wouldn't have to see their ignorant, retarded, not a fucking clue posts ...

Did they realize that no one reads their tripe anymore, so now they have to have someone else approve it for them?

kdawson and timothy are idiots, please give me a way to automatically not see anything that has to do with those two morons. Please.

kdawson is cheating to get around the effort I put into not seeing his crap; MS and Google, on the other hand, are following the RFC just fine ... if anyone involved in the posting of this story had a clue about what it said or did any sort of actual research, then I wouldn't have to rant about it ...

Re:Editors shouldn't be allowed to post stories. (0)

Anonymous Coward | more than 3 years ago | (#34351984)

Your gripe is with Soulskill. Just remove Soulskill and be done with it.

Re:Editors shouldn't be allowed to post stories. (1)

Sir_Lewk (967686) | more than 3 years ago | (#34352688)

kdawson writes

No, our gripe really is with kdawson.

Re:Editors shouldn't be allowed to post stories. (1, Funny)

Anonymous Coward | more than 3 years ago | (#34352342)

"please give me a way to automatically not see anything that has to do with those two morons. Please" Duct tape over eyes?

That guy never gets his app running (0)

Anonymous Coward | more than 3 years ago | (#34351882)

Seriously, that guy will never get his app running. He will spend the next two years reinventing TCP/HTTP and whatnot before he gets to the application layer again.

This is well known to a small community (5, Insightful)

Animats (122034) | more than 3 years ago | (#34351978)

That's been known in the TCP community for decades.

I looked at this back in my RFC 896 [faqs.org] days, when TCP was in initial development and I was working on congestion. I introduced the "congestion window" concept and put it in a TCP implementation (3COM's UNET, which predated Berkeley BSD). The question was, what should be the initial size of the congestion window? If it's small, you get "slow start"; if it's large, the sender can blast a big chunk of data at the receiver at start, up to the amount of buffering the receiver is advertising.

I decided back then to start with a big congestion window, because starting with a small one would slow down traffic even when bandwidth was available. One of the big performance issues back then was the time required to FTP a directory across a LAN, where TCP connections were being set up and torn down at a high rate. So startup time mattered. The decision to go with a smaller initial congestion window size came years later, from others. This reflected trends in router design. I wanted routers to have "fair queuing", so that sending lots of packets from one source didn't gain the sender any bandwidth over sending few packets. But routers gained speed faster than RAM costs dropped, and so faster routers couldn't have enough RAM for fair queuing. Today, your "last mile" CISCO router might have fair queuing [nil.com] . Some DOCSIS cable modem termination units have it. [cascaderange.org] But many routers are running Random Early Drop, which is a simple but mediocre approach. (The backbone routers barely queue at all; if they can't forward something fast, they drop it. Network design tries to keep the congestion near the edges, where it can be dealt with.)

Remember, every dropped packet has to be retransmitted. (Too much of that leads to congestion collapse, a term I coined in 1984. That's what the "Nagle algorithm" is about.) In a world with packet-dropping routers, "slow start" makes sense. So that was put into TCP in the late 1980s (by which time I was out of networking.)

However, the RFC-documented slow start algorithm is rather conservative. RFC 2001 says to start at one maximum segment size. Microsoft's implementations in Win95 and later start at two maximum segment sizes [microsoft.com] . In RFC 3390, from 2002, the limit was raised to 3 or 4 maximum segment sizes. (We used to worry about delaying keystroke echo too much because big FTP packets were tying up the 9600 baud lines too long. We're past that.)

But Google is sending at least 8 segments at start, and Microsoft was observed to be sending 43. Sending 43 packets blind is definitely overdoing it.

I wonder whether they're doing this blindly, or if there's more smarts behind the scenes. If their TCP implementation kept a cache of recent final congestion window sizes by IP address, they could legitimately start off the next connection with the value from the last one. So, having discovered a path that's not dropping big bursts of packets, they could legitimately start fast. If they're just doing it the dumb way, starting fast every time, that's going to choke some part of the net under heavy load.
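The per-destination cache described above could be sketched as follows. This is a hypothetical illustration, not any real TCP stack's API: the class name, the decay rule, and the RFC 3390 default of 4 segments as the fallback are all assumptions.

```python
# Sketch of a per-destination initial-cwnd cache (hypothetical, not a real
# TCP stack's API). On connection close we remember the final congestion
# window; a new connection to the same /32 starts from that value instead
# of the conservative RFC 3390 default.

RFC3390_INIT_SEGMENTS = 4  # conservative default when we know nothing

class CwndCache:
    def __init__(self):
        self._last_cwnd = {}  # dest IP -> final cwnd (in segments)

    def initial_cwnd(self, dest_ip):
        # Start fast only on paths we've already probed successfully.
        return self._last_cwnd.get(dest_ip, RFC3390_INIT_SEGMENTS)

    def on_close(self, dest_ip, final_cwnd):
        # Average toward the observed value so one lucky transfer
        # doesn't make the next start reckless.
        prev = self._last_cwnd.get(dest_ip, RFC3390_INIT_SEGMENTS)
        self._last_cwnd[dest_ip] = (prev + final_cwnd) // 2
```

A first connection to an unknown address gets the conservative default; only a path that has demonstrably sustained a large window earns a fast start.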

Re:This is well known to a small community (4, Funny)

Jay Tarbox (48535) | more than 3 years ago | (#34352250)

Are you a wizard?

Re:This is well known to a small community (4, Funny)

Just Some Guy (3352) | more than 3 years ago | (#34352372)

Are you a wizard?

No; his UID is too high. Now fetch me a sandwich, son.

Re:This is well known to a small community (0)

Anonymous Coward | more than 3 years ago | (#34352610)

sudo Now fetch me a sandwich, son.

Re:This is well known to a small community (1)

Jay Tarbox (48535) | more than 3 years ago | (#34353402)

I can fetch it, but I will NOT make one for you.

Re:This is well known to a small community (5, Informative)

egoots (557276) | more than 3 years ago | (#34353012)

Are you a wizard?

No, he's John Nagle.

Re:This is well known to a small community (5, Insightful)

carton (105671) | more than 3 years ago | (#34352602)

Yes, that's my understanding as well --- the point of slow start is to go easy on the output queues of whichever routers experience congestion. So if congestion happens only on the last mile, a hypothetical bad slow-start tradeoff does indeed affect only that one household (not necessarily only that one user); but if it happens deeper within the Internet, it's everyone's problem, contrary to what some other posters on this thread have been saying.

WFQ is nice but WFQ currently seems to be too complicated to implement in an ASIC, so Cisco only does it by default on some <2Mbit/s interfaces. Another WFQ question is, on what inputs do you do the queue hash? For default Cisco it's on TCP flow, which helps for this discussion, but I will bet you (albeit a totally uninformed bet) that CMTS will do WFQ per household putting all the flows of one household into the same bucket, since their goal is to share the channel among customers, not to improve the user experience of individual households---they expect people inside the house to yell at each other to use the internet ``more gently'' which is pathetic. In this way, WFQ won't protect a household's skype sessions from being blasted by MS fast-start the way Cisco default WFQ would.

If anything, cable plants may actually make TCP-algorithm-related congestion worse because I heard a rumor they try to conserve space on their upstream channel by batching TCP ACK's, which introduces jitter, meaning the windowsize needs to be larger, and makes TCP's downstream more ``microbursty'' than it needs to be. If they are going to batch upstream on purpose, maybe they should timestamp upstream packets in the customer device and delay them in the CMTS to simulate a fixed-delay link---they could do this TS+delay per-flow rather than per-customer if they do not want to batch all kinds of packets (ex maybe let DNS ones through instantly).

RED is not too complicated to implement in ASIC, but (a) I think many routers, including DSLAM's, actually seem to be running *FIFO* which is much worse than RED even, because it can cause synchronization when there are many TCP flows---all the flows start and stop at once. (b) RED is not that good because it has parameters that need to be tuned according to approximately how many TCP flows there are. I think BLUE is much better in this respect, and is also simple enough to implement in ASIC, but AFAIK nobody has.

I think much of the conservatism on TCP implementers' part can be blamed on router vendors failing to step up and implement decades-old research on practical ASIC-implementable queueing algorithms. I've the impression that even the latest edge stuff focuses on having deep, stupid (FIFO) queues (Arista?) or minimizing jitter (Nexus?). Cisco has actually taken RED *off* the menu for post-6500 platforms: 3550 had it on the uplink ports, but 3560 has ``weighted tail drop'' which AFAICT is just fancy FIFO. I'd love to be proved wrong by someone who knows more, but I think they are actually moving backwards rather than stepping up and implementing BLUE [thefengs.com] .

And I like very much your point that caching window sizes per /32 is the right way to solve this, rather than haggling about the appropriate default, especially in the modern world of megasites and load balancers where a clever site could share this cached knowledge quite widely. But IMSHO routing equipment vendors need to be stepping up to the TCP game, too.
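The tuning problem with RED that the parent raises can be seen in a minimal sketch of its drop decision. The thresholds and `max_p` below are illustrative assumptions; RED's weakness is precisely that good values depend on how many TCP flows share the queue, which the router doesn't know.

```python
# Minimal Random Early Detection (RED) drop decision. min_th, max_th and
# max_p are tunables whose good values depend on the (unknown) number of
# competing TCP flows -- the weakness noted above.
import random

def red_should_drop(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    if avg_qlen < min_th:
        return False            # queue short: never drop
    if avg_qlen >= max_th:
        return True             # queue long: always drop
    # In between, drop with probability rising linearly toward max_p,
    # so flows back off gradually instead of synchronizing.
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p
```

Contrast this with FIFO tail drop, which drops nothing until the queue is full and then drops everything, stalling many flows at once.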

Re:This is well known to a small community (1)

slashdotmsiriv (922939) | more than 3 years ago | (#34353168)

Respect ...

Re:This is well known to a small community (2, Insightful)

Iron Condor (964856) | more than 3 years ago | (#34353292)

I wonder whether they're doing this blindly, or if there's more smarts behind the scenes. If their TCP implementation kept a cache of recent final congestion window sizes by IP address, they could legitimately start off the next connection with the value from the last one. So, having discovered a path that's not dropping big bursts of packets, they could legitimately start fast. If they're just doing it the dumb way, starting fast every time, that's going to choke some part of the net under heavy load.

That strikes me as still-kinda-eighties-thinking. I guess the question is what your assumption for an unknown segment of network is: If you assume that all parts of the net are congested most of the time, then you'll want to do a fast start up only on those segments that you know can handle it (doesn't have to be an individual IP - If my ISP buffers alright and you can reach it alright then it doesn't matter how many folks are sitting downstream from them - it becomes their problem.) If, on the other hand, you have the expectation that most packets on most of the net are going to be just fine (for whatever reason; even if by sheer brute force buffering and clever back-end algorithms that figure it all out after the fact) then it makes sense to do fast start with unknown clients and omit it only on those found NOT to be able to handle it. Kinda a glass-half-full way of looking at it.

These days I'd wager that the vast (VAST!) majority of packets are part of ongoing streams - streaming Netflix over the net, torrenting the collected porn of the 80ies, that kind of thing. Which means I'm as sure as I can possibly be of something I haven't researched that the performance of the net is only in the most marginal way dependent on startup behaviour around individual connections any more. (Or rather, when/where it is, it is probably due to the 100 TCP connections that need to be established to view a single web page; fix that and the question of startup behaviour will just go away. Incidentally, MS'es CHM concept was a step very much in the right direction...)

Re:This is well known to a small community (1)

Animats (122034) | more than 3 years ago | (#34356270)

These days I'd wager that the vast (VAST!) majority of packets are part of ongoing streams - streaming Netflix over the net, torrenting the collected porn of the 80ies, that kind of thing.

True. However, many of those streams are bandwidth-adaptive and heavily buffered.

Re:This is well known to a small community (1)

seanadams.com (463190) | more than 3 years ago | (#34355528)

If their TCP implementation kept a cache of recent final congestion window sizes by IP address, they could legitimately start off the next connection with the value from the last one.

Wouldn't it also be necessary to cache the _rate_ of transmission so you don't overflow some intermediate queue? E.g., imagine your server is on gigE, feeding into a 1 Mbps uplink, and then a loooong pipe to the client who is on the other side of the world. In this case you might want to have an initial cwnd of a few dozen packets, but if you were to fire them all out immediately at 1 Gbps you would lose most of them at the first hop, even though it's less than the available bandwidth*delay.

So as I understand it this problem is more than just choosing the initial congestion window, it is also a matter of how fast you fill it. Normally that timing is driven by the acks coming back, but in the absence of that the sender needs to originate the timing.
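The sender-originated timing described above is essentially pacing: spread the initial window over roughly one RTT instead of blasting it at line rate. A minimal sketch, under the assumption that an RTT estimate is available (e.g., cached from a previous connection, mirroring the parent's point):

```python
# Sketch of paced startup: space the initial window's packets evenly over
# one RTT so a slow intermediate link isn't overrun by a line-rate burst.
# Assumes an RTT estimate is available (e.g., cached from a prior
# connection) -- with no ACK clock yet, the sender must supply the timing.

def pacing_schedule(num_packets, rtt_s):
    """Return send-time offsets (seconds) spreading num_packets over one RTT."""
    if num_packets <= 1:
        return [0.0]
    gap = rtt_s / num_packets
    return [i * gap for i in range(num_packets)]
```

For a 10-packet initial window and a 100 ms RTT this yields one packet every 10 ms, which a 1 Mbps bottleneck can absorb far more easily than a back-to-back gigE burst.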

New era of networks (2, Interesting)

Anonymous Coward | more than 3 years ago | (#34351980)

Slow start and congestion avoidance were designed in the time of unreliable networks. Shouldn't the TCP/IP protocol be rediscussed in the age of fiber networks?

Re:New era of networks (0)

Anonymous Coward | more than 3 years ago | (#34354238)

And the time of reliable networks began as a result.

Unreliable networks are no longer a serious problem, so the fix for unreliability can be removed, right?

Web App (1)

hey (83763) | more than 3 years ago | (#34352100)

Well, this guy discovered something, but he wasted time he should have spent working on his web app ;)

What about TCP congestive backoff and recover? (1)

karl.auerbach (157250) | more than 3 years ago | (#34352164)

Has anyone taken a look at whether Google, Microsoft, et al are similarly pushing on the TCP congestion backoff and recovery mechanisms?

Standardization, the right way... (2, Insightful)

osu-neko (2604) | more than 3 years ago | (#34352270)

First, implement it, and show that it works in practice.

Later, standardize the proven best practices.

Google, ur doin' it rite! :D

Someone tell Linus it'll make his laptop go faster (1)

Lazy Jones (8403) | more than 3 years ago | (#34352734)

Please, I'd like to use this on our web servers too... :-P

Re:Someone tell Linus it'll make his laptop go fas (0)

Anonymous Coward | more than 3 years ago | (#34354370)

Scan man ip(1) for initcwnd. No need for patches.

Re:Someone tell Linus it'll make his laptop go fas (1)

inKubus (199753) | more than 3 years ago | (#34356902)

If you type

man ip

You will see that you can set the initial congestion window on a given route using

ip route change initcwnd NUMBER

where NUMBER is the maximum initial congestion window (cwnd) size, in MSS, of a TCP connection. I believe applications may also choose socket options, although most of the time it's left to the OS. So go ahead and set it to 10 or whatever.

Results may vary (1)

EvilIdler (21087) | more than 3 years ago | (#34353290)

I'm not getting the blindingly fast response time shown in the article at all :(

Google looks up my country via geo-location and feeds me a localised version (tested via the curl method in the article). This takes 0.9 seconds for me. If I directly specify google.co.uk or some other variation, I get a more reasonable 0.3 seconds. But never 85ms. Is the author sitting on a really awesome connection at work? ;)

Any "real world" complaints? (1)

shutdown -p now (807394) | more than 3 years ago | (#34353566)

I understand the theoretical problem with breaking the spec, but since it actually took this guy a packet sniffer to detect the violation, it would seem that, in practice, most (all?) clients out there are perfectly capable of processing this non-standard response. If so, then I don't see a problem, since it really is a de facto standard - and those appear all the time. The best thing they could do then is publish a new RFC to make it part of the spec, and move on.

Good for them - not a lot of choice (0)

Anonymous Coward | more than 3 years ago | (#34353700)

Yeah, yeah, IETF, open standards, "proven to work", threat of arms-race, danger of congestion collapse, what have you.
There are _almost_ insurmountable reasons IN PRINCIPLE to abhor people/companies who want to violate fundamental RFC's in this way.

But every rule has its exception. Here you are dealing with TCP. It's so broken, so backwards, so conservative (not so much in a "play it safe" sense but in a "the gods thus spake in 1989, and their prophet Van Jacobson likewise sayeth" sense) that at some point you have to say enough! Continuing the metaphor, much TCP research is like the work of a religiously devout scientist --- trying to make advances, trying to be relevant, while tying itself in knots to avoid saying anything directly that would commit overt heresy. Similar issues arise for companies trying to work through the IETF and the TCP orthodoxy. (Look, I know how mis-usable this line of argument could be. I hate myself a bit for making it. But it's TCP. This is a special case; a special disgrace for the IETF.)

Even browsers do crazy things on this principle. We'll open 2 (or 4, or whatever, depending on how clever you are) connections to the same web server. Why??? The pipe(s) remain the same. Ah... but any individual connection can remain compliant. The aggregate, not so much --- but TCP doesn't constrain aggregates, so one can remain in the church.

TCP needs bolder, better research, and it has failed to get it (or rather, failed to get it broadly accepted; there is lots of great stuff out there). The conservative IETF participants with an interest in TCP are an unconstructive obstacle whose impact has been hugely negative over the last 15 years. In the absence of better standards, companies like Google and Microsoft have little choice but to drive us forward using their own judgment.

Re:Good for them - not a lot of choice (1)

WaffleMonster (969671) | more than 3 years ago | (#34354698)

But every rule has its exception. Here you are dealing with TCP. It's so broken, so backwards, so conservative ...
"the gods thus spake in 1989, and their prophet Van Jacobsen likewise sayeth"

There are other protocols, such as SCTP, intended to address shortcomings of TCP... Yet after all these years nobody seems to care that they even exist. If TCP were as bad as your remarks suggest, I would have expected more takers on the alternatives.

You say TCP is broken and backwards, yet you don't mention what's wrong with it.

Even browsers do crazy things on this principle. We'll open 2 (or 4, or whatever, depending on how clever you are) connections to the same web server.
Why??? The pipe(s) remain the same. Ah... but but any individual connection can remain compliant. The aggregate ... not so

TCP is a head of line blocking protocol supporting only one active stream per session.

By establishing multiple connections, some can still transmit data while other TCP sessions are idle waiting for ACKs. It makes a noticeable difference in environments with high-latency links... not so much anymore for broadband users.

Today the bigger reasons for it are just shortcomings in HTTP and browser technology stacks. If you send everything in a single stream, there is an ordering dependency that significantly affects load time. For example, if you send a large image before sending a style sheet, the page load now needs to wait for the style sheet. You could use more intelligence and heuristics to prioritize, but the ideal dependencies are not always easy to resolve, deterministic, or knowable a priori. Sending everything at once is low-hanging fruit that for the most part works.

The aggregate ... not so, but TCP
doesn't constrain aggregates so one can remain in the church

What's the point of even trying? You can't constrain aggregates WRT other applications, computers, access devices, etc., so why pretend it makes any difference if it were possible at the TCP session level? In my view, the only approach is for the session to be aware of the environment and live as cooperatively as reasonable within its constraints.

In the absence of this, companies like Google and Microsoft have little choice but to drive us forward using their own judgment

I'm concerned about the possibility of judgments slanted by each corporation's narrow world view. I'd prefer that open SDOs, whose members comprise all the stakeholders, take us forward.

RFC 5681 (2, Informative)

j h woodyatt (13108) | more than 3 years ago | (#34354620)

I suppose now would be a good time to point out that RFC 5681 [ietf.org] is the most current specification of the standard for TCP congestion control. Would it be asking too much for people to stay current on the RFC series before they start cracking off about standards compliance?

I'm sorry... (1)

matunos (1587263) | more than 3 years ago | (#34354940)

When did RFCs become official standards at which you could "cheat"?

Consider this "cheating" to be Google's and Microsoft's comments.
