
QUIC: Google's New Secure UDP-Based Protocol

Soulskill posted about a year ago | from the set-for-a-twelve-year-beta-test dept.


New submitter jshurst1 writes "Google has announced QUIC, a stream-multiplexing protocol that runs a new variation of TLS over UDP. The new protocol offers connectivity with a reduced number of round trips, strong security, and pluggable congestion control. QUIC is being tested now with Chrome dev and canary users connecting to Google websites."


Don't trust 'em (-1, Troll)

hairyfeet (841228) | about a year ago | (#44136181)

Frankly, after all the NSA spying and how quickly these companies were willing to jump on board? I have serious doubts in ANY new network tech not having backdoors of some sort. Maybe after some independent devs have taken the code apart and checked it with a fine-tooth comb I'll trust it, but as of this moment I honestly don't trust ANY of the major players to have our interests at heart.

As the old saying goes, treat everything as suspect; then if it turns out it's not, you are pleasantly surprised, and if it turns out you are right, you aren't blindsided. Kinda sad that we all have to be this cynical, but that is one of the nice things about the net: they can't hide their bullshit as easily as they could in the past.

Re:Don't trust 'em (4, Informative)

K. S. Kyosuke (729550) | about a year ago | (#44136211)

I have serious doubts in ANY new network tech not having backdoors of some sort.

Oh, come on. This is a network protocol. Sure, protocols *can* have flaws, but it's a very long stretch from being forced to run an unknown binary. Just implement it on your own if you're paranoid enough.

Re:Don't trust 'em (2, Informative)

jdogalt (961241) | about a year ago | (#44136593)

I have serious doubts in ANY new network tech not having backdoors of some sort.

Oh, come on. This is a network protocol. Sure, protocols *can* have flaws, but it's a very long stretch from being forced to run an unknown binary. Just implement it on your own if you're paranoid enough.

The problem with trying to implement a new protocol over tcp/ip (the internet) like Tim Berners-Lee did with the web, is that the mythical 'Open Internet' has been degraded. QUIC and webRTC reek to me of some orwellian attempt to make the lies about home servers being less worthy of net-neutrality protections than skype's servers make sense. I.e. allowing 'client to client' file transfers and video chats. All this is because of the conspiracy to deprive residential internet users of the power to serve. Now, don't get me wrong, the things webRTC, and I'm guessing QUIC, work around (legacy NAT traversal when no simple 'open internet' directly routable path is available) are useful. But it is disingenuous to look at QUIC without also looking at the fact that when Google entered the residential ISP business, they actually pushed the server persecution further, with blanket 'prohibited from hosting any kind of server' terms-of-service language.

Earlier this week the FCC finally, after 9 months, 'served' my NetNeutrality (2000F) complaint against Google, along with the longer 53-page manifesto that has now reached Google via the FCC via the Kansas Attorney General('s office). Yesterday, after pinging schneier@schneier.com for any insight to prepare for Google's July 29th compelled response, he (or someone spoofing him) replied- "Thanks.\n\nGood Luck.".

http://cloudsession.com/dawg/downloads/misc/mcclendon_notice_of_informal_complaint.pdf [cloudsession.com]
http://cloudsession.com/dawg/downloads/misc/kag-draft-2k121024.pdf [cloudsession.com]
http://slashdot.org/comments.pl?sid=3643919&cid=43438341 [slashdot.org] (score 5 comment about the situation, with further links)

Re:Don't trust 'em (1)

jdogalt (961241) | about a year ago | (#44137153)

In case people aren't familiar with my non-drunken-but-close-enough debate style: scratch the tcp/ from the first sentence. And obviously I was just flailing against webRTC and this QUIC thing, which smells like it is also trying to address the more general problem space of "well, since we don't really have an open ipv6 internet that works for everyone, let's waste our lives engineering this big mess over here..."

Re:Don't trust 'em (2, Insightful)

hairyfeet (841228) | about a year ago | (#44138395)

Give it up friend, you can give 'em a page of citations but anything to do with Google or Apple gets a "doubleplus good" by default here, free thought isn't allowed.

I have to think these companies must laugh their asses off at board meetings, just looking at how many peasants will rush to defend them. Fuck that noise, I want somebody OTHER than a researcher on the payroll to tear this thing apart with a fine-tooth comb before I trust jack shit from ANY of these major corps. I mean for fuck's sake, hasn't ANYBODY been reading the headlines here for the past year? How many whistleblowers does it take before you figure out these corps like government green just as much as yours and will be happy to sell your asses out if the check is right?

For God's sake you'd think these people were stockholders by the way they rush to crush anything that doesn't read "Gee Bill, isn't (insert corp) great? Why it sure is Bob, why I heard they cured AGW and hunger in just an afternoon! How anybody could ever doubt (insert corp) is beyond me, they are so sweet and kind!"... If the future is corp ass-kissing and flag-waving, can I get a lift out of here please? I promise I have my towel ready.

Re:Don't trust 'em (0)

Anonymous Coward | about a year ago | (#44139369)

You already understand you can't change the mind of fanboys. But you and the other poster seem to miss the idea that you can make yourselves look like trolls and/or idiots to people who are neither fanboys nor the other extreme. It is just a protocol, and in terms of things Google can screw up and is a threat to your data, this is pretty far down the list. Considering there is enough crap from Google that is a legit threat to complain about, spewing so much over this issue is at best comical.

Re:Don't trust 'em (1)

hairyfeet (841228) | about a year ago | (#44140223)

But this is SUPPOSED to be a geek site; do we REALLY need to fricking spell out how simply saying "there is a spec" isn't magic fairy dust to ensure it doesn't have a backdoor already? I mean are we REALLY gonna have to paste a page and a half explaining the basics, how complex crypto is, how having something "similar to" something currently being used doesn't mean it won't have serious holes, or the whole "what you get from the spec isn't always what the final product is" a la MSFT OpenDoc?

I mean good lord if the only way to shut up the fanbois is to paste a couple of paragraphs of explanation, after we've had nothing but back to fricking back NSA headlines for a year? I'll happily clean out the plasma conduits, get me outta here!

Re:Don't trust 'em (0)

Anonymous Coward | about a year ago | (#44141709)

I mean good lord if the only way to shut up the fanbois

Half the point of labeling someone as a fanboy is that they are beyond reason. You don't try to shut them up unless you have some vindictive need that is going to be borderline fanboy-ish itself. And since when has a geek site been free of fanboys? It comes with the territory.

to paste a couple of paragraphs of explanation

Speaking of a couple paragraphs, the actual specification is only a couple pages for the relevant section. If this is a "geek site", should we aspire for people to actually look at the damn thing before running their mouth off? It is nowhere near anything like OpenDoc, and the crypto part is pretty straightforward and standard, with a different handshaking mechanism.

But instead you seem to think it is more important to balance Google fanboy-ism with anti-Google fanboy-ism. Instead of investigating at all, or applying any thought to it, you do what amounts to the same as warning people not to use free Google pens at a trade show because the pens might be secretly violating the privacy of what you write with them, without even looking to see that they are simple mechanical pens, regardless of how good or crappy they are.

Re:Don't trust 'em (0)

Anonymous Coward | about a year ago | (#44142091)

I don't think you understand what a protocol is. If Hitler were to design a useful protocol, the world would be stupid not to use it; what would be stupid is using binaries provided by Hitler.

Have you paid any attention at all to the academic security community? (spoiler: no, you haven't.) The majority of researchers work for universities (and promote Tor!). They aren't on anyone's payroll.

Re:Don't trust 'em (-1)

Anonymous Coward | about a year ago | (#44140279)

Google is offering a "residential service", this means a high "contention ratio". Obviously Google cannot offer a 1Gbit uncontended service to EACH household, and Google's service would be adversely affected by "servers" expecting dedicated bandwidth.

Google is making a non-targeted market decision to not service "leechers" and "moochers". Buy colo.

Re:Don't trust 'em (0)

Anonymous Coward | about a year ago | (#44147073)

It is hard to trust the data mining giant. Every Android app is required to send any data it collects to Google. Their whole business is about data.

Re:Don't trust 'em (4, Insightful)

SolarCanine (892620) | about a year ago | (#44136213)

What exactly can be hidden in an open protocol specification that will compromise your personally sensitive data? By design, a protocol has to be something that people can actually implement to be useful - the payloads you send via that protocol are up to you (based on your choices of which pieces of software to use, etc.)

Re:Don't trust 'em (5, Informative)

brunes69 (86786) | about a year ago | (#44136287)

Maybe if you would RTFA instead of pontificating, you would have found that the reference QUIC implementation is already open source, the specification is open, the wire specification is open, the whole thing is open. If you don't trust Google's implementation then roll your own.

Re:Don't trust 'em (0)

Jah-Wren Ryel (80510) | about a year ago | (#44136353)

the specification is open, the wire specification is open, the whole thing is open. If you don't trust Google's implementation then roll your own.

While I appreciate the sentiment, I think you are missing an important point - the specification itself could be deliberately flawed. Crypto is hard, and not just the math itself but all the infrastructure details. The number of people able to recognize a weak design (deliberate or not) is quite small. Probably a couple of orders of magnitude smaller than the number of people able to re-implement a network protocol from specs.

Re:Don't trust 'em (4, Informative)

brunes69 (86786) | about a year ago | (#44136377)

They are not doing their own crypto.... they are using TLS. Again, please read the actual documents.

Re:Don't trust 'em (1)

Anonymous Coward | about a year ago | (#44136943)

Are you a cryptography expert? Because there are a _lot_ of ways to attack cryptography schemes. Length of transferred data can often be inferred very easily, for instance. As another example, sophisticated replay attacks can often significantly weaken encryption strength, perhaps in some as-yet-unknown way. A layperson's "actual read" of the documents cannot and should not provide any reassurance to anyone. It is their availability to the cryptography community as a whole, who will then inspect them, which will ultimately lead to confidence in the whole stack. Only over time, as the protocol is employed and studied, should it be trusted.

You should not be so glib regarding security and especially cryptography. It is an extremely subtle field, which you clearly do not understand very deeply.

Re:Don't trust 'em (1)

Anonymous Coward | about a year ago | (#44137399)

I think you might be the one that needs to do some reading. Their info page says that they're using something "similar to TLS" and then, in the docs, mention "the analog of" when referring to features of TLS. It doesn't sound like they're using stock TLS or DTLS, so there would be ample opportunity to make small mistakes (whether intentional or not.)

Re:Don't trust 'em (1)

Jah-Wren Ryel (80510) | about a year ago | (#44137833)

They are not doing their own crypto.... they are using TLS. Again, please read the actual documents.

Come on man that is barely relevant to what I said. I can't believe you got +5 informative for that glib drivel. There is more to the infrastructure than just TLS. If TLS was all there is to it then they wouldn't be doing anything new, would they?

Re:Don't trust 'em (3, Insightful)

Stonefish (210962) | about a year ago | (#44138433)

It's "like" TLS, as in "it's non-dairy but it tastes just like milk".
Google's reason for doing this is to lower their costs associated with better security. This creates a 3-way instead of a 5-way exchange for the security protocol setup. Fewer connections, less load on their stuff, and less stuff they have to buy.

The security landscape is littered with security implementations which tried to improve existing protocols. Just type in the terms WAP and security for a story on how to take a secure starting point (SSL) and bugger it.
Another is Microsoft's introduction of PKINIT for Kerberos. Kerberos is a provably secure protocol which is limited by the entropy in a user's password; MS "fixed" this with PKINIT, however they introduced replay attack vectors precisely because they wanted fewer exchanges. BTW Google seems to have done a better job in this regard: +1 for Google, -1 for MS.

Re: Don't trust 'em (0)

Anonymous Coward | about a year ago | (#44139109)

QUIC sounds like mTLS, which Microsoft already made, uses, and openly documented. Seems like Google is copying an earlier concept and getting all the kudos.

Re:Don't trust 'em (1)

hairyfeet (841228) | about a year ago | (#44140281)

Thank you, and what is so wrong with not trusting a new protocol from a company that has not had the greatest record when it comes to user privacy and security, until some guys that know their stuff and are NOT on the payroll have had a chance to tear it apart and look for obvious flaws?

Now once Bruce Schneier and a few of the other heavyweight crypto guys have had a look at it, if they say it's good? THEN I'll be happy to try it. To all these fanbois, would you be saying the same thing if it were MSFT or Apple? Then why does Google deserve a free pass?

Re:Don't trust 'em (1)

petermgreen (876956) | about a year ago | (#44141235)

Google's reason for doing this is to lower their costs associated with better security. This creates a 3-way instead of a 5-way exchange for the security protocol setup. Fewer connections, less load on their stuff, and less stuff they have to buy.

IMO It's not just direct financial costs.

Google is now into mobile in a pretty big way. A "GSM-based" smartphone would typically move between:

GPRS: encryption exists, but it's an old design and has security flaws that can't really be fixed due to compatibility with legacy equipment. Fortunately the equipment needed to exploit things is expensive enough to keep most people out.
3G: better encryption systems, but they can be subverted by forcing the phone to drop back to GPRS.
Public wifi networks: either no encryption or encryption with a fairly well-known shared key. Trivially easy for an attacker to set themselves up as a man in the middle.
Owner's home wifi: usually WPA-encrypted nowadays, but older installs may be either unencrypted or encrypted with a broken encryption system, and some people may use an insecure key to make it easier to remember.

As well as security problems, some of these networks are likely to be high latency (I've seen latencies in the seconds on GPRS). So every round trip added reduces the quality of the user experience. Cutting the number of round trips needed to set up a secure connection reduces the user-experience impact of increasing security, which makes it an easier sell.

Re:Don't trust 'em (0)

Anonymous Coward | about a year ago | (#44142213)

GPRS, 3G, Owners

Securing anything but the peer is always a waste of time and treasure. The security properties of an access technology are moot when the entire Internet is and always has been untrustworthy and insecure.

As well as security problems, some of these networks are likely to be high latency (I've seen latencies in the seconds on GPRS). So every round trip added reduces the quality of the user experience. Cutting the number of round trips needed to set up a secure connection reduces the user-experience impact of increasing security, which makes it an easier sell.

As others have mentioned, you can beat down round trips to the same extent as QUIC using TCP and TLS extensions, without reinventing TCP.

Re:Don't trust 'em (1)

hedwards (940851) | about a year ago | (#44136547)

And yet there always seems to be somebody out there that's capable of finding the flaws that exist.

Yes, there are a relatively small number of people able to find those flaws, but it's still a large enough number of people that the flaws will be found at some point. And at any rate, the crypto has already been done, they're reusing TLS for the crypto.

Re:Don't trust 'em (0)

hairyfeet (841228) | about a year ago | (#44138431)

For all those that say "the spec is open": so is the spec for MSFT Open Doc; that doesn't actually work IRL, does it? For a spec to mean shit, 1. it has to be 100% what they are actually using (the "can you trust the compiler" problem) and 2. you have to have somebody with enough skill (as you pointed out) to spot any flaws in the crypto that would give any bad guys or bad govs a backdoor.

I urge all those who think just having the spec alone is enough to look up "The Obfuscated C contest" and please download the source and see for yourself. In that contest you KNOW there is a backdoor, you KNOW what that backdoor does, and it's STILL pretty damned hard to spot where the backdoor is. There are several of those I showed to long-time programmers, and without telling them what flaw they should be looking for, they couldn't find it.

Now THINK for a second: if amateurs trying to win a stupid contest that really doesn't offer more than bragging rights can cook up backdoors that are THAT well hidden, what could somebody like Google cook up with a blank check from the NSA? Doesn't Google brag they hire the best programmers on the planet?

Backdooring Google (-1)

Anonymous Coward | about a year ago | (#44136207)

Why is it that when the NSA goes in Google's Backdoor everyone else gets F'ed?

Is QUIC simply faster? Should I buy options on Vaseline? ... or Ben-Gay?

I am soooo confused.

Corporate Surveillance State (-1)

Anonymous Coward | about a year ago | (#44136229)

Big Brother, the NSA and Google thank you for your compliance.

Dump all things Google.

The always-present question for UDP (1, Insightful)

i kan reed (749298) | about a year ago | (#44136233)

How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection? TCP uses the overhead caused by ACK packets as a rate limiter on clients.

There are obviously high-bandwidth frameworks where you're already putting a strain on systems just by using them, where low latency is also critical, and UDP is appropriate; video chat comes to mind. But outside of that very limited purview, what good would encrypting UDP actually do?

Re:The always-present question for UDP (3, Insightful)

Anonymous Coward | about a year ago | (#44136399)

How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection

How do you stop it if someone does not bother to respect the rate limiter? You are assuming that someone doing something bad is going to play by the rules.

Re:The always-present question for UDP (1)

Anonymous Coward | about a year ago | (#44136445)

This is mostly covered either in the QUIC FAQ or the design doc.

But, attempting to answer your questions given material in the article and some sprinkling of industry knowledge: TCP is subject to DoS attacks with SYNs. There are mitigation techniques in there, but.. look.. you've received the packet already and have to do some processing on it to figure out if you should discard or not. This will be true of *any* protocol, TCP, UDP, SCTP, whatever.

The purpose of the encryption is twofold:
1) it makes it less likely that some intermediary will attempt to interpret the bytes in the packets, or screw with the bytes in the packets. This means that the protocol can continue to evolve, as intermediaries are prevented from doing 'helpful' things today, which end up being harmful tomorrow.
2) it makes it more difficult for people to read your communications

I think you're implying something about UDP being stressful on systems? UDP can allow a protocol to be stressful on a system, but it doesn't imply that it must be. In the article, they say that QUIC attempts to be fair with TCP and does congestion control. It shouldn't be any more stressful on a system than TCP if that is true.

Re:The always-present question for UDP (3, Interesting)

Wesley Felter (138342) | about a year ago | (#44136501)

QUIC uses an equivalent of SYN cookies to prevent some kinds of DoS. It also uses packet reception proofs to prevent some ACK spoofing attacks that TCP is vulnerable to. Overall it looks even better than TCP.

As for encryption, Google gives two reasons. They intend to run HTTP over QUIC and Google services are encrypted by default; it's more efficient for QUIC itself to implement encryption than to layer HTTP over TLS over QUIC. The other reason is that middleboxes do so much packet mangling that encryption is the only way to avoid/detect it.
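
To make the SYN-cookie idea concrete, here is a minimal sketch in Python (purely illustrative; the token layout and lifetime are my own assumptions, not QUIC's actual source-address token format): the server derives a token from the client's address and a secret, so a returning client can be validated without the server holding per-connection state, and a spoofed source address never learns the token it would need to echo back.

```python
import hashlib
import hmac
import os
import time

SERVER_SECRET = os.urandom(32)  # a real server would rotate this periodically

def make_token(client_addr: str, client_port: int) -> bytes:
    """Derive a stateless token bound to the client's address and a coarse timestamp."""
    epoch = int(time.time()) // 60  # token stays valid for roughly a minute
    msg = f"{client_addr}:{client_port}:{epoch}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()[:16]

def check_token(client_addr: str, client_port: int, token: bytes) -> bool:
    """Validate a returning client without any per-connection server state."""
    for skew in (0, 1):  # accept the current or previous time window
        epoch = int(time.time()) // 60 - skew
        msg = f"{client_addr}:{client_port}:{epoch}".encode()
        expected = hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()[:16]
        if hmac.compare_digest(expected, token):
            return True
    return False

# A client spoofing someone else's address never sees the token, so it cannot complete the exchange.
tok = make_token("192.0.2.7", 54321)
print(check_token("192.0.2.7", 54321, tok))     # True
print(check_token("198.51.100.9", 54321, tok))  # False
```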

Re:The always-present question for UDP (1)

hedwards (940851) | about a year ago | (#44136621)

I take it you've never heard of tarpits. Depending upon the type of DOS or DDOS, you can run through an incredible amount of processing power on the part of the attacker without straining your server, but it really depends upon the type of attack and the specifics of your set up.

Re:The always-present question for UDP (1)

neutrino38 (1037806) | about a year ago | (#44136653)

Games, document sharing, haptics, real-time text chat.

Re:The always-present question for UDP (0)

i kan reed (749298) | about a year ago | (#44136781)

Games, document sharing, haptics, real-time text chat.

Please, if you can't use SSL+TCP for text chat and keep it real time, you've got horrendous software. Moreover, the potentially lossy nature of UDP is really bad for text. You can outright lose data. Your packets can arrive out of order. It's okay with video data where a hiccup only makes a few missing pixels, but with text, that's a terrible idea.

QUIC is more like TCP in these ways, exception to (4, Insightful)

raymorris (2726007) | about a year ago | (#44137455)

> Please, if you can't use SSL+TCP for text chat and keep it real time

They could have, but QUIC is "better" for their use cases. In many ways, it's like an improved version of TCP. It runs on top of UDP simply
because routers, firewalls, etc. often only speak TCP and UDP. From the FAQ:

> it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas,
> and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.

> You can outright lose data. Your packets can arrive out of order. It's okay with video data where a hiccup only makes a few missing pixels,
> but with text, that's a terrible idea.

Unless of course the protocol you're running over UDP handles that stuff, just like TCP handles that stuff.
Normally, it's a bad idea to use UDP to run a protocol that has in-order packets, guaranteed delivery, etc. because TCP already gives you that.
Why re-invent TCP? Unless you're going to spend a few million dollars on R&D to make your UDP-based protocol actually be better than TCP,
you should just use TCP.

That "unless you're going to spend a few million dollars on R&D" is the key here. Google DID make the investment, so the protocol actually does
work better for the particular use than TCP does.
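
To make the parent's point concrete, here is a toy sketch (Python, my own illustration, not QUIC code) of the receiver side of "TCP-like service above UDP": datagrams carry sequence numbers, may arrive in any order, and a small reassembly buffer delivers the payloads in order; a real implementation would also acknowledge what it has so the sender can retransmit the gaps.

```python
class ReorderBuffer:
    """Deliver application data in order even when datagrams arrive out of order."""

    def __init__(self):
        self.next_seq = 0   # next sequence number we can hand to the application
        self.pending = {}   # seq -> payload, held until the gap in front of it fills

    def on_datagram(self, seq: int, payload: bytes) -> list:
        """Accept one datagram; return whatever is now deliverable in order."""
        if seq < self.next_seq:
            return []       # duplicate of something already delivered
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
print(buf.on_datagram(1, b"world"))   # [] -- still waiting for seq 0
print(buf.on_datagram(0, b"hello "))  # [b'hello ', b'world']
```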

Re:The always-present question for UDP (1)

DuckDodgers (541817) | about a year ago | (#44137499)

There's nothing stopping you from implementing your own flow control protocol in the data you send by UDP. TCP sends a periodic sequence acknowledgement of every set of packets it receives. If you implement your own flow control in UDP, you could have it only send back a message when it detects that some data was lost. Likewise, TCP connections maintain a little bit of state on each side. UDP does not, so the networking software in the client and server operating system has less work to do - just hand off the data to the application associated with that port for processing (although this is not "free", the application has to do its own processing of the data if you implemented your own flow control and so forth).

It doesn't make sense for somebody building a website in his garage, or even for a company at the smaller end of the Fortune 1000. But for some company like Google that is handling mountains of network traffic, I bet QUIC might save them a big pile of cash by slowing down the rate at which they need to enhance their internal networking bandwidth.
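
As a sketch of the "only send a message when something was lost" idea described above (Python, illustrative only; a real receiver would do this incrementally and rate-limit its feedback), the receiver just watches the sequence numbers and reports gaps:

```python
def detect_losses(received_seqs):
    """Yield (start, end) gaps seen so far -- the ranges a receiver would report as lost.

    Illustrative only: a real receiver would do this incrementally as datagrams
    arrive and would rate-limit the feedback it sends back to the sender.
    """
    expected = 0
    for seq in received_seqs:
        if seq > expected:
            yield (expected, seq - 1)   # everything in this range went missing
        expected = max(expected, seq + 1)

# Datagrams 3 and 4 were dropped; only one small loss report goes back upstream.
print(list(detect_losses([0, 1, 2, 5, 6])))   # [(3, 4)]
```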

Re:The always-present question for UDP (1)

WaffleMonster (969671) | about a year ago | (#44137013)

How do you stop a denial of service attack if both sides aren't required to maintain the overhead of the connection? TCP uses the overhead caused by ACK packets as a rate limiter on clients.

The "zero" RTT is like TLS session resumption or session tickets, in that it only works by assuming a set of initial parameters. If that assumption fails, then you fall back to additional rounds to hello/handshake the TLS parameters.
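
A tiny sketch of that fallback logic (Python; the FakeServer class and the round-trip counts are made up for illustration, not QUIC's real handshake): the client speculatively reuses cached server parameters and only pays the extra hello/handshake rounds when the cache is missing or stale.

```python
class FakeServer:
    """Stand-in for the remote peer; only models whether cached parameters still match."""
    def __init__(self, current_config="cfg-v2"):
        self.current_config = current_config

    def try_resume(self, cached_config):
        return cached_config == self.current_config

def handshake_round_trips(server, cached_config=None):
    """Return how many handshake round trips are spent before data can flow."""
    if cached_config is not None and server.try_resume(cached_config):
        return 0   # speculative handshake accepted: data rides in the first flight
    return 2       # cache missing, stale, or rejected: pay for a full hello/handshake

server = FakeServer()
print(handshake_round_trips(server, "cfg-v2"))  # 0 -- cached parameters still valid
print(handshake_round_trips(server, "cfg-v1"))  # 2 -- stale cache forces the fallback rounds
print(handshake_round_trips(server))            # 2 -- first contact, nothing cached
```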

No Audit (0)

Anonymous Coward | about a year ago | (#44136291)

I see no indication of a security audit. Given that SPDY initially had a major security hole (chosen plain text attack based on the compression+encryption combination), I will wait for some expert review of this.

That said, this sounds like a great idea, and I'm looking forward to seeing how this develops. It appears to be a lower-latency, faster-to-connect encrypted network protocol. I'd love to see all network traffic encrypted, and I suspect this would have a disproportionately large improvement for multi-hop encrypted streams, like Tor. If this works out, it will be great for privacy and security.

On the other hand, it might have horrible security holes all over. We need to be careful, but this does look promising. I'll give it a good read through, but I don't consider myself qualified to say much about its security.

Re:No Audit (1)

kermidge (2221646) | about a year ago | (#44136873)

While the inner workings of protocols are something I've not looked at and suspect are over my head, this does sound interesting and, whatever else Google is or isn't doing, I'm glad they're continuing to do some interesting research and fooling around with things.

Not necessarily the right place (2, Insightful)

Bruce Perens (3872) | about a year ago | (#44136351)

I have no objection to protocol experiments that are 100% Open Source implementations. I wouldn't trust one that was not, and an Open Standard is just instructions for people who make implementations.

But it seems that a lot of this might belong in a system facility rather than the browser and server. I don't know if it makes sense to put all of TLS in the kernel, but at least it could go in its own process. Using UDP is fine for an experiment, but having 200 different ad-hoc network stacks each tied into their own application and all of them using UDP is not.

Re:Not necessarily the right place (0)

Anonymous Coward | about a year ago | (#44136477)

If you believe in iteration of experiments, then putting it into the OS is probably a bad idea.
The OS is mostly replaced about every 5-10 years. Should it take that long to change something in the experiment?

In the FAQ, they say that the intent is to head into the system stack down the road.

Re:Not necessarily the right place (2)

Bruce Perens (3872) | about a year ago | (#44136739)

The OS is mostly replaced about every 5-10 years.

That's why we have Linux. You can get a real OS implementation in users hands immediately. You only need these poor half measures for the Microsoft version.

Re:Not necessarily the right place (1)

Wesley Felter (138342) | about a year ago | (#44136807)

Immediately you say? Android users might disagree.

Re:Not necessarily the right place (1)

Bruce Perens (3872) | about a year ago | (#44136985)

I wasn't saying that all of the Red Hat Enterprise Linux users would install it immediately in their mission critical systems on Wall Street, either.

But we can give you a significant number of users of a real kernel for your experiment.

Re:Not necessarily the right place (0)

Anonymous Coward | about a year ago | (#44138043)

Or MacOS, iOS, etc., but yea :)

The claim in the article was about client-side, not server-side. I think that Linux is probably king (or a king) of server-side these days(?). Me? I use Linux client-side most of the time, but I'm not exactly a part of the majority on this.

Re: Not necessarily the right place (0)

Anonymous Coward | about a year ago | (#44139167)

You can always install a new network stack driver in Windows. How do you think 3rd party VPN client software works? They don't need to be built into the RTM release of the next version of Windows.

Re:Not necessarily the right place (1)

dfghjk (711126) | about a year ago | (#44139215)

"That's why we have Linux."

That's an absurd and meaningless statement. It may be valuable, but it's not "why" and you don't speak for everyone. There are others whose opinions are far more principal to that question than yours.

"I have no objection to protocol experiments..."

What a relief! Google can go ahead on now that it has your blessing.

"But we can give you a significant number of users..."

Because they are yours to give?

If there's one thing you make clear here, Bruce Perens, it's conceit.

Re:Not necessarily the right place (0)

Bruce Perens (3872) | about a year ago | (#44139397)

There are others whose opinions are far more principal to that question than yours.

Those who ignore history are bound to make really big fools of themselves on Slashdot.

Go away, troll.

Re:Not necessarily the right place (2)

Wesley Felter (138342) | about a year ago | (#44136535)

I think Google intends to put it in the kernel once they have finished actually designing and standardizing it. Since it would take 10-15 years to get QUIC into the Windows kernel, they're putting it in Chrome as a stopgap.

Re:Not necessarily the right place (2)

bill_mcgonigle (4333) | about a year ago | (#44137727)

Well, hopefully a library at least. That's how some OS's are handling DTLS, which is similar.

That initial question of mine is addressed (partially) in the FAQ:

Why didn't you use existing standards such as SCTP over DTLS? QUIC incorporates many techniques in an effort to reduce latency. SCTP and DTLS were not designed to minimize latency, and this is significantly apparent even during the connection establishment phases. Several of the techniques that QUIC is experimenting with would be difficult technically to incorporate into existing standards. As an example, each of these other protocols require several round trips to establish a connection, which is at odds with our target of 0-RTT connectivity overhead.

but I still wonder if introducing a "DTLS 0-RTT mode" RFC wouldn't have been a better move, as far as gaining momentum, which DTLS has spent a few years building. I know, it's Google, but playing nice with others is a great way to get stuff done on the Internet. Even if it had to be DTLSv2, go through the process and get your stuff widely adopted.

Re:Not necessarily the right place (0)

Anonymous Coward | about a year ago | (#44139371)

Hopefully they will play nice eventually. SPDY is becoming HTTP 2.0, so hopefully if QUIC is a success its ideas will get merged into the proper standards.

Not the issue (-1, Troll)

chris200x9 (2591231) | about a year ago | (#44136411)

"The chinaman [protocol] is not the issue here, Dude." If google gives up data to the NSA who gives a shit about the encryption involved in transporting that data to a google server? That's like saying "I'm using bittorrent to seed copyrighted info to a MPAA spy but I'm using full stream encryption, they'll never catch me TROLOLOLOLOL"

Re:Not the issue (0)

DuckDodgers (541817) | about a year ago | (#44137535)

Funny, but I think the worry is that people will adapt QUIC for communications, websites, and programs that don't involve Google. Then if the protocol has security flaws, it's an engineered backdoor Google and the NSA can use.

Yes, I am not going to lose sleep over communicating with GMail via QUIC. I already assume anything I store there is in NSA vaults.

It's all about ads and tracking (0)

Animats (122034) | about a year ago | (#44136413)

The point of this is to improve performance for tiny HTTP transactions. The need for all those tiny transactions comes from ads and tracking data and their associated tiny bits of Javascript and CSS. The useful content is usually one big TCP send.

Blocking of all known tracking systems [mozilla.org] is a simpler solution.

Re:It's all about ads and tracking (2)

grmoc (57943) | about a year ago | (#44136577)

You should open up the perf tab of your browser and look at this page to see if it supports your conclusions.

Re:It's all about ads and tracking (1)

grmoc (57943) | about a year ago | (#44136625)

Lol, and sadly, it does. But it isn't true for many other sites. :)

Re:It's all about ads and tracking (1)

Bengie (1121981) | about a year ago | (#44137409)

You should see HTTPS, which has a 12-way handshake, over a 200ms cell-phone link. This is one of the reasons why we need something other than TCP+HTTPS. Fewer handshakes in exchange for more CPU usage, and we have tons of idle CPU time.
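
A rough back-of-the-envelope sketch of why this matters (Python; the round-trip counts are illustrative approximations, and real handshake counts vary by TLS version and options):

```python
RTT = 0.200  # seconds, a plausible mobile link as in the parent comment

def setup_latency(round_trips, rtt=RTT):
    """Time spent on connection setup before the first byte of the response."""
    return round_trips * rtt

# Illustrative round-trip counts:
# TCP handshake (1 RTT) + full TLS negotiation (2 RTTs) + HTTP request (1 RTT)
print(setup_latency(4))   # 0.8 seconds before any content arrives
# A 0-RTT style handshake that piggybacks the request on the first flight
print(setup_latency(1))   # 0.2 seconds
```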

That's a terrible idea (0)

Anonymous Coward | about a year ago | (#44136443)

It is bad enough that they rush the TCP start with no regard for the effects of the initial packet flood. If they are going to give up throughput control altogether, network operators will justifiably throttle flows to prevent unfair monopolization of the bandwidth through flooding. Web traffic is not real-time traffic. UDP creates an impression of urgency that isn't true. That kind of traffic will be a bad neighbor on your network.

Re:That's a terrible idea (2)

Wesley Felter (138342) | about a year ago | (#44136569)

QUIC has congestion control. (I suppose your brain would explode if you saw uTP, which runs over UDP yet is even less aggressive than TCP.)

Re:That's a terrible idea (0)

Anonymous Coward | about a year ago | (#44145603)

You're not wrong (and GP's a wee bit thick), and I'm not bashing uTP -- anything's better than nothing, and using UDP to experiment in userspace is nicer than throwing all your experimental stuff into TCP at first -- but if you haven't, compare it with TCP-LP for a similar concept done "right" for some value of right more related to production than experimentation.

Re:That's a terrible idea (0)

Anonymous Coward | about a year ago | (#44136737)

It is bad enough that they rush the TCP start with no regard for the effects of the initial packet flood. If they are going to give up throughput control altogether, network operators will justifiably throttle flows to prevent unfair monopolization of the bandwidth through flooding. Web traffic is not real-time traffic. UDP creates an impression of urgency that isn't true. That kind of traffic will be a bad neighbor on your network.

Wrong. QUIC is better at congestion control than TCP, and is fair when used alongside TCP. QUIC monitors both packet loss and latency, which gives it more information than TCP for flow control. The ACKs also include proof of received packets so an invalid ACK attack to cause a server to flood a network (which works with TCP) does not work with QUIC. QUIC also optionally (when beneficial) includes FEC to recover lost packets so it can still detect congestion via packet loss, but without the retransmission delay TCP gets. Also, the multiple multiplexed streams over QUIC get to work together to collect congestion information, which further provides an advantage for congestion control over TCP.

In short: QUIC is much better at congestion control than TCP, and suffers less latency from the packet loss used to signal the congestion.

There may be some cases that are not covered yet, but the design doc clearly shows that Google is aware of network congestion problems and has the problems mostly (if not completely) solved at least as well as TCP, but likely better. Asserting Google does not know how to control congestion on a network is troll worthy: Google runs a huge global network.

Re:That's a terrible idea (1)

WaffleMonster (969671) | about a year ago | (#44138103)

Wrong. QUIC is better at congestion control than TCP, and is fair when used alongside TCP. QUIC monitors both packet loss and latency, which gives it more information than TCP for flow control.

BS there are a number of congestion algorithms for TCP that use latency.

The ACKs also include proof of received packets so an invalid ACK attack to cause a server to flood a network (which works with TCP) does not work with QUIC

ACK attack requires guessing sequence numbers or being able to spy on the data path, which severely limits the usefulness vs. much, much lower-hanging fruit (DNS/chargen amplification)

QUIC also optionally (when beneficial) includes FEC to recover lost packets so it can still detect congestion via packet loss

Yea "FEC" as in sending duplicate packets from what I've read.

but without the retransmission delay TCP gets.

I don't understand this shit. If there is no cost, then how can there be meaningful congestion avoidance? TCP has fast retransmit; why is that not enough?

Also, the multiple multiplexed streams over QUIC get to work together to collect congestion information, which further provides an advantage for congestion control over TCP.

Huh? What prevents the OS vendor from using data from multiple TCP sessions as input to a congestion decision?

In short: QUIC is much better at congestion control than TCP, and suffers less latency from the packet loss used to signal the congestion.

In short the points you offered to get to this conclusion only make sense if you pretend TCP = RFC 793 and ignore decades of innovation.

There may be some cases that are not covered yet, but the design doc clearly shows that Google is aware of network congestion problems and has the problems mostly (if not completely) solved at least

This is nothing new. Everyone is aware of the importance of congestion avoidance.. thanks in no small part to IETF D.A.R.E campaign (Datagram Abuse Resistance Education)

as well as TCP, but likely better. Asserting Google does not know how to control congestion on a network is troll worthy: Google runs a huge global network.

Google is still just a CONTENT network.

Re:That's a terrible idea (0)

Anonymous Coward | about a year ago | (#44138753)

Wrong. QUIC is better at congestion control than TCP, and is fair when used alongside TCP. QUIC monitors both packet loss and latency, which gives it more information than TCP for flow control.

BS there are a number of congestion algorithms for TCP that use latency.

I wasn't aware of any that were part of the standard TCP stack, but there may be some.

The ACKs also include proof of received packets so an invalid ACK attack to cause a server to flood a network (which works with TCP) does not work with QUIC

ACK attack requires guessing sequence numbers or being able to spy on the data path, which severely limits the usefulness vs. much, much lower-hanging fruit (DNS/chargen amplification)

That may be true. Parent was implying QUIC was worse than TCP as far as flow control goes, so I just wished to point out an example from the design doc that apparently offers an improvement.

QUIC also optionally (when beneficial) includes FEC to recover lost packets so it can still detect congestion via packet loss

Yea "FEC" as in sending duplicate packets from what I've read.

Nope. Suppose you have 1% packet loss. By sending 1/50 of your data as parity packets, you can avoid most retransmission delays. Sending duplicate packets also works, but is a naive trivial case. There are much more sophisticated approaches as well (LDPC, Hamming codes, etc). Regardless, there are many cases where a round trip delay is way worse than a small increase in data size, so selectively doing FEC where it is beneficial can be very useful. QUIC has the info to know when it's beneficial.

Modems do FEC, and so does WiFi. It's related to what RAID does. Read into Shannon's limit a bit. It is all very well worked out and formalized.
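
A toy sketch of the single-loss parity case being described (Python, illustrative only, not QUIC's actual FEC framing): XOR all packets in a group to form one parity packet; if exactly one packet in the group is lost, XORing the survivors with the parity reproduces it without waiting for a retransmission.

```python
def xor_parity(packets):
    """Build one parity packet covering a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(received, parity):
    """Rebuild the single missing packet by XORing the survivors with the parity."""
    missing = bytearray(parity)
    for pkt in received:
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)

group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_parity(group)
# Suppose the third packet is lost in transit; no retransmission round trip is needed:
print(recover_missing([group[0], group[1], group[3]], parity))   # b'CCCC'
```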

but without the retransmission delay TCP gets.

I don't understand this shit. If there is no cost, then how can there be meaningful congestion avoidance? TCP has fast retransmit; why is that not enough?

I simply wanted to clarify that FEC allows packet loss to signal congestion without hurting latency as badly as it does with TCP (even if you can recover the packet, you still know it's lost and can act accordingly). Again, this is often, but not always, useful. You can recover the lost packet in much less than a round trip delay in these cases. Propagating the information back to the sender so they know to throttle still incurs a delay, of course.

Also, the multiple multiplexed streams over QUIC get to work together to collect congestion information, which further provides an advantage for congestion control over TCP.

Huh? What prevents the OS vendor from using data from multiple TCP sessions as input to a congestion decision?

Nothing prevents this from being done, but I don't think it is. It's easier to do with QUIC, and standard.

In short: QUIC is much better at congestion control than TCP, and suffers less latency from the packet loss used to signal the congestion.

In short the points you offered to get to this conclusion only make sense if you pretend TCP = RFC 793 and ignore decades of innovation.

I'll admit saying it's "much" better is an exaggeration, but I do think at the very least, it is not worse (as the parent claimed).

There may be some cases that are not covered yet, but the design doc clearly shows that Google is aware of network congestion problems and has the problems mostly (if not completely) solved at least

This is nothing new. Everyone is aware of the importance of congestion avoidance.. thanks in no small part to IETF D.A.R.E campaign (Datagram Abuse Resistance Education)

I agree here. Nothing very impressive in this respect. Parent simply claimed QUIC was bad at congestion avoidance (well, I assume that's what was implied).

as well as TCP, but likely better. Asserting Google does not know how to control congestion on a network is troll worthy: Google runs a huge global network.

Google is still just a CONTENT network.

Google runs a worldwide private network for their datacenters. Third parties get to use it via their various APIs and services as well (Compute Engine among other things). I don't see how the fact that yes, their network only transmits content makes it irrelevant to congestion control (content is all any network transmits; what's a non-content network?). I simply wanted to point out that it would be very surprising if a Google-developed protocol claiming to properly handle congestion control failed in that exact respect, since they deal with all aspects of the issue internally.

If you want to oppose my claim that Google is "aware of network congestion problems" right after asserting everyone is aware of it, go ahead, but I'll just think you are silly.

Re:That's a terrible idea (1)

WaffleMonster (969671) | about a year ago | (#44139021)

Nope. Suppose you have 1% packet loss. By sending 1/50 of your data as parity packets, you can avoid most retransmission delays. Sending duplicate packets also works, but is a naive trivial case. There are much more sophisticated approaches as well (LDPC, Hamming codes, etc). Regardless, there are many cases where a round trip delay is way worse than a small increase in data size, so selectively doing FEC where it is beneficial can be very useful. QUIC has the info to know when it's beneficial.

Read the design document again... they say key packets in session setup can be proactively duplicated... later they explicitly state that simple packet duplication counts as "FEC"... FEC is normally implemented within or below the packetization layer within the link, where it makes sense and can scale to replace individual corrupted symbols in the transmission stream. When you get to the packet layer you're severely constrained: if you don't fill the MTU you are wasting resources. Correction codes will either consume more bandwidth or have very limited ability to do anything about packet loss. There just are not enough packets transmitted within the lifetime of the average http session for this to work in practice without substantial overhead. Also there are scary second-order effects to think about. Increasing data consumption in response to congestion can very easily become a cause of more congestion.

I wasn't aware of any that were part of the standard TCP stack, but there may be some.

Congestion algorithms typically have no corresponding expression in TCP data fields transmitted over the wire. It is simply a discipline controlled by the operating system. There have been quite a number of them developed over the years.

Nothing prevents this from being done, but I don't think it is. It's easier to do with QUIC, and standard.

Why? QUIC is currently not a standard and people have been talking about correlated behaviors to set ICW and friends for a long time.

If you want to oppose my claim that Google is "aware of network congestion problems" right after asserting everyone is aware of it, go ahead, but I'll just think you are silly.

Being aware of a problem does not mean you have the skills or resources necessary to correct the problem. Just because I know my car broke down it does not automatically follow I have any idea how to fix it. Everyone knows the importance of congestion avoidance. I suspect very few are actually smart enough to advance the state of the art.

The problem with Google, from lurking on tcpm, is that when they say "we have reduced x by y" they stop their analysis and assume victory. There is no talk about second-order effects sometimes present in their own data.

Content networks see different traffic flows from those of an eyeball or transit network. The upstream channel of an eyeball network is mostly nil... networks with more symmetric traffic should be accounted for as well. It has to be more than Google says for it to be applicable to the whole Internet.

Re:That's a terrible idea (0)

Anonymous Coward | about a year ago | (#44139391)

Nope. Suppose you have 1% packet loss. By sending 1/50 of your data as parity packets, you can avoid most retransmission delays. Sending duplicate packets also works, but is a naive trivial case. There are much more sophisticated approaches as well (LDPC, Hamming codes, etc). Regardless, there are many cases where a round trip delay is way worse than a small increase in data size, so selectively doing FEC where it is beneficial can be very useful. QUIC has the info to know when it's beneficial.

Read the design document again... they say key packets in session setup can be proactively duplicated... later they explicitly state that simple packet duplication counts as "FEC"... FEC is normally implemented within or below the packetization layer within the link, where it makes sense and can scale to replace individual corrupted symbols in the transmission stream. When you get to the packet layer you're severely constrained: if you don't fill the MTU you are wasting resources. Correction codes will either consume more bandwidth or have very limited ability to do anything about packet loss. There just are not enough packets transmitted within the lifetime of the average http session for this to work in practice without substantial overhead. Also there are scary second-order effects to think about. Increasing data consumption in response to congestion can very easily become a cause of more congestion.

You are referring to the duplication mentioned in the "PROACTIVE SPECULATIVE RETRANSMISSION" section. In that case there is very little data (only 1 packet), so trivial duplication is appropriate. The "PLAUSIBLE ERROR CORRECTING PATTERNS" section also covers other schemes for more general, higher-throughput cases (such as parity, which I mentioned). Both these approaches are clearly FEC. There are lots of FEC schemes which have their uses in different contexts. Yes, you are right that in many HTTP cases FEC isn't too useful (and you don't use it in those cases). There are other cases where it is, though, like during the startup of encrypted connections. I don't see the option to make the bandwidth-vs-latency trade-off as a bad thing.

I'll take this moment to point out that there are a lot of tuning options in QUIC that actually affect the wire protocol. I guess the tendency is for stuff to get more complicated, to special-case more use patterns to gain that extra little bit of efficiency. This isn't the elegant kind of standard I'd like (when/if it becomes standard), but it seems useful if implemented well.

Regarding sharing congestion-related information between connections: QUIC multiplexes multiple streams to the same server over the same connection, which makes this information sharing trivial (and part of the basic QUIC implementation). Thus it's easier. I wasn't arguing that it's hard with TCP, but it's less trivial. You make a good point though: I do need to read into progress in that area with TCP.

I'll agree that Google (or anyone) has not really 'solved' the congestion problem in the general sense. I think I did incorrectly imply otherwise earlier and I apologize for that.

Re:That's a terrible idea (1)

Bengie (1121981) | about a year ago | (#44137205)

BitTorrent has a custom protocol that uses UDP and is even more fair than TCP. UDP doesn't have flow-control built in, but there is nothing stopping the application layer from implementing it.

The main reason to not use TCP is that you can roll your own hand-shake and flow-control and congestion detection, without relying on the baked-in static implementation your OS has.

Because TCP is broken? (0)

Anonymous Coward | about a year ago | (#44136447)

What do we get from this that we don't already get from TCP? Not to be as conspiratorial as some other posters here; but remember the good ol' days when Microsoft angered you by playing around with HTTP? Now Google wants to muck around at a much lower layer. Whatever.

Re:Because TCP is broken? (1)

Squidlips (1206004) | about a year ago | (#44136803)

UDP has a number of uses. It used to be the protocol behind Sun's NFS (which was cool in its day), but it can be used now for rapid transmission of small messages because the connection is simple (actually it used to be called connectionless).

Re: Because TCP is broken? (0)

Anonymous Coward | about a year ago | (#44139239)

It still is connectionless

Re:Because TCP is broken? (1)

jfdavis668 (1414919) | about a year ago | (#44136945)

UDP is almost as old as TCP. They are not mucking around with it, they are just using it. It just doesn't guarantee delivery like TCP. If you are streaming something, who cares about lost packets from the past?

Re:Because TCP is broken? (1)

wonkey_monkey (2592601) | about a year ago | (#44137243)

Not to be as conspiratorial as some other posters here;

There's nothing conspiratorial about it. It's just the usual stupid-trying-to-look-clever attitude of some to immediately assume that any announcement of a new experiment or idea is doomed to failure.

Google aren't claiming this is going to be the next thing, yet. They're experimenting, and it's interesting that they're doing so, so let's watch this space instead of pissing on their very small, low-key parade.

Re:Because TCP is broken? (4, Informative)

Bengie (1121981) | about a year ago | (#44137359)

TCP has some major issues with congestion control that don't play well with buffer bloat. The Internet is bursty in nature. TCP takes too long to ramp-up. It is actually easier on infrastructure to burst 10MB over one second than to stream it over 10 seconds.

There are a lot of write-ups on issues with TCP, but one of the big issues that is starting to become a problem as speeds increase but latency is staying fixed, is the congestion control. Because TCP starts off slow and ramps up, it tends not to make use of available bandwidth. Un-used bandwidth is bad. The other issue is current TCP uses packet-loss to decide when to back off. The issue this creates is packet-loss tends to affect a lot of connections at the same time. You get this synchronization where lots of computers experience packet-loss all at the same time, so they all back off at the same time. Suddenly the route is under-utilized. All of the connections start building up again until the route is over-utilized, then they all back off at the same time.

This issue alone could possibly cause large portions of the Internet to fail. It has happened in the past and the variables are getting to be similar again. Essentially you're left with a large portion of the Internet routes in a constant violent swing between over-utilized and under-utilized.

You get this issue where the average utilization is low, but packet-loss and jitter are high. Relatively speaking.

There is a lot of theory on how to fight these issues, but the only real way to figure this out is to actually test these theories at large scale. A protocol that rides on top of UDP and runs in the application is the perfect place to test this. If something goes wrong, you just disable it. You can't do that with most OSes' TCP stacks.
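
The synchronization effect described above is easy to see in a toy model (Python; the numbers are arbitrary, and this is a cartoon of loss-based AIMD, not any real TCP stack): every flow ramps up together, overshoots the shared link together, and halves together, giving the sawtooth between over- and under-utilization.

```python
def simulate_aimd(flows=5, capacity=40.0, rounds=18):
    """Cartoon of loss-based AIMD sharing one link: every flow adds one unit per
    round until the link overflows, at which point every flow sees loss and halves
    at the same time -- the synchronized sawtooth described above."""
    windows = [2.0] * flows
    for r in range(rounds):
        if sum(windows) > capacity:
            windows = [w / 2 for w in windows]   # everyone backs off together
        else:
            windows = [w + 1 for w in windows]   # everyone ramps up together
        print(f"round {r:2d}: offered load {sum(windows):5.1f} / {capacity}")

simulate_aimd()
```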

Re:Because TCP is broken? (1)

grmoc (57943) | about a year ago | (#44137985)

Mod parent up, please!

Re:Because TCP is broken? (1)

WaffleMonster (969671) | about a year ago | (#44138295)

TCP has some major issues with congestion control that don't play well with buffer bloat.

Nothing plays well with buffer bloat; that's why you fix buffer bloat.

The Internet is bursty in nature. TCP takes too long to ramp-up.

This is what TCP quick start is for.

but one of the big issues that is starting to become a problem as speeds increase but latency is staying fixed, is the congestion control.
Because TCP starts off slow and ramps up, it tends not to make use of available bandwidth. Un-used bandwidth is bad.

Path oversubscription is far worse.

The other issue is current TCP uses packet-loss to decide when to back off.

What else would it use?

The issue this creates is packet-loss tends to affect a lot of connections at the same time. You get this synchronization where lots of computers experience packet-loss all at the same time, so they all back off at the same time. Suddenly the route is under-utilized. All of the connections start building up again until the route is over-utilized, then they all back off at the same time.

Hence the jitter parameter in the retransmit timer computation.

This issue alone could possibly cause large portions of the Internet to fail. It has happened in the past and the variables are getting to be similar again. Essentially you're left with a large portion of the Internet routes in a constant violent swing between over-utilized and under-utilized.

The historical congestive collapses occurred because nobody was using any congestion control.

There is a lot of theory on how to fight these issues, but the only real way to figure this out is

And RFCs and working code even. The year is 2013...please adjust your chronometer accordingly.

Re:Because TCP is broken? (0)

Anonymous Coward | about a year ago | (#44139349)

Other than packet loss, you can use one-way delay; see LEDBAT as used by uTP in a hundred million BitTorrent clients.
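
For anyone who hasn't read RFC 6817, the core of the idea fits in a few lines. This is only a rough sketch (the class name and constants are mine, and it omits the delay filtering the RFC specifies), but it shows the shape: grow the window while measured queuing delay is below a target, shrink it as the queue builds:

    # Rough sketch of the LEDBAT idea (RFC 6817): one-way queuing delay, not
    # packet loss, is the congestion signal. Constants are illustrative.
    TARGET = 0.100   # target queuing delay in seconds (the RFC caps it at 100 ms)
    GAIN = 1.0
    MSS = 1460

    class ToyLedbat:
        def __init__(self):
            self.base_delay = float("inf")   # lowest one-way delay seen ~ propagation delay
            self.cwnd = 2 * MSS

        def on_delay_sample(self, one_way_delay, acked_bytes):
            self.base_delay = min(self.base_delay, one_way_delay)
            queuing_delay = one_way_delay - self.base_delay
            off_target = (TARGET - queuing_delay) / TARGET
            # Positive off_target (queue below target) grows the window; negative
            # (queue building) shrinks it, so the sender yields to loss-based TCP.
            self.cwnd += GAIN * off_target * acked_bytes * MSS / self.cwnd
            self.cwnd = max(self.cwnd, 2 * MSS)

That yielding behaviour is the whole reason uTP was adopted: the background transfer backs off as soon as the bottleneck queue starts to build.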

Re:Because TCP is broken? (1)

Bengie (1121981) | about a year ago | (#44153327)

Adding to your response: ECN. Not to say ECN is better than latency-based signals, but it's still better than packet loss.
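
For the curious, opting into ECN from user space isn't exotic. A minimal sketch (Linux-specific; the address and port are placeholders, and reading CE marks on the receive side needs IP_RECVTOS ancillary data, which is omitted here):

    # Mark outgoing datagrams ECT(0) in the IP TOS byte so ECN-capable routers
    # can signal congestion by rewriting the codepoint to CE instead of dropping.
    import socket

    ECT0 = 0x02  # ECN-Capable Transport (0) codepoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
    sock.sendto(b"hello", ("192.0.2.1", 4433))  # 192.0.2.1 / 4433 are placeholders

The hard part isn't setting the bits; it's that plenty of middleboxes have historically mangled or ignored them.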

Re:Because TCP is broken? (0)

Anonymous Coward | about a year ago | (#44139133)

This is a nice post. I was saddened to see it hidden. Please do give it some more visibility?

Just like in my nightmares (1)

MSG (12810) | about a year ago | (#44136493)

So... we're probably going to see new connection-flood DoS attacks like the ones that prompted SYN cookies a couple of decades ago. Application stacks will need to handle their own congestion control, and applications that do so poorly will negatively impact carrier networks. And, yay, a new variant of TLS when there are already several versions that aren't widely implemented, let alone deployed.

Oh, and it all lives in the application, so each of those problems can be solved over and over. Yay!

Re:Just like in my nightmares (1)

wonkey_monkey (2592601) | about a year ago | (#44137257)

Google isn't going to badger everyone into abandoning TCP, y'know.

Re:Just like in my nightmares (0)

Anonymous Coward | about a year ago | (#44137947)

TLS over UDP has been needed for a long while. It's just finally making some headway -- I don't think one other "competing" experimental version is going to be the undoing of the Internet.

QUIC = HDX = PCOIP (0)

Anonymous Coward | about a year ago | (#44136509)

Everyone has seen this work before; nothing new here except the who and the why. Without protocols like this, VDI is next to impossible to give anything like a normal experience over the web. If Google rolls out cloud desktops to go with their apps, I think certain people may take notice. VMware recently added HTML5 access into the mix; though it's still kind of spartan in the current version, it's hopefully going to mature in the next. I don't really pay attention to Citrix that much, so I don't know where the state of HDX is right now.

Fed up with google "standards" (0, Flamebait)

neutrino38 (1037806) | about a year ago | (#44136585)

Here we are again.

After VP8 and Protocol Buffers, Google is at it again, providing a free replacement for an existing standard (DTLS here: http://www.rfc-editor.org/rfc/rfc4347.txt [rfc-editor.org] )

But of course, Google's people know better and have more money. And the list goes on: Dart as a replacement for JavaScript, Protocol Buffers as a replacement for ASN.1, SPDY to replace HTTP. With Jingle, Google tried to replace the SIP protocol as well; at least there they extended an existing standard, but they dropped the support when shutting down Google Talk. Of course everything is free, open source and Not Evil. So why bother?

Well, as an aging network engineer, I am starting to get fed up with such "innovations". More consensus-building would be refreshing.

Re:Fed up with google "standards" (1)

Anonymous Coward | about a year ago | (#44136815)

Google, frankly, doesn't care what other people create. They suffer from the world's worst case of "not invented here" syndrome, and it's starting to seriously hinder the web in general.

They need to step back and stop trying to reinvent every wheel on their own. It's cringe-worthy to see them do this kind of self-centered stuff while hiding behind the facade of open specifications.

And yes, I mean hiding. It doesn't matter if this is an open source and open spec, because by the time people start relying on it in Chrome/Blink, the odds are quite low that anyone who catches up to it will be able to change it at all beyond perfunctory bug fixes or minor extensions.

Re:Fed up with google "standards" (1)

DuckDodgers (541817) | about a year ago | (#44137565)

Of course they're open! Look at the public API for Google Plus, and their pledge to support XMPP forever.

Oh, wait...

Introducing GTCP (0, Troll)

sl4shd0rk (755837) | about a year ago | (#44136881)

I don't know how you expect to keep things in sequence and accommodate dropped packets without implementing a sequencing and transmission protocol. With that in mind, you've just recreated TCP.

What's more likely is Google will create a TCP protocol (you must accept the EULA) which forwards a copy of everyone's session to Google. When you make a request to, say, weather.com, Google will then serve you results cached from previous sessions to make things faster by eliminating network hops to weather.com. Muhahahahaha...

Re:Introducing GTCP (1)

Anonymous Coward | about a year ago | (#44138307)

Skip your next physical; your knee-jerk reflexes are working. The doc is actually an interesting read, and goes into their reasoning as to why they aren't recreating TCP, etc. I guess some people just find that kind of reading interesting, and some would rather be +5 awesome.

/. & TLA / FLA ! (0)

Anonymous Coward | about a year ago | (#44137397)

Wow, we hit another low point for /. A 2-line 'article' gets to the front page because it contains a bunch of TLAs and an FLA (Three-Letter Abbreviations and a Four-Letter Abbreviation).

It would be really nice if someone would actually explain, in any 'article', what the f*ck they are talking about to all the people reading it who don't know what TLS is. Or even UDP, for that matter.

Is it that difficult to simply put some brackets and the full name after the TLA?

Sorry all, but I can't figure out why this is even 'news' !!!

Re:/. & TLA / FLA ! (0)

Anonymous Coward | about a year ago | (#44145621)

Sorry all, but I can't figure out why this is even 'news' !!!

It's actually news for nerds, like it used to be in the old days. Nerds generally know what TLS, UDP, etc. are; the few who don't know damn well know how to look them up and choose the meanings related to networking.

Now get your arse back to SlashBI where you belong, twerp.

Why I question googles motives (1)

WaffleMonster (969671) | about a year ago | (#44137765)

Reducing RTT for connection setup and encryption is a noble goal, yet I'm not clear on why, technically, this can't be solved without reinventing TCP over UDP.

TCP Fast Open coupled with TLS session tickets/Snap Start offers essentially the same possibility of transmitting an encrypted request before the first round trip completes.
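
For reference, the client side of TCP Fast Open is already usable from user space on Linux. A rough sketch (the hostname is just an example; both the kernel and the server need TFO enabled, and the very first connection falls back to a normal handshake while the cookie is fetched):

    # TCP Fast Open client sketch: sendto() with MSG_FASTOPEN both initiates the
    # connection and queues the payload so it can ride in the SYN once a TFO
    # cookie from a previous connection is cached.
    import socket

    MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)  # Linux flag value

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.sendto(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
                MSG_FASTOPEN, ("example.com", 80))
    reply = sock.recv(4096)

Put a TLS session ticket in that first flight instead of a plaintext request and you're already most of the way to the round-trip savings being advertised.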

Multiple concurrent TCP streams normally end up having much the same properties as multiplexed UDP in the aggregate, so I don't buy the head-of-line-blocking koolaid either.

What I think is really going on here is that they want an excuse to bring congestion control algorithms into user space, where they can program whatever congestion algorithms they want into their browser and bypass having to deal with the OS vendors.

The design of congestion algorithms is a black art... I worry about algorithms designed using data from Google's networks not being generally applicable to the Internet, or otherwise being too aggressive, which could ultimately give browser vendors and sites an unfair advantage or prove detrimental to the larger Internet.

Even in their text they talk about "speculative" double posting of packets just to mitigate against the *possibility* of incurring delays due to dropped packets. There must be some kind of real pain felt when packet loss occurs, otherwise things turn to shit. Google is always blabbing away about "tail loss" and avoiding the dreaded RTO, but they never seem to care about the repercussions.
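
To be fair to the idea, the "speculative" retransmit they describe is simple enough to sketch; the question is who pays for it when everyone does it. Purely a toy (the timer value and structure are invented, not taken from any Google document):

    # Toy tail-loss probe over UDP: re-send the last datagram of a burst after a
    # short probe timeout, long before a full retransmission timeout would fire.
    import socket, select

    PROBE_TIMEOUT = 0.01  # 10 ms, far shorter than a typical RTO

    def send_burst_with_tail_probe(sock, packets, addr):
        for pkt in packets:
            sock.sendto(pkt, addr)
        # If nothing comes back quickly, assume the tail may have been dropped and
        # send it again -- trading extra load on the path for lower latency.
        ready, _, _ = select.select([sock], [], [], PROBE_TIMEOUT)
        if not ready:
            sock.sendto(packets[-1], addr)

That extra copy is exactly the repercussion in question: it helps the sender's tail latency and costs everyone sharing the path a little capacity.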

QUIC; We were first! (1)

sustik (90111) | about a year ago | (#44137923)

Re:QUIC; We were first! (0)

Anonymous Coward | about a year ago | (#44138357)

Nah, they'll just deindex you so that evidence that you were first can't be googled. :-)

Why over UDP? (1)

Chrisq (894406) | about a year ago | (#44140923)

What I don't understand is: why over UDP? They are building a transport protocol, which logically should be another alternative to TCP, UDP, SCTP, etc. Wouldn't that be both more efficient and architecturally cleaner?

Re:Why over UDP? (2)

petermgreen (876956) | about a year ago | (#44144297)

UDP provides a mechanism (source ports) for multiple client applications on the same host to coexist. Furthermore through the mangling of source ports NATs can allow multiple hosts running UDP applications to coexist and communicate with the same servers from behind one public IP.

If you created a new IP protocol then you'd have to implement your own mechanism for multiple client applications on the same host to coexist. Furthermore, your system would likely break if two users behind the same NAT tried to access the same server with it, because unless the NAT specifically supported your new protocol* it would have no way to differentiate between the traffic to different clients.

The overhead of building on top of UDP is pretty minimal: a source and destination port, which as I've mentioned above are useful for allowing applications to coexist, and a checksum, which may or may not be redundant with checksums in your new protocol.

* Which will likely happen sometime around when hell freezes over.
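
You can see the port mechanism doing its job with a few lines; this is just a demonstration with a placeholder server address:

    # Two UDP sockets talking to the same server are told apart purely by their
    # ephemeral source ports -- the same field a NAT rewrites so that many hosts
    # behind one public IP can share a server.
    import socket

    server = ("192.0.2.1", 4433)  # placeholder address (TEST-NET-1)

    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    a.connect(server)  # connect() on UDP just fixes the destination and
    b.connect(server)  # makes the kernel assign a local ephemeral port

    print("socket a source port:", a.getsockname()[1])
    print("socket b source port:", b.getsockname()[1])  # differs from a's

A brand-new IP protocol number gets none of that for free: no ports, no existing NAT handling, and usually a firewall default-deny on top.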

Re:Why over UDP? (1)

Chrisq (894406) | about a year ago | (#44151673)

Thanks - you deserve an "informative" mod point!

SPDY Snappy QUIC Go Dart (1)

s1lverl0rd (1382241) | about a year ago | (#44140945)

What's with the naming convention? With Google releasing SPDY, Snappy and QUIC, I'm guessing they will run out of synonyms for 'fast' sooner than Apple will run out of cats...

Re:SPDY Snappy QUIC Go Dart (1)

loosescrews (1916996) | about a year ago | (#44146411)

Haven't you heard? Apple already ran out of cats. 10.9 is going to be called Mavericks [wikipedia.org] .
