
Taking Google's QUIC For a Test Drive

Soulskill posted about 10 months ago | from the hopefully-in-one-of-google's-automated-cars dept.

Networking 141

agizis writes "Google presented their new QUIC (Quick UDP Internet Connections) protocol to the IETF yesterday as a future replacement for TCP. It was discussed here when it was originally announced, but now there's real working code. How fast is it really? We wanted to know, so we dug in and benchmarked QUIC at different bandwidths, latencies and reliability levels (test code included, of course), and ran our results by the QUIC team."


Fuck you, site. (5, Informative)

Anonymous Coward | about 10 months ago | (#45370817)

Javascript required to view *static* content?

No.

Learn how to write a webpage properly.

Re:Fuck you, site. (0)

Anonymous Coward | about 10 months ago | (#45371193)

Yeah I like how it teases you first with the content THEN disappears...

Re:Fuck you, site. (4, Interesting)

GameboyRMH (1153867) | about 10 months ago | (#45371557)

Fun trick: Copy the address into your URL bar, hit enter and then very quickly hit Escape.

Javascript isn't technically required to view the page (as shitty as that would be). They're just being dicks.

Re:Fuck you, site. (2)

Derek Pomery (2028) | about 10 months ago | (#45371707)

Heh. I was trying to read this from work 'cause, well, it is exactly the sort of work-relevant stuff that's worth checking out.
Stupid firewall here was stupid and blocked their site (as instant messaging or some other stupidity). No prob, I switched to ssh + tmux + w3m (where I have like 30 tabs open) and opened it. Aaaaand hit their lame redirect. Luckily, hitting back was sufficient to solve that in w3m.

Re:Fuck you, site. (0)

Anonymous Coward | about 10 months ago | (#45371751)

LOL.

Re:Fuck you, site. (1)

Anonymous Coward | about 10 months ago | (#45372083)

Completely agreed.

Another way to get around this besides the quick esc-hitting is to install https://addons.mozilla.org/en-US/firefox/addon/requestpolicy/ [mozilla.org]

Re:Fuck you, site. (1)

Anonymous Coward | about 10 months ago | (#45372249)

Slashdot, news for luddites...

Re:Fuck you, site. (0)

Anonymous Coward | about 10 months ago | (#45372783)

All I get is the article and a notice saying "NoScript blocked a <META> redirect inside a <NOSCRIPT> element: ...", but maybe that's just because I'm using NoScript and RequestPolicy to un-fuck the web.

Re:Fuck you, site. (1)

Cajun Hell (725246) | about 10 months ago | (#45372937)

It looks like they know how to do it, and are just actively hostile:

<noscript>
<meta http-equiv="refresh" content="0; url=/no-javascript/" />
</noscript>

The content is really on the page; they just added a little more to it (went to extra trouble!) to make their site fail.

I guess browsers need "pay attention to refresh" to become an opt-in option.

It's not broken... (1)

Joining Yet Again (2992179) | about 10 months ago | (#45370857)

...but Google said something.

Let's fix it!

Re:It's not broken... (2)

ThatsMyNick (2004126) | about 10 months ago | (#45371011)

It is broken. Google just made a bad solution. Doesn't mean the problem doesn't exist.

Re:It's not broken... (1)

grmoc (57943) | about 10 months ago | (#45372503)

To be clear, the solution isn't even done being implemented yet-- the project is working towards achieving correctness still, and hasn't gotten there yet. After that part is done, the work on optimization begins.

As it turns out, unsurprisingly, implementing a transport protocol which works reliably over the internet in all conditions isn't trivial!

Re:It's not broken... (1, Interesting)

Anonymous Coward | about 10 months ago | (#45371057)

TCP is far from not broken. TCP for modern uses is hacks upon hacks at best. Everyone knows this.

The problem is coming to an agreement as to what is the best way to get away from this to optimize things best.
SPDY worked fairly well, god knows what is happening there. It helped fix a lot of lag and could cut down some requests by more than half with very little effort on part of the developers or delivery mechanisms.
This... yeah not sure what is happening there either.

TCP is far from useful. It is terrible for most things in fact. It is sadly all we have because we got used to it. Just like we got used to using paper instead of computers. So much for the future.
TCP's abuse is on a level even worse than the web being used to deliver the internet through it, instead of the web being just another service on top of the internet like it originally was.
Both are huge problems that won't be fixed overnight. And both are problems that do need serious fixing, as bloat is constantly increasing every 2 years in service deployment. It is a headache for us developers and even more of a headache for the network guys. My poor friend, I feel bad for the things I do to him.

Re:It's not broken... (2)

Golan Machello (2938743) | about 10 months ago | (#45371335)

TCP is far from not broken. TCP for modern uses is hacks upon hacks at best. Everyone knows this.

I think most of the people screaming "TCP is broken!" are those with lots of bandwidth who have very specific uses. TCP seems to work quite well for almost everything I have thrown at it. I have a low-latency 500kbps down / 64kbps up Internet connection and do mostly SSH and HTTP. I am able to saturate my link quite well. I don't know if the QUIC guys are thinking about the significant portion of the population who simply can't get >1.5Mbps connectivity to their homes. In fact their slides seem to indicate they are more wasteful of bandwidth - a risky tradeoff IMHO.

Re:It's not broken... (1)

profplump (309017) | about 10 months ago | (#45371547)

TCP has limitations even on links like yours. In fact many of the limitations of TCP are worse on lossy, low-bandwidth links than on faster, more reliable ones. And the fact that you can saturate your link is not evidence that TCP isn't slowing you down -- given enough ACKs I can fill any pipe, but that doesn't mean I'm transferring data efficiently.

Also be careful how you characterize "wasteful of bandwidth". For example, a typical TCP connection over a typical DSL line would be unusable without the FEC provided at the link layer; when tuned correctly, that "wasted" bandwidth used for duplicate signaling actually increases throughput significantly by eliminating retransmissions. FEC is actually a very common technique in purpose-built WAN acceleration gear for high-bandwidth pipes, and can be useful even when your packet loss rate is quite small.
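
To make the FEC idea concrete, here is a minimal sketch of the simplest XOR-parity flavor (purely illustrative; it is not the coding scheme QUIC or any DSL modem actually uses): for every group of data packets you also send one parity packet, so a single loss within the group can be rebuilt at the receiver without waiting a round trip for a retransmission.

    # Illustrative XOR-parity FEC in Python: one parity packet per group.
    # Any single lost packet in the group can be reconstructed from the
    # packets that did arrive plus the parity packet.

    def make_parity(packets):
        """XOR all packets together (zero-padded to equal length)."""
        size = max(len(p) for p in packets)
        parity = bytearray(size)
        for p in packets:
            for i, b in enumerate(p):
                parity[i] ^= b
        return bytes(parity)

    def recover(received, parity):
        """Rebuild the single missing packet from the survivors."""
        missing = bytearray(parity)
        for p in received:
            for i, b in enumerate(p):
                missing[i] ^= b
        return bytes(missing)

    group = [b"pkt-one", b"pkt-two", b"pkt-three"]
    parity = make_parity(group)
    # Suppose the second packet never arrives:
    rebuilt = recover([group[0], group[2]], parity)
    assert rebuilt.rstrip(b"\x00") == b"pkt-two"

The "wasted" parity bytes pay for themselves as soon as they save one retransmission round trip on a lossy link.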

Re:It's not broken... (0)

Anonymous Coward | about 10 months ago | (#45372139)

Oh please, that "double" signalling - i.e. the second layer that provides a lossless connection to the application - is just going to be pushed up into the application layer. Not saying there isn't a different solution there, but don't confuse layer 2 reliability between two points with end-to-end reliability at layer 4 and above.

Re:It's not broken... (1)

profplump (309017) | about 10 months ago | (#45371421)

Actually, non-Google studies suggest that SPDY is only marginally helpful in decreasing page load times unless there's aggressive server push of dependent documents AND favorable parameters and network conditions for the underlying TCP connection. For example, SPDY does very poorly on lossy connections, particularly with the default TCP recovery settings. And even server push has problems -- in addition to requiring configuration, it bypasses the client cache mechanism, and on low-bandwidth connections the additional data transfer may overwhelm the other savings SPDY provides.

Don't get me wrong, I think SPDY is a good idea in general. But it's still built on TCP and subject to the limitations thereof. If it were built on a reliable message-passing protocol instead the application layer of SPDY would be significantly simpler and network performance would improve in a number of common cases.

Re:It's not broken... (1)

grmoc (57943) | about 10 months ago | (#45372529)

I'd be curious to see that study (or studies) -- can you tell me which one(s) it is, so I can go read 'em and fix stuff?
I thought I was aware of most of the SPDY/HTTP2 studies, but that is becoming more and more difficult these days!

Re:It's not broken... (0)

Anonymous Coward | about 10 months ago | (#45371079)

It IS broken in some ways, just read the benchmark link ... Google hasn't fixed it yet though.

first impression (1)

watcher-rv4 (2712547) | about 10 months ago | (#45370881)

Right now, seems that QUIC is not that quic.

Re:first impression (4, Informative)

bprodoehl (1730254) | about 10 months ago | (#45370993)

Yeah, with the current state of the code, and for the scenarios we tested, that's about right. Google's big focus is on doing a better job of multiplexing streams and reducing the number of round-trips required to establish the connection and stuff. So our test scenario, of pulling down a 10MB pseudorandom file, is a scenario that's near and dear to our hearts, but isn't at the top of Google's TODO. I suspect that flipping on forward error correction is a simple thing, and changing the maximum congestion window size to better overcome the Long Fat Network problem shouldn't be too bad, either.
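
For readers unfamiliar with the Long Fat Network problem mentioned above: throughput is capped at window size divided by round-trip time, so the window has to cover the bandwidth-delay product of the path. A rough back-of-the-envelope calculation (numbers are illustrative, not from the benchmark):

    # Bandwidth-delay product: how much data must be "in flight" to fill a pipe.
    def bdp_bytes(bandwidth_bps, rtt_seconds):
        return bandwidth_bps * rtt_seconds / 8

    # A 100 Mbit/s link with 100 ms RTT needs ~1.25 MB in flight:
    print(bdp_bytes(100e6, 0.100))            # 1250000.0 bytes

    # A window capped at 64 KB limits throughput to window / RTT:
    print(64 * 1024 / 0.100 * 8 / 1e6)        # ~5.2 Mbit/s, no matter how fast the link is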

Re:first impression (1)

plover (150551) | about 10 months ago | (#45371173)

Is Google's focus on making serving up the traffic more efficient? Obviously if it improves the client experience it's a win, but I would imagine they'd be more invested in a way to pump 2,000 QUIC streams out of a box that can only handle 1,000 TCP/HTTP streams today.

Re:first impression (1)

grmoc (57943) | about 10 months ago | (#45372561)

That is a complicated question :)

Hopefully this mostly answers it:

The goal is to not be worse than TCP in any way. Whether or not we'll achieve that is something we won't know 'till we've achieved correctness and had time to optimize, and then time to analyze where and why it performs badly, then iterate a few times!

Better on Paper, Worse In Reality (1)

Anonymous Coward | about 10 months ago | (#45370925)

I understand the limitations of TCP, and although QUIC may look good on paper, the benchmarks in the linked article show that in every test QUIC failed miserably and was far worse than TCP. So the real-world benefits of QUIC would be what, then? Once Google has a protocol that actually out-performs the tried and true on every front, then bring it to the party; otherwise just stahp already.

Re:Better on Paper, Worse In Reality (0)

Joining Yet Again (2992179) | about 10 months ago | (#45370971)

The private sector always does a better job, you fucking heathen.

Re:Better on Paper, Worse In Reality (1)

game kid (805301) | about 10 months ago | (#45370991)

Yeah. Google is breaking the internet, selling off its users, and generally being a Facebook parody, and YouTube co-founder Jawed Karim had something (however brief) to say about it [theguardian.com] . It's a case study in why selling off your internet startup that happens to fulfill your life dreams and customer needs should be a worst-case scenario, not a bloody business model.

Re:Better on Paper, Worse In Reality (1)

bprodoehl (1730254) | about 10 months ago | (#45371061)

It's pretty nice that the code is actually in a state where anyone can download, build, and benchmark things they care about, and the stuff presented in the IETF slides (in TFA) about how they can use Chrome to A/B test parameters in the protocol, to see which actually work out, is really interesting. Presumably that's just for folks that hop into chrome internals and enable QUIC, but who hasn't done that already? ;)

alpha is, if your pages are all 10MB single files (5, Informative)

raymorris (2726007) | about 10 months ago | (#45371299)

As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for.

    TCP handles one file at a time* - first download the html, then the logo, then the background, then the first navigation button ....

QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.

* browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.

Re:alpha is, if your pages are all 10MB single fil (3, Informative)

bprodoehl (1730254) | about 10 months ago | (#45371415)

Yes, you're absolutely right that this left out stream multiplexing, but it did test QUIC's ability to handle packet loss. Seeing as how QUIC is aiming, at a high level, to fix problems in TCP and make the Internet faster, I think the test is fair, and I'm excited to see how things improve. There are other scenarios in the tests in Github, including some sample webpages with multiple images and such, if anyone is interested.

Thanks. What were web page results? (1)

raymorris (2726007) | about 10 months ago | (#45371541)

Thanks for that info, and for making your test scripts available on GitHub.
I'm curious*: what were the results of the web page tests? Obviously a typical web page with CSS files, Javascript files, images, etc. is much different from a monolithic 10 MB file.

* curious, but not curious enough to run the tests for myself.

Re:Thanks. What were web page results? (4, Informative)

bprodoehl (1730254) | about 10 months ago | (#45371741)

In some of the web page scenario tests, HTTP over QUIC was about 4x faster than HTTP over TCP, and in others, QUIC was about 3x worse. I'll probably look into that next. The QUIC demo client appears to take about 200ms to get warmed up for a transfer, so testing with small files over fast connections isn't fair. After that 200ms, it seemed to perform as one would expect, so tests that take longer than a couple seconds are a pretty fair judge of current performance.

Re:alpha is, if your pages are all 10MB single fil (2)

grmoc (57943) | about 10 months ago | (#45372627)

The benchmark looked well constructed, and as such is a fair test for what it is testing: unfinished-userspace-QUIC vs kernel-TCP

It will be awesome to follow along with future runs of the benchmark (and further analysis) as the QUIC code improves.
It is awesome to see people playing with it, and working to keep everyone honest!

Re:alpha is, if your pages are all 10MB single fil (2, Informative)

CyprusBlue113 (1294000) | about 10 months ago | (#45371581)

As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for

    TCP handles one file at a time* - first download the html, then the logo, then the background, then the first navigation button ....

QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.

* browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.

What the hell are you talking about? You're conflating HTTP with TCP. TCP has no such limitation. TCP doesn't deal in files at all.

Read up on QUIC. if (tcp && http) stream== (1)

raymorris (2726007) | about 10 months ago | (#45371713)

> You're conflating HTTP with TCP.

I'm discussing how HTTP over TCP works, in contrast to how it works over QUIC.
TCP provides one stream, which when used with HTTP means one file at a time.

QUIC provides multiple concurrent streams specifically so that http can retrieve multiple concurrent files.
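
A toy illustration of what "multiple concurrent streams over one connection" means in practice: tag every chunk with a stream id so chunks belonging to different resources can interleave on the same pipe and be reassembled independently. QUIC's real framing is considerably more involved; this only shows the idea.

    # Toy stream multiplexing: frames carry (stream id, length, payload).
    import struct

    def frame(stream_id, data):
        return struct.pack("!IH", stream_id, len(data)) + data

    def parse(buf):
        streams = {}
        while buf:
            stream_id, length = struct.unpack("!IH", buf[:6])
            streams[stream_id] = streams.get(stream_id, b"") + buf[6:6 + length]
            buf = buf[6 + length:]
        return streams

    wire = frame(1, b"<html>...") + frame(2, b"logo bytes") + frame(1, b"</html>")
    print(parse(wire))   # {1: b'<html>...</html>', 2: b'logo bytes'}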

Re:Read up on QUIC. if (tcp && http) strea (1)

Anonymous Coward | about 10 months ago | (#45372189)

So then don't you just need http to transfer multiple files at the same time over 1 stream?

this is like replacing ethernet with something more reliable because dns queries fail over udp sometimes. You're wrenching on the wrong protocol.

Re:Read up on QUIC. if (tcp && http) strea (1)

serviscope_minor (664417) | about 10 months ago | (#45372945)

Actually not quite.

IP provides a bunch of packets: there is no port number.

TCP and UDP provide multiplexing over IP by introducing the port number concept.

Re:alpha is, if your pages are all 10MB single fil (-1)

Anonymous Coward | about 10 months ago | (#45372063)

Someone downmod this turd toll. Upmod the uninformed to +5 informative.

Re:alpha is, if your pages are all 10MB single fil (2, Informative)

Anonymous Coward | about 10 months ago | (#45371785)

I haven't RTFA and I don't know much about QUIC, but if it's what you suggest...

As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for

        TCP handles one file at a time* - first download the html, then the logo, then the background, then the first navigation button ....

...then it sounds like a really horrible idea. If I click on a link will I have to wait for 20MB of images to finish downloading before I can read the content of a webpage, only to find out it wasn't what I was looking for anyway?

Re:alpha is, if your pages are all 10MB single fil (2)

grmoc (57943) | about 10 months ago | (#45372649)

AC, don't worry.
TCP is simply a reliable, in-order stream transport.
HTTP on TCP is what was described, and, yes, that's not the best idea on today's web (though keep in mind that most browsers open up 6 connections per hostname), but that is also why HTTP/2 is being standardized today.

Re:Better on Paper, Worse In Reality (1)

grmoc (57943) | about 10 months ago | (#45372595)

Right now QUIC is unfinished, so I hesitate to draw conclusions about it. :)

What I mean by unfinished is that the code does not yet implement the design; the big question right now is how the results will look after correctness has been achieved and a few rounds of correctness/optimization iterations have finished.

Only one thing. (4, Funny)

Richy_T (111409) | about 10 months ago | (#45371033)

Paragraph 1 of RFC:

User SHALL register for Google Plus

slide design (0)

Anonymous Coward | about 10 months ago | (#45371035)

Wow, those slides are HIDEOUS. I'm guessing that's one of the default themes for the Google Apps slideshow thing... it's gotta be one of the worst, though.

Re:slide design (0)

Anonymous Coward | about 10 months ago | (#45371657)

LCARS interface from Star Trek:TNG, so probably not a default

And free ddos (4, Informative)

Ubi_NL (313657) | about 10 months ago | (#45371055)

The current problem with UDP is that many border routers do not check whether outgoing UDP packets are from within their network. This is the basis for DNS-based DDoS attacks. They are very difficult to mitigate at the server level without creating openings for Joe job attacks instead... Standardizing on UDP for other protocols will only exacerbate this problem.

Re:And free ddos (0)

Anonymous Coward | about 10 months ago | (#45371129)

Why the fuck isn't that handled at IP level?

Re:And free ddos (2)

skids (119237) | about 10 months ago | (#45372587)

Because the ISPs can barely manage to tape the BGP infrastructure together in a stable fashion; there are numerous problems encountered when asking an L3 router to perform at the speeds demanded at peering locations, and keeping a full trust mesh of ASNs and IP prefixes is beyond the state of the art (you have to not only know whose advertisements you can trust, but whose re-advertisements of whose advertisements you can trust, etc., etc.). Both strict and loose reverse-path filtering are rarer to find in use the deeper toward the middle of the network you go. Then there's IPv6's larger address space on top of that....

Re:And free ddos (1)

plover (150551) | about 10 months ago | (#45371199)

How would this be worse than a SYN flood attack today?

Re:And free ddos (1)

WaffleMonster (969671) | about 10 months ago | (#45371673)

The current problem with UDP is that many border routers do not check whether outgoing UDP packets are from within their network. This is the basis for DNS-based DDoS attacks. They are very difficult to mitigate at the server level without creating openings for Joe job attacks instead... Standardizing on UDP for other protocols will only exacerbate this problem.

This is incorrect. Ingress filtering is a global IP layer problem.

TCP handles the problem with SYN packets and SYN cookie extensions to prevent local resource exhaustion by one-sided SYNs from an attacker.

A well designed UDP protocol would be no more vulnerable to this form of attack than TCP using the same proven mechanisms of TCP and other better designed UDP protocols (DTLS).

DNS can also be fixed the same way using cookies, but it seems people are content to make the problem worse by implementing DNSSEC and ignoring the underlying issue.
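
A hedged sketch of the stateless address-validation idea behind SYN cookies, as it might be applied to a UDP handshake: the server keeps no per-client state until the client echoes back a token that only the true owner of the source address could have received. The key handling and token format here are made up for illustration; this is not QUIC's or DTLS's actual mechanism.

    # Illustrative address-validation token (HMAC over client address + coarse time).
    import hashlib, hmac, os, time

    SECRET = os.urandom(32)                    # server-side secret, rotated periodically

    def make_token(client_ip):
        ts = int(time.time()) // 60            # coarse timestamp, 1-minute buckets
        mac = hmac.new(SECRET, f"{client_ip}|{ts}".encode(), hashlib.sha256)
        return f"{ts}:{mac.hexdigest()}"

    def check_token(client_ip, token, max_age_minutes=2):
        ts_str, digest = token.split(":")
        expected = hmac.new(SECRET, f"{client_ip}|{ts_str}".encode(), hashlib.sha256).hexdigest()
        fresh = 0 <= int(time.time()) // 60 - int(ts_str) <= max_age_minutes
        return fresh and hmac.compare_digest(digest, expected)

    tok = make_token("203.0.113.7")
    print(check_token("203.0.113.7", tok))     # True  -- address owner echoed the token
    print(check_token("198.51.100.9", tok))    # False -- a spoofed source can't reuse it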

Re:And free ddos (1)

skids (119237) | about 10 months ago | (#45372621)

It's in-effect correct because there are lots of UDP protocols designed before the general concept of "do not amplify unauthenticated solicitations with a larger reply" finally sunk in. (Or at least, sunk in among more serious protocol designers/implementers.)

Re:And free ddos (1)

WaffleMonster (969671) | about 10 months ago | (#45372715)

It's in-effect correct because there are lots of UDP protocols designed before the general concept of "do not amplify unauthenticated solicitations with a larger reply" finally sunk in. (Or at least, sunk in among more serious protocol designers/implementers.)

The parent was making a point against QUIC because it used UDP. It is a false statement. QUIC has appropriate mechanisms to prevent unsolicited mischief.

What DNS, SNMP, NTP and god knows what else did way back then has nothing at all to do with the topic at hand.

Thank you (2)

TFoo (678732) | about 10 months ago | (#45371113)

The Internet has needed a standardized reliable UDP protocol for many years. There have been many attempts, but hopefully this one will stick.

Re:Thank you (3, Informative)

whoever57 (658626) | about 10 months ago | (#45371205)

The Internet has needed a standardized reliable UDP protocol for many years. There have been many attempts, but hopefully this one will stick

It has existed for decades. It's called TCP.

Did you RTFA? This new protocol appears to have little to no advantages over TCP and significant disadvantages under some circumstances.

Re:Thank you (1)

bprodoehl (1730254) | about 10 months ago | (#45371237)

Definitely a work in progress, and the scenario tested is outside of Google's main focus. I suspect that things will get a lot better in the weeks and months to come.

Re:Thank you (1)

grmoc (57943) | about 10 months ago | (#45372687)

bprodoehl is absolutely correct -- the code is unfinished, and while the scenario is certainly one we worry about, it isn't the focus of attention at the moment. The focus at the moment is getting the protocol working reliably and in all corner cases... Some of the bugs here can cause interesting performance degradations, even when the data gets transferred successfully.

I hope to see the benchmarking continue!

Re:Thank you (1)

profplump (309017) | about 10 months ago | (#45371583)

TCP is a stream protocol. UDP is a message protocol. They have different limitations and features and aren't always suitable for the same purposes. How do you expect to participate in a discussion about the limitations of TCP if you can't be bothered to learn even the basics of the existing protocols?

Re:Thank you (5, Insightful)

fatphil (181876) | about 10 months ago | (#45371441)

> reliable UDP protocol

You want a reliable *unreliable* datagram protocol protocol?
Sounds like something guaranteed to fail.

Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.

Re:Thank you (2)

bprodoehl (1730254) | about 10 months ago | (#45371569)

Maybe, but this still looks really promising. They've made a few really smart decisions, in my opinion.

1) Avoid the usually-doomed-from-the-start approach of starting at the standards committee level. Frame up the project, and let some really smart engineers go nuts. Take what works to a standards committee, and save the time that you would've spent arguing over stuff that might or might not have worked.
2) Make it work with the existing Internet infrastructure. Making everyone adopt a new TCP stack is probably not going to happen. I'm looking at you, Multipath TCP.
3) Do it in the open. Let geeks like me poke around with it so I can complain that it doesn't speed up my file transfers over crappy Internet connections yet.

Re:Thank you (1)

skids (119237) | about 10 months ago | (#45372967)

Making everyone adopt a new TCP stack is probably not going to happen.

Neither is it likely to happen that a true multipath helper will be built into core routers (e.g. something that uses TTL counters to determine when to take the second/third/etc preferred route in their route table and leaves the job of computing preferable paths to the end systems.) Which means what really needs to happen... won't. We've reached a technological glaciation point where the existing install base is dictating the direction of technology, rather than the other way around.

Re:Thank you (1)

serviscope_minor (664417) | about 10 months ago | (#45372973)

Everyone tries to reinvent TCP. Almost always they make something significantly worse. This is no exception.

No, you misunderstand. The GP wants a reliable protocol like TCP but with datagram boundaries preserved like UDP. That's not a particularly unreasonable thing to want, since the packet boundaries do exist and it's a pain to put them into a stream.

In fact some systems provide exactly such a protocol - the Bluetooth L2CAP protocol, for example. It's quite appropriate for the ATT protocol.

Morons (2, Insightful)

Anonymous Coward | about 10 months ago | (#45371119)

UDP is for messages (eg. DNS) and real time data. TCP is far superior for bulk data because of the backoff mechanism, something they want to work around with that UDP crap.

QoS works perfectly with TCP because of TCP backoff.

So much wrong with this idea it makes my head hurt. It is OK to run game servers with UDP. It is OK for RT voice or even video to use UDP. It is not OK to abuse the network to run bulk, time insensitive transfers over UDP, competing with RT traffic.

What is the problem? Too many connections for 1 resource each? Too many TCP handshakes? You know, there was this idea about PIPELINING. Perhaps improve that a little to reduce 200 tcp connections to maybe 3 or 5, instead of piping data over UDP. But that would be too easy - why not invent your own network stack instead :S

What are people at Google smoking?

Re:Morons (1)

bprodoehl (1730254) | about 10 months ago | (#45371187)

Oh, you should definitely read up on the docs. Yes, they care about backoff mechanisms and generally not breaking the Internet, a LOT. By moving to UDP, they can work on schemes for packet pacing and backoff that do a better job at not overwhelming routers. And they can get away from all the round-trips you need to set up an SSL session over TCP, without sacrificing security.
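
To put rough numbers on the "round-trips to set up an SSL session" point, here is some illustrative arithmetic only, using the classic TCP handshake plus a full TLS 1.2 handshake; QUIC aims to cut this down, ideally to zero round trips of setup on repeat connections.

    # Back-of-the-envelope handshake latency before the first request byte is sent.
    rtt_ms = 100                     # e.g. a cross-country link (made-up number)

    tcp_handshake_rtts = 1           # SYN / SYN-ACK
    tls12_handshake_rtts = 2         # full TLS 1.2 handshake
    print((tcp_handshake_rtts + tls12_handshake_rtts) * rtt_ms)   # 300 ms of setup
    print(1 * rtt_ms)                # ~100 ms if transport + crypto handshakes are combined
    print(0 * rtt_ms)                # 0 ms on a repeat connection with cached credentials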

Re:Morons (0)

Anonymous Coward | about 10 months ago | (#45371655)

Oh, you should definitely read up on the docs. Yes, they care about backoff mechanisms and generally not breaking the Internet, a LOT. By moving to UDP, they can work on schemes for packet pacing and backoff that do a better job at not overwhelming routers. And they can get away from all the round-trips you need to set up an SSL session over TCP, without sacrificing MUCH security.

There, fixed that for you.

Re:Morons (0)

Anonymous Coward | about 10 months ago | (#45371231)

The only reasonable comment in the entire post.

Re:Morons (1)

profplump (309017) | about 10 months ago | (#45371653)

The back off mechanism is one of the problems they're trying to fix. Internet protocols need some way to control bandwidth usage, but there are a lot of limitations with the existing options in TCP. And if you RTFA you'd see they intend to provide alternative mechanisms to regulate bandwidth, addressing both the continuing need to avoid flooding and the limitations of TCP's back off mechanisms.

Plus, stream protocols are inefficient when transferring multiple items (which is the typical use case for HTTP) and require more complicated parsing in the application layer (which is bad for security, among other things). Reliable message passing, as opposed to streaming, makes a lot of sense for use in web browsers and similar applications.

And for that matter, modern end-user DNS is arguably a poor use case for UDP. Most hosts issue DNS requests against only a small number of resolvers (often only 1 or 2), frequently issue a number of requests in rapid succession, and generally retry if there is some network failure rather than just accepting the loss. Given those constraints, a reliable transfer mechanism is very desirable and there's very little cost to doing a once-per-boot connection setup to the DNS resolver. UDP makes more sense for the peer-to-peer style system in place at higher levels (and around which DNS was designed), but that's just not the case for end-user DNS.

Re:Morons (1)

WaffleMonster (969671) | about 10 months ago | (#45371945)

The back off mechanism is one of the problems they're trying to fix. Internet protocols need some way to control bandwidth usage, but there are a lot of limitations with the existing options in TCP.

Like? Please be specific. This thread is getting old quick, with people saying "TCP sucks" and going on and on about how it just sucks without ever citing any technical justification for why that is so.

There are tons of congestion algorithms
http://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm [wikipedia.org]

and extensions
http://en.wikipedia.org/wiki/TCP_tuning [wikipedia.org]

for TCP.

Re:Morons (2)

grmoc (57943) | about 10 months ago | (#45372769)

TCP doesn't suck.

TCP is, however, a bottleneck, and not optimal for all uses.

Part of the issue there is the API-- TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.

As an example, when SPDY or HTTP2 is layered on TCP, when there is a single packet lost near the beginning of the TCP connection, it will block delivery of all other successfully received packets, even when that lost packet would affect only one resource and would not affect the framing of the application-layer protocol.
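
A tiny simulation of that head-of-line blocking effect, purely to illustrate the point: TCP delivers bytes in order, so everything already received sits in the buffer until the one lost packet is retransmitted, even if that packet only mattered to one of the multiplexed resources.

    # Packets 2..5 arrived, packet 1 was lost; in-order delivery stalls everything.
    received = {2: "css", 3: "js", 4: "img-a", 5: "img-b"}

    def deliverable(received, next_expected=1):
        out = []
        while next_expected in received:
            out.append(received[next_expected])
            next_expected += 1
        return out

    print(deliverable(received))     # []  -- nothing can be handed to the application
    received[1] = "html"             # the retransmission finally arrives
    print(deliverable(received))     # ['html', 'css', 'js', 'img-a', 'img-b']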

Big mistake (2, Funny)

gmuslera (3436) | about 10 months ago | (#45371149)

The problem with that UDP proposal is that the IETF may not get it.

Re:Big mistake (3, Funny)

wonkey_monkey (2592601) | about 10 months ago | (#45372215)

And if they do, they won't acknowledge it.

Benchmarking premature; QUIC isn't even 100% coded (5, Informative)

grmoc (57943) | about 10 months ago | (#45371167)

As someone working with the project.

The benchmarking here is premature.
The code is not yet implementing the design, it is just barely working at all.

Again, they're not (yet) testing QUIC-- they're testing the very first partial implementation of QUIC!

That being said, it is great to see that others are interested and playing with it.

Re:Benchmarking premature; QUIC isn't even 100% co (-1)

Anonymous Coward | about 10 months ago | (#45371289)

The benchmarking here is premature.
The code is not yet implementing the design, it is just barely working at all.

Totally. Everybody knows benchmarking should happen at the very end of the process and just before everything is about to go live.

Re:Benchmarking premature; QUIC isn't even 100% co (1)

shrikel (535309) | about 10 months ago | (#45371693)

Well they should at least wait for a beta rather than alpha.

Re:Benchmarking premature; QUIC isn't even 100% co (1)

grmoc (57943) | about 10 months ago | (#45372915)

Nah; it is valuable for many people to be doing this benchmarking even with the current state of the code... it just requires careful explanation of what the benchmark entails!

Concluding that buggy-unfinished-QUIC is slower than TCP is absolutely valid, for instance.
That isn't the same as QUIC being slower than TCP (at least, not yet!)

Re:Benchmarking premature; QUIC isn't even 100% co (1)

Anonymous Coward | about 10 months ago | (#45371709)

Benchmarking for the sake of bug testing should happen all the time.

Benchmarking for the sake of a story could be useful. That is what Phoronix is all about, for the most part.

But testing for your own use cases, as these guys say they are doing in TFA, before the developers say it even implements the actual design, is pointless.
To then publish the results is simply moronic.

Re:Benchmarking premature; QUIC isn't even 100% co (1)

grmoc (57943) | about 10 months ago | (#45372341)

The benchmarking itself is awesome-- it is good to have people playing with it.

Concluding things right now, when the implementation is working towards correctness instead of optimality, however, is potentially misleading.

it is google (0, Funny)

Anonymous Coward | about 10 months ago | (#45371207)

I don't care about the speed just yet. Tell me how this screws with my privacy first.

UDT (1)

dmbasso (1052166) | about 10 months ago | (#45371225)

Hmmm, several posts, yet no mention of UDT [sourceforge.net] so far. It would be nice if the benchmark included it.

Re:UDT (1)

bprodoehl (1730254) | about 10 months ago | (#45371285)

I can take a look at incorporating UDT into the benchmarks going forward. The test scripts are all on GitHub, but I'm not all that familiar with UDT. It looks like UDT4's sendfile/recvfile examples would drop in pretty nicely.

Re:UDT (1)

dmbasso (1052166) | about 10 months ago | (#45371435)

I'm not familiar with UDT either, but coincidentally, I'm about to design/implement a protocol that has to work over UDP, for hole-punching (it is a p2p application). I haven't decided yet if the dependency on UDT is justified, and I wasn't aware of QUIC.

So if you could include UDT in future benchmarks, I would appreciate it. Btw, thanx for the work so far.

Re:UDT (0)

Anonymous Coward | about 10 months ago | (#45372829)

i just want to mention, that when i was playing with UDT i noticed that by default it doesn't use smaller udp packets for limited MTU connections with or without mss clamping on tcp. and so there's a potential issue with hole-punching - generally speaking, hole punching is more necessary on ADSL etc connections that often have limited mtu. so you may want to cap udp size at 1200 bytes or something so that it'll generally work through vpns, and ADSL connections. it may be possible to detect when this is necessary, but it may not be worthwhile depending on your application.

in my own testing of UDT i discarded it for a few reasons:
  a: it's a bit of a pita to get tcp like behaviour out of it, the thing that was biting me is i want connections that open/close.
  b: although it's fast on wan connections, it struggles to get gigabit speeds on local lan.
  c: connection initiation performance is worse than tcp, along with the mtu issue.

in a way tcp fast open improves tcp connection speed, and there's a lot of segment offload etc in modern ethernet cards, but most lack proper udp acceleration. as we move from gigabit to 10 gigabit and infiniband, tcp often outperforms udp for local connections with lower cpu utilisation. and falls back to slower connections with window tuning and so forth.

for p2p i'd recommend doing something more like what utorrent etc do. i think it was called utp? it scales up the size of packets gradually, and is optimised to minimise creating latency on the connection. although most people seem to tune for too large an increase in latency you can reduce that.

i started working on my own udp like stuff which was going to act like tcp multipath. but now it seems tcp multipath is implementing similar things. but what i want really is nice ways to be able to hop between various connections and find good paths that goes across multiple networks, and side with the cleanest paths when possible. i've always hated how p2p etc ends up sending to the other side of the world and then ignores close peers. i was thinking it'd be cool to do optimisation stuff on top of bittorrent and act as a semi bittorrent cache of kinds automatically pulling in bits in advance by predicting what friendly peers want. so that you can use a close fast connection to boost p2p speeds.

UDP vs TCP (1)

Anonymous Coward | about 10 months ago | (#45371249)

As a newish developer who knows only the minimum I need to about TCP/IP protocol, I was surprised that this, and a number of common things (apparently games, streaming video [stackoverflow.com] ) use UDP at all. I thought it was basically just used for ping.

Out of curiosity can anyone point out good books for learning more about how to implement applications that use TCP/IP including udp in ways other than the common ssh/http/ftp connections.

Re:UDP vs TCP (2)

NotSanguine (1917456) | about 10 months ago | (#45371555)

As a newish developer who knows only the minimum I need to about TCP/IP protocol, I was surprised that this, and a number of common things (apparently games, streaming video [stackoverflow.com] ) use UDP at all. I thought it was basically just used for ping.

Out of curiosity can anyone point out good books for learning more about how to implement applications that use TCP/IP including udp in ways other than the common ssh/http/ftp connections.

ICMP is used for ping, friend. I recommend the Comer [purdue.edu] books. Also, I'd also recommend that you read the IP [ietf.org] , UDP [ietf.org] and TCP [ietf.org] specs.

Re:UDP vs TCP (3, Informative)

T.E.D. (34228) | about 10 months ago | (#45371641)

The general gist is that UDP and TCP both have kind of an ideal milieu. UDP is great for small packets that you want delivered with a minimum of overhead, and where, if the packet is late, lost, or out of order, it won't kill anything.

TCP is great if you are sending large amounts of data at once, between a pair of systems, in situations where it's important not to lose packets or get them out of order, and where you don't care that much if this takes a little extra time (occasionally perhaps a lot of extra time) to accomplish. Also good in situations where you'd like to know when your partner on the other side goes away for some reason.

Most applications are going to be in-between somewhere, so you have to make a decision. For example, if your packets are small and need to be delivered quickly, but you also need reliability, you might go with TCP just to get that reliability. Alternatively, if you can get away with it, you might instead go with UDP, but use dedicated links between the systems and a handshaking protocol at your application layer to prevent collisions.

Alternatively, you might do what Google is doing, and try to reimplement TCP's reliability in your application layer on top of UDP. The thing about UDP is that you can always reimplement any parts of TCP you need on top of it.
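
As a concrete (and deliberately simplistic) example of "reimplementing parts of TCP on top of UDP", here is a stop-and-wait sender: number each datagram, wait for a matching ack, retransmit on timeout. Real protocols, QUIC included, use sliding windows rather than one packet at a time; this only shows the principle.

    # Stop-and-wait reliability over UDP (sender side). The receiver is assumed
    # to echo back the 4-byte sequence number of each datagram it accepts.
    import socket
    import struct

    def reliable_send(sock, addr, seq, payload, timeout=0.5, retries=5):
        packet = struct.pack("!I", seq) + payload
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(4)
                if struct.unpack("!I", ack)[0] == seq:
                    return True              # receiver confirmed this sequence number
            except socket.timeout:
                pass                         # data or ack was lost: retransmit
        return False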

Re:UDP vs TCP (1)

profplump (309017) | about 10 months ago | (#45371769)

UDP isn't used for ping either. You're thinking of ICMP -- a lower-level protocol than TCP or UDP. ICMP is *almost* part of the IP layer; technically it uses IP for data transfer, but IP also depends on it for control messaging.

Re:UDP vs TCP (5, Informative)

profplump (309017) | about 10 months ago | (#45372021)

UDP and TCP have different uses; one isn't better than the other, they just do different things.

UDP is message-based. When I send a UDP message the remote end either gets the whole message or none of it. This can make parsing in applications a lot easier; rather than putting delimiters into a stream and trying to pick apart the data as it comes in in chunks, I can be sure that I'm always working with a complete message, and the messages can be of different types/sizes/etc. as dictated by my application-layer needs. But there's a maximum size for messages, and if I need to send more data than fits in a single message it's up to my application to ensure the pieces get put into the right order when they are received.

UDP is unreliable, in that if a UDP packet gets dropped the message is lost and no notification is made to the sender. Often this is bad, but in certain instances it is valuable. One such instance is data with a short lifetime, such as games or streaming media. If I'm in the middle of a game and a packet gets dropped it doesn't do me any good to get that packet 2 seconds later -- the game has moved on. This unreliable nature also makes UDP simpler; there's no need to set up a "connection" to send UDP data -- you just slap an IP address on the packet and send it along, and the other end will get it or not and use it or not and you don't have to care. So if you're writing a server that will handle billions of clients, UDP has a lot less overhead, as it doesn't have to keep track of billions of "connections".

TCP is a streaming protocol. You put data in on one end and it pops out in the same order on the other. This is great if you're sending a single file -- you can be sure the other end will get all the bits in the right order. But it also means if you have something important to say, you have to wait in line until all of the preceding data has been transmitted, possibly including things that will be expired by the time they are received. It also means your application-layer protocol has to have some method to separate messages if you send more than one thing over a single connection.

TCP has reliable delivery. Often this is a good thing, as the sender can be sure the receiver got all the data (and got it in the right order). But in order to make the protocol reliable, the receiver must acknowledge the sender's packets, and the sender and receiver must store information about each other so they can keep track of this ongoing bi-directional connection. So there's at least a couple of round-trip exchanges necessary to set up a TCP connection, and when you connect to a server it must keep connection state and buffers for each and every client it's connected to, and enough memory to keep track of all of those connections.

QUIC (and several other new-ish protocols) are proposing a sort of compromise protocol -- a protocol that's both message-based and reliable, and frequently that allows messages of any size. Such protocols provide the delivery assurances of TCP without the waiting-in-line issues the streaming model can produce, and they reduce the amount of setup overhead by allowing clients to open a single connection to the server and fetch many different things.
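
The message-versus-stream distinction above can be seen directly with the standard socket module: two UDP sends arrive as two separate datagrams, whereas two TCP sends may come back out of a single read as one merged run of bytes. (Loopback UDP is used here, so the "unreliable" part of UDP doesn't show up in the demo.)

    # UDP preserves message boundaries; each sendto() is one recvfrom().
    import socket

    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))                 # any free local port
    addr = recv.getsockname()

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(b"first message", addr)
    send.sendto(b"second message", addr)

    print(recv.recvfrom(2048)[0])   # b'first message'  -- one datagram
    print(recv.recvfrom(2048)[0])   # b'second message' -- delivered separately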

fuc43r (-1)

Anonymous Coward | about 10 months ago | (#45371391)

w>indows, sUN or [goat.cx]

wherever you go, there you aren't (4, Informative)

epine (68316) | about 10 months ago | (#45371545)

Those fuckers at www.connectify.me redirected my connection attempt to
http://www.connectify.me/no-javascript/ [connectify.me] so that even after I authorized Javascript for their site I was unable to navigate to my intended destination (whatever shit they pulled did not even leave a history item for the originally requested URL).

This sucks because I middle-click many URLs into tabs I might not visit until ten minutes later. If I had a bunch of these tabs open I wouldn't even have been able to recollect where I had originally been. In this case, I knew to come back here.

Those fuckers at www.connectify.me need to procure themselves an Internet clue stick PDQ.

Nerfing congestion avoidance for increased profits (2)

WaffleMonster (969671) | about 10 months ago | (#45371593)

May I be so bold as to suggest that graphs citing only an x performance improvement for protocol y are insufficient, harmful and useless measures of usable efficiency. We know how to make faster protocols... the challenge is being faster while preserving generally meaningful congestion avoidance. This part is what makes the problem space non-trivial.

Look at TFA and the connectify links: it is all performance talk, with total silence on addressing or simulating the congestion characteristics of the protocol.

Having sat in on a few tcpm meetings, it is always the same with google... they show data supporting the claim that by doing x there will be y improvement, but never as much enthusiasm for consideration of the secondary repercussions of the proposed increase in network aggression.

My personal view is that RTT reductions can be achieved through extension mechanisms to existing protocols without wholesale replacement. TCP fast open and TLS extensions enabling 0-RTT requests through the layered stack... experimental things for which "code" exists today can provide the same round trip benefits as QUIC (see the sketch at the end of this comment).

What google is doing here is taking ownership of the network stack and congestion algorithms away from the current chorus of stakeholders and granting themselves the power to do whatever they please. No need to have a difficult technical discussion or get anyone's opinions or signoff before dropping in a new profit-enhancing congestion algorithm, which could very well be tuned to prefer google traffic globally at the expense of everyone else's... they control the clients and the servers... done deal.

There are two fundamental improvements I would like to see regarding TCP.

1. Session establishment in the face of in-band adversaries adding noise to the channel. Currently TCP connections can be trivially reset by an in-band attacker. I think resilience to this -- which necessarily means binding security to the network channel -- can be a marginally useful property in some environments, yet is mostly worthless in the real world as in-band adversaries have plenty of other tools to make life difficult.

2. Efficient Multi-stream/message passing. Something with the capabilities of ZeroMQ as an IP layer protocol would be incredibly awesome.
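
On the TCP Fast Open point a few paragraphs up, here is what the extension-based route looks like from user space on Linux, as a hedged sketch: the first data bytes ride in the SYN, saving a handshake round trip on repeat connections. It assumes a reasonably recent kernel with client-side net.ipv4.tcp_fastopen enabled and a server that supports TFO; otherwise the send falls back to an ordinary connect or fails.

    # TCP Fast Open client sketch (Linux-specific).
    import socket

    MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)   # Linux flag value

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    # sendto() with MSG_FASTOPEN connects and sends the first bytes in one step:
    sock.sendto(request, MSG_FASTOPEN, ("example.com", 80))
    print(sock.recv(200))
    sock.close()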

Re:Nerfing congestion avoidance for increased prof (1)

MobyDisk (75490) | about 10 months ago | (#45372173)

I tend to agree. I am glad that someone is trying to create a better TCP. If they fail, we validate that TCP is a good idea. If they succeed, then we can have a better protocol.

If the QUIC exercise is successful, then the IETF should consider extending TCP to support the relevant features. For example, their point about multiple streams is a good one. Perhaps TCP should define an option to open N simultaneous connections with a single 3-way handshake. Existing implementations would ignore the new option bytes in the header so nothing would break.

If adding FEC really helps, then the same logic applies. Add it. If pacing really helps, then when you get an ack of 10 packets, pace the replies (although I doubt that would really help). But let us add these things only once they are proven to work.

You make a good point about poor congestion control causing wide-scale harm, but that problem exists today. Anyone can create any protocol they want, as long as it is built on TCP or UDP. This has always been the case and so far no one has managed to create a protocol that destroys the internet... yet. :-) Although no one as powerful as Google has tried. And certainly some protocols have brought down LANs due to inefficiency (Windows file sharing).

Re:Nerfing congestion avoidance for increased prof (1)

WaffleMonster (969671) | about 10 months ago | (#45372533)

If the QUIC exercise is successful, then the IETF should consider extending TCP to support the relevant features. For example, their point about multiple streams is a good one. Perhaps TCP should define an option to open N simultaneous connections with a single 3-way handshake. Existing implementations would ignore the new option bytes in the header so nothing would break.

While TCP is ancient, there has been continuous work to improve it over the years. I think most people throwing stones here have not taken the time to look around and understand the current landscape. Indeed, many ideas in QUIC are good ones, yet not a single one of them is something new or something that had not been implemented or discussed in various WGs.

Regarding multiple streams, what effectively is the difference between this and fast open? I send a message, the message arrives and is processed immediately without a round trip... with a scenario like that, what is even the point of something labeled "multi-stream"?

You make a good point about poor congestion control causing wide-scale harm, but that problem exists today. Anyone can create any protocol they want, as long as it is built on TCP or UDP.

TCP congestion is normally implemented within the operating system where user space does not have access or visibility into scheduling of transmission.

UDP is mostly used for low-bandwidth exchanges or inherently rate-limited realtime communications. Most of these applications sport non-existent or insufficient congestion control. For example, a bittorrent client without built-in queue management, or where you have neglected to set rate limits, will easily saturate any network link to the point of being unusable. UDP congestion is a problem; it just is not a big enough bandwidth consumer to have a large-scale impact where it can be a threat to the network.

To be clear, I don't think Google is at risk of bringing about a congestive apocalypse. The potential risk I see takes the form of excessive aggression for competitive advantage.

SCTP (1)

andy753421 (850820) | about 10 months ago | (#45371789)

I'm no networking expert, but does anyone know how this compares to SCTP [wikipedia.org] ?
And have they taken various security considerations into account, e.g. SYN floods?

Re:SCTP (1)

grmoc (57943) | about 10 months ago | (#45372395)

It is similar in some ways, and dissimilar in others.
One outcome of the QUIC work that would be considered a good outcome is that the lessons learned get incorporated into other protocols like TCP or SCTP.

QUIC absolutely takes security into account, including SYN floods, amplification attacks, etc.

Meant for Denial of Service Attack Denial? (1)

EngineeringStudent (3003337) | about 10 months ago | (#45371791)

When I look at the Goodput vs. Bandwidth being capped for QUIC, and the statement that most folks don't use more than that, my thought was "so why does it exist in TCP?".

Is this something exploited primarily by hackers but not giving any value to normal humans?

Pacing, Bufferbloat (4, Interesting)

MobyDisk (75490) | about 10 months ago | (#45371933)

The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?

I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.

From the linked slides:

Does Packet Pacing really reduce Packet Loss?
* Yes!!! Pacing seems to help a lot
* Experiments show notable loss when rapidly sending (unpaced) packets
* Example: Look at 21st rapidly sent packet
        - 8-13% lost when unpaced
        - 1% lost with pacing

Re:Pacing, Bufferbloat (1)

WaffleMonster (969671) | about 10 months ago | (#45372183)

The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?

I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.

Yes, essentially it is a hedge against the probability of future packet loss. I don't know about QUIC, but with TCP, statistically more packet loss tends to occur toward the end of a window than at the start, and is therefore normally more expensive to correct.

Re:Pacing, Bufferbloat (3, Interesting)

grmoc (57943) | about 10 months ago | (#45372435)

What seems likely is that when you generate a large burst of back-to-back packets, you are much more likely to overflow a buffer, causing packet loss.
Pacing makes it less likely that you overflow the router buffers, and so reduces the chance of packet loss.

TCP does actually do pacing, though it is what is called "ack-clocked". For every ACK one receives, one can send more packets out. Since the ACKs traverse the network and get spread out in time as they go through bottlenecks, you end up with pacing.... but ONLY when bytes are continually flowing. TCP doesn't end up doing well in terms of pacing out packets when the bytes start flowing and stop and restart, as often happens with web browsing.
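
A minimal sketch of what application-level pacing amounts to (illustrative only; real implementations use precise timers and pace against a measured RTT rather than sleeping): spread a burst of datagrams over roughly one round-trip time instead of sending them back-to-back, so a router queue along the path is less likely to overflow.

    # Crude packet pacing over UDP: spread a burst across ~one RTT.
    import socket
    import time

    def paced_send(sock, addr, packets, rtt=0.05):
        interval = rtt / max(len(packets), 1)
        for p in packets:
            sock.sendto(p, addr)
            time.sleep(interval)        # real stacks use timers, not sleep()

    sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sink.bind(("127.0.0.1", 0))         # throwaway local receiver for the demo
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    paced_send(out, sink.getsockname(), [b"x" * 1200 for _ in range(20)])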

Re:Pacing, Bufferbloat (2)

Above (100351) | about 10 months ago | (#45372793)

Reducing packet loss is not always a good thing. Packet loss is the mechanism that an IP network uses to indicate a lack of capacity somewhere in the system. Bufferbloat is one attempt to eliminate packet loss with very bad consequences: never throw packets away, just queue them for a very long time. Pacing can be the opposite side of that coin: send packets so slowly that loss never occurs, but that also means the transfer happens very slowly.

When many TCP connections are multiplexed onto a single link the maximum aggregate throughput is achieved with 3-5% packet loss. Less and there are idle points on the line, more and there is self synchronizing congestion collapse.

What's really amusing here is the notion that pacing is the fix for BufferBloat. That's sort of the two wrongs make a right theory; break per link queueing with BufferBloat and then break senders by making them all painfully slow.

This is not the answer.

Re:Pacing, Bufferbloat (0)

Anonymous Coward | about 10 months ago | (#45372841)

pacing is often bad, it may help .. but it tends to make networks perform worse when lots of people use it. i think it's probably best only used in combination, where it's only pacing some of the data. so 75% goes in a burst, and 25% is paced.

Hold on (1)

Psicopatico (1005433) | about 10 months ago | (#45371967)

I think I'll wait for QUICv6, thanks.

Handled at layer 7 (1)

sl4shd0rk (755837) | about 10 months ago | (#45372195)

Are you friggin nuts? This seems to imply that any filtering at the kernel level will need to unwrap all the application-specific jibber-jabber in this protocol to determine wtf it's supposed to do with it. That would be quite costly in terms of performance. No, I don't trust applications to handle the security of packet data coming in. Especially when some entity wants to bury support for the protocol in their own web browser. This just smells like all kinds of Hell naw.

Re:Handled at layer 7 (2)

grmoc (57943) | about 10 months ago | (#45372471)

There are definitely people and opinions on both sides of the fence on this.

Unfortunately, though performance might improve with access to the hardware, wide and consistent deployment of anything in the kernel/OS (how many WinXP boxes are there still??) takes orders of magnitude more time than putting something into the application.

So... we have a problem: we want to try out a new protocol and learn and iterate (because, trust me, it isn't right the first time out!), however we can't afford to wait long periods of time between iterations/availability.

Hopefully the project will drive us all towards solutions to these problems that are generic, usable for any new protocol, and which actually work!
