
Google's SPDY Could Be Incorporated Into Next-Gen HTTP

timothy posted more than 2 years ago | from the 10-4-gd-bdy dept.

Google 275

MojoKid writes "Google's efforts to improve Internet efficiency through the development of the SPDY (pronounced 'speedy') protocol got a major boost today when the chairman of the HTTP Working Group (HTTPbis), Mark Nottingham, called for it to be included in the HTTP 2.0 standard. SPDY is a protocol that's already used to a certain degree online; formal incorporation into the next-generation standard would improve its chances of being generally adopted. SPDY's goal is to reduce web page load times through the use of header compression, packet prioritization, and multiplexing (combining multiple requests into a single connection). By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies."
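
The summary's multiplexing claim can be put in rough numbers. Below is a toy latency model (all numbers invented for illustration, not taken from SPDY benchmarks): opening a fresh connection per resource pays a handshake round trip every time, while a single multiplexed connection pays it once and lets requests overlap.

```python
# Toy back-of-the-envelope latency model (illustrative only, not the
# SPDY protocol itself): compare fetching N resources with one TCP
# connection per request versus one multiplexed connection, assuming
# each new connection costs one extra round trip for the handshake.

RTT_MS = 100          # assumed round-trip time (invented)
N_RESOURCES = 10      # assumed number of resources on the page (invented)

def http_per_request(n, rtt):
    # handshake RTT + request/response RTT for every resource
    return n * (rtt + rtt)

def multiplexed(n, rtt):
    # one handshake, then all requests share the connection and
    # responses overlap, so cost is roughly one round trip total
    return rtt + rtt

print(http_per_request(N_RESOURCES, RTT_MS))  # 2000
print(multiplexed(N_RESOURCES, RTT_MS))       # 200
```

Real pages interleave parsing, DNS, and rendering, so actual gains are smaller and workload-dependent; the model only shows where the per-connection overhead comes from.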


It's a secret plot by apple (5, Funny)

gearloos (816828) | more than 2 years ago | (#38815363)

It's a secret plot by Apple to fix the lag in iOS 5 Safari by getting Google to find a way to speed up web page loads to cover it up!

Re:It's a secret plot by apple (5, Funny)

Anonymous Coward | more than 2 years ago | (#38815399)

Look dude, if you're going to spread a conspiracy properly, you need to spread your ideas out to at least 3 excessively long paragraphs of raging insanity. You could have easily structured it:

-Secret plot by apple. They're evil, man.

-The secret plot happens to be cuz iOS is evil like them.
-iOS version 5 is also evil.

-They've drugged Google with evil for more evil.

Re:It's a secret plot by apple (5, Funny)

LeperPuppet (1591409) | more than 2 years ago | (#38815631)

Your conspiracy needs more Illuminati/Lizard People/FEMA camps/Aliens/Other batshit craziness.

Pick any two and try again

Re:It's a secret plot by apple (1)

Calos (2281322) | more than 2 years ago | (#38815675)

You forgot the New World Order.

Seems to be one of the leading causes of stupid I see anymore.

Bonus points for Haiku (0)

martin-boundary (547041) | more than 2 years ago | (#38816001)

Secret plot by Apple.
Evil Apple, Evil Apps.
GOOG snowed in by Them.

Re:It's a secret plot by apple (5, Insightful)

scdeimos (632778) | more than 2 years ago | (#38815707)

Hate to rain on your parade, but didn't you realize that Apple will wait three years then have a media conference introducing iSPDY as their own invention?

Re:It's a secret plot by apple (3, Funny)

phonewebcam (446772) | about 2 years ago | (#38816207)

And Microsoft to merely wrap some bloatware round it and call it their own, like they do with their "search engine" [blogspot.com] .

The IMPORTANT bit about SPDY (5, Insightful)

YA_Python_dev (885173) | about 2 years ago | (#38816515)

I realize you guys are just kidding, but there's a very important and overlooked part of the SPDY protocol. Hopefully TPTB won't understand its implications before it's too late to stop SPDY adoption.

You see, the way I read the spec and the way it's currently implemented, SPDY requires every single connection to be encrypted. It's not optional.

Imagine that, a world where MITM attacks suddenly become much, much harder, where your ISP doesn't inject ads in your search results, where your mobile provider cannot "help" you by screwing up your HTTP connections with a transparent proxy, where the British government cannot censor a Wikipedia page, where even small sites can be encrypted because web hosts save bandwidth money by offering this option to everyone.

Imagine a world where net neutrality becomes much harder to break because all big protocols are encrypted all (or at least most) of the time and the deep packet inspection shit that's used much more widely than people think just doesn't work anymore.

SSH, Freenet, Skype, BitTorrent and other P2P protocols are already there. This is the chance to do it for HTTP.

Disclaimer: I speak only for myself and not anyone else. IANARE.

Re:The IMPORTANT bit about SPDY (1)

Pieroxy (222434) | about 2 years ago | (#38816589)

I did not know that about SPDY, thanks for the info.

That's one reason to adopt it widely and quickly IMHO.

Re:The IMPORTANT bit about SPDY (4, Insightful)

makomk (752139) | about 2 years ago | (#38816655)

Imagine a world where every time you set up a website you have to fork over money to a certificate provider in order for people to be able to access your sites, where the prices for certificates are sky-high once again because the CAs know that they've got everyone over a barrel, where governments use their influence to get their own CAs into every browser and go right on ahead MITMing everything in sight by painting anyone who refuses as a supporter of child porn...

I wonder what this means for caching proxies? (1)

Anonymous Coward | more than 2 years ago | (#38815373)

I wonder what this means for caching proxies?

Re:I wonder what this means for caching proxies? (2)

joesteeve (2002048) | more than 2 years ago | (#38815403)

Things that get refreshed often don't get cached anyway. Mostly AJAX interactions.

Re:I wonder what this means for caching proxies? (-1)

Anonymous Coward | more than 2 years ago | (#38816053)

Things that get refreshed often dont get cached anyways. AJAX interactions mostly.

Yo mama douched her stanky pussy with Ajax and vinegar.

Re:I wonder what this means for caching proxies? (1)

symbolset (646467) | about 2 years ago | (#38816159)

Nothing, if you're SSL'ing through it with your offshore VPN connection like you're not a total tool.

Re:I wonder what this means for caching proxies? (0)

Christopher B. Linn (2560089) | about 2 years ago | (#38816277)

Do you even know what a reverse proxy is? Hint: it's used to host websites.

From my Ops group - yay! (1)

Anonymous Coward | more than 2 years ago | (#38815389)

No more max-connections increases with every new browser (although I must say we noticed when it went from 256 back to 48 on Firefox). Multiply 3000+ users, x 48 max-connections per tab, x say 5 tabs, and there's an unhappy proxy cluster... workstations open more parallel connections than the network link/proxy can keep up with!

It's going to fail (3, Funny)

Anonymous Coward | more than 2 years ago | (#38815409)

Microsoft has announced their implementation, DirectSPEW.

It's going to kill SPDY when it debuts in a few months with Windows Phone 7.1.

Hi bonch

What about pipelining and keep-alive? (5, Interesting)

the_other_chewey (1119125) | more than 2 years ago | (#38815413)

I realise that SPDY is about reducing the latency of HTTP connection handshakes -
but wouldn't using the already existing and even implemented HTTP 1.1 standards
for pipelining (requesting multiple resources in one request) and keep-alive (keeping
a once-established connection open for reuse) mostly remove the handshake latency
bottleneck?
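
For reference, HTTP/1.1 pipelining just means writing several requests back-to-back on one persistent connection before reading any response. A minimal sketch of building such a request batch (host and paths are placeholders, and the responses still come back strictly in order):

```python
# Sketch of HTTP/1.1 pipelining: several GET requests written
# back-to-back on one keep-alive connection before any response is
# read. Only the request bytes are built here; a real client would
# send this over one socket and then read the responses in order.

def pipelined_requests(host, paths):
    reqs = []
    for p in paths:
        reqs.append(
            "GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: keep-alive\r\n\r\n".format(p, host)
        )
    return "".join(reqs).encode()

batch = pipelined_requests("example.com", ["/", "/style.css", "/app.js"])
print(batch.count(b"GET "))  # 3
```

The in-order constraint is exactly the head-of-line problem discussed further down this thread: one slow response at the front delays everything behind it.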

Re:What about pipelining and keep-alive? (0)

Anonymous Coward | more than 2 years ago | (#38815519)

Yes, it would. Which is why the SPDY whitepaper page claiming how much faster SPDY is doesn't include results for pipelining. They didn't test it, or didn't report it, because they didn't want you to know that their new protocol is not necessary. Geeks have got to have their own protocol for ego's sake.

Re:What about pipelining and keep-alive? (5, Informative)

laffer1 (701823) | more than 2 years ago | (#38815529)

When pipelining first came out, there were many buggy implementations. As a result, many browsers and web servers disabled the feature. Maybe it's time to turn it on for everything.

Re:What about pipelining and keep-alive? (5, Informative)

zbobet2012 (1025836) | more than 2 years ago | (#38815545)

To reiterate my reply below no, pipelining offers very little gain vs true "multiplexing" and it represents a security risk.

Re:What about pipelining and keep-alive? (0)

Anonymous Coward | about 2 years ago | (#38816445)

To reiterate my reply below no, pipelining offers very little gain vs true "multiplexing" and it represents a security risk.

security risk? like what?

Re:What about pipelining and keep-alive? (1)

Chrisq (894406) | about 2 years ago | (#38816585)

To reiterate my reply below no, pipelining offers very little gain vs true "multiplexing" and it represents a security risk.

security risk? like what?

GP is bullshitting, I can't think of any way that sending exactly the same data from the same host can introduce security issues if you don't close and reopen the connection between each one.

Re:What about pipelining and keep-alive? (1)

ArsenneLupin (766289) | about 2 years ago | (#38816765)

GP is bullshitting, I can't think of any way that sending exactly the same data from the same host can introduce security issues if you don't close and reopen the connection between each one.

IIRC, some attacks against SSL were based on pipelining (where a malicious man-in-the-middle was somehow injecting its own data into the connection, making it look like it was pipelined data from the original client)

Re:What about pipelining and keep-alive? (5, Informative)

zbobet2012 (1025836) | more than 2 years ago | (#38815533)

Browsers and servers almost all use persistent connections these days and have since at least the early 2000s. SPDY doesn't claim to do anything with this (the summary above is incorrect). SPDY does, however, implement several features of "pipelining" but in a more elegant manner. There are a host of issues with pipelining on the server side (it is a security risk; a description of why is here [f5.com]). SPDY effectively implements pipelining but without the associated security risks. It also implements more advanced features that allow the server to push data to the client without the client requesting it.

Re:What about pipelining and keep-alive? (0)

Anonymous Coward | more than 2 years ago | (#38815587)

There are a host of issues with pipelining on the server side (it is a security risk, a description of why is here [f5.com]).

That page does not describe any security risks in HTTP pipelining. Maybe you linked to the wrong page?

Re:What about pipelining and keep-alive? (3, Insightful)

Chatterton (228704) | more than 2 years ago | (#38815973)

The risk described in the page is a Denial Of Service risk.

Re:What about pipelining and keep-alive? (1)

naasking (94116) | more than 2 years ago | (#38815625)

HTTP pipelining just magnifies the DoS vulnerability that all HTTP servers inherently have.

Re:What about pipelining and keep-alive? (1)

spongman (182339) | more than 2 years ago | (#38815697)

meh. there's plenty of stuff pipelining does allow you to do out of order and still comply. and pipelining only hinders your ability to mitigate DoS attacks if your metrics are trivial.

reminds me not to buy f5.

Re:What about pipelining and keep-alive? (0)

Anonymous Coward | about 2 years ago | (#38816473)

If you're not going to buffer responses you can't do anything out-of-order. And if you're going to buffer you're increasing your DoS attack surface, because a single client can consume many times the number of resources compared to a no-pipelining case, and with no related increase in the resources required at the client. You can pick the balance point between those two, but you're going to make the tradeoff no matter what metrics you collect.

Re:What about pipelining and keep-alive? (0)

Anonymous Coward | about 2 years ago | (#38816301)

> push data to the client without the client requesting it

IF (bandwidthCap - client.traffic(date.month())) < epsilon THEN
      spdy.push(client, bigfile.dat) # Profit!!!
ENDIF

Re:What about pipelining and keep-alive? (1)

bn-7bc (909819) | about 2 years ago | (#38816675)

Well, more along the lines of: a forum notifies the clients that are viewing a particular thread when a new post is made, instead of the browsers having to poll/refresh to check. This can save a lot of load on a popular forum. Disclaimer: this is a quick post from work, so I did not have time to check SPDY out in detail. If this is nonsense please don't mod me down, but if you have time a correction would be appreciated.

Re:What about pipelining and keep-alive? (0)

Anonymous Coward | more than 2 years ago | (#38815567)

If I remember correctly, HTTP 1.1 does have some buggy implementations. Some proxy software is problematic. It's significant if your ISP is using a transparent proxy for HTTP connections or if you are in an enterprise network.

SPDY might be a solution.

 

Re:What about pipelining and keep-alive? (5, Informative)

grmoc (57943) | more than 2 years ago | (#38815867)

As one of the creators of SPDY..

No, HTTP suffers from head-of-line blocking. There is no way around that.
HTTP also doesn't have header compression (which matters quite a bit on low-bandwidth pipes) nor prioritization to ensure that your HTML, javascript and CSS load before the lolkittens :)
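
The header-compression point is easy to demonstrate: request headers are repetitive text. SPDY uses zlib with a preset dictionary of common header names; the plain-zlib sketch below (sample headers invented for the demo) already shows the shrinkage, and the preset dictionary helps even more on short headers.

```python
import zlib

# Rough illustration of why header compression matters: typical HTTP
# request headers are repetitive ASCII and compress well. SPDY uses
# zlib with a preset dictionary; plain zlib is shown here for brevity.

headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko Firefox\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Cookie: session=abc123; tracker=xyz789\r\n\r\n"
).encode()

compressed = zlib.compress(headers)
print(len(compressed) < len(headers))  # True
```

On a low-bandwidth uplink, shaving a couple hundred bytes off every request adds up, since headers are resent on each request of a page load.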

Re:What about pipelining and keep-alive? (1)

Jerome H (990344) | about 2 years ago | (#38816153)

Correct me if I'm wrong, but isn't everyone pushing javascript AFTER the lolkittens so the user sees the content before interacting with it?

Re:What about pipelining and keep-alive? (1)

Pieroxy (222434) | about 2 years ago | (#38816597)

Depends. On a typical commercial website you usually have the scripts you care about - the scripts your page depends on - and javascript you don't care all that much about - analytics, ads, etc.
The former is usually placed in the header, the latter in the footer.

YMMV of course.

Re:What about pipelining and keep-alive? (1)

Anonymous Coward | about 2 years ago | (#38816163)

Can you elaborate on the priority for the CSS/javascript? I thought a protocol shouldn't bother with the upper level content and it was the browser's task to decide which resources to load first, since after all, it first has to do the HTML parsing to figure out what it actually has to download.

Re:What about pipelining and keep-alive? (5, Funny)

symbolset (646467) | about 2 years ago | (#38816173)

Now that's a comment you don't see every day. Glad you're thinking of the lolkittens. They're precious.

Re:What about pipelining and keep-alive? (3, Insightful)

FireFury03 (653718) | about 2 years ago | (#38816269)

No, HTTP suffers from head-of-line blocking.

Head of line blocking is a feature of TCP, and my (very cursory) understanding is that SPDY still uses TCP so how is this not still a problem?

A technologically better solution would probably be to use SCTP instead of TCP, but it unfortunately suffers from the fact that SCTP is only 12 years old, so Microsoft have stated that they have no intention of ever supporting it. However, despite MS's lack of support, that doesn't prevent the possibility of using SCTP in environments where it is available.

Re:What about pipelining and keep-alive? (5, Informative)

Anonymous Coward | about 2 years ago | (#38816435)

Yes, the layer-4 protocol still requires in-order delivery. But lost packets are not the sort of problem SPDY is trying to solve. SCTP might also be a good idea, but you'd still want SPDY on top of it instead of HTTP, because it's faster even assuming perfect, in-order packet arrival.

But SPDY adds encapsulation within the TCP stream so data frames from multiple SPDY streams can be interleaved. It's like opening a bunch of separate TCP streams (like browsers do now) but with a lot less overhead. It's also less complicated to implement than HTTP pipelining because each request/response gets its own SPDY stream, so the underlying HTTP server can treat them as fully independent connections.

With HTTP, even with pipelining, the server must respond to requests in the order they are received. So if you request something slow -- say a DB query -- and then a bunch of static images, HTTP will have to wait for the slow request, send that response, and then send the images. With SPDY each request creates a new SPDY stream, and the server can send back responses in each stream without respect for the order in which they arrived. So in the same slow-DB + static images scenario SPDY would be able to immediately start sending back images and then send back the DB response when it was ready. SPDY could even start sending images, and when it sees that the DB response is half-ready and has a higher priority, interrupt the image transfer, send the first half of the DB response, complete the image response, and then complete the DB response.
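
The out-of-order behavior described above can be sketched as frames tagged with stream IDs. The frame layout here is invented for the demo (SPDY's actual wire format has typed control/data frames); the point is only that tagging lets a fast response finish while a slow one is still in flight.

```python
# Toy model of SPDY-style multiplexing: each response is chopped into
# frames tagged with a stream ID, so a fast static image (stream 3)
# can complete before a slow DB-backed response (stream 1) even though
# stream 1 was requested first. Frame layout invented for the demo.

frames = [
    (1, "db-part-1"),   # slow response starts...
    (3, "img-part-1"),  # ...but the image interleaves on the same TCP pipe
    (3, "img-part-2"),  # image finishes first
    (1, "db-part-2"),   # DB response completes later
]

def reassemble(frames):
    # group frames by stream ID, preserving per-stream order
    streams = {}
    for stream_id, chunk in frames:
        streams.setdefault(stream_id, []).append(chunk)
    return {sid: "".join(parts) for sid, parts in streams.items()}

print(reassemble(frames))
```

Within one stream, order is still preserved; only the interleaving between streams is free, which is what plain HTTP pipelining cannot do.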

Re:What about pipelining and keep-alive? (2)

WaffleMonster (969671) | about 2 years ago | (#38816653)

As one of the creators of SPDY..

No, HTTP suffers from head-of-line blocking. There is no way around that.
HTTP also doesn't have header compression (which matters quite a bit on low-bandwidth pipes) nor prioritization to ensure that your HTTP, javascript and CSS load before the lolkittens :)

Please tell me you're joking.

SPDY is layered on top of TCP... **TCP** suffers from head-of-line blocking and therefore so does SPDY.

By collapsing everything into a single connection you induce *more* latency on low speed, high latency connections because the chance of idling the link when waiting for an ACK/response with one TCP session or waiting for the next request increases vs multiple concurrent TCP sessions.

You don't need to invent a new protocol for header compression if you intend to always use TLS anyway. That is stupid... Just enable compression within TLS.

I would love to see a prioritization scheme that works, but the cold hard truth is you won't get much better than simple heuristics that don't require prioritization anyway. The dependency graph is discovered late and simply isn't knowable ahead of time. You can do something like PGO to figure it out, but this only works with static content.

If we are going to bother with a new protocol it needs to be a new *IP* layer protocol or at least layered on UDP... Anything less is akin to purchasing "Internet accelerator software" from an infomercial.

Re:What about pipelining and keep-alive? (1)

Anonymous Coward | about 2 years ago | (#38816787)

I can haz SPDY?

Real issue is... (5, Insightful)

blahplusplus (757119) | more than 2 years ago | (#38815419)

... embedded links that activate scripts/contact other adservers, etc. There is so much junk embedded in modern web pages that most users have no clue how much identifiable information is being harvested from their client.

Re:Real issue is... (1)

Anonymous Coward | more than 2 years ago | (#38815621)

"Real issue is embedded links that activate scripts/contact other adservers, etc. There is so much junk embedded in modern web pages that most users have no clue how much identifiable information is being harvested from their client."

So does Google's SPDY solve that problem or only contribute to it? Or is the problem you mention and SPDY orthogonal?

Re:Real issue is... (3, Interesting)

jeti (105266) | more than 2 years ago | (#38815821)

Too true. Installing the Firefox NoScript extension has been an eye-opener for me.

Re:Real issue is... (1)

symbolset (646467) | about 2 years ago | (#38816191)

Most "IT security professionals" don't know how inept they are either. It's a human trait: we all have blind spots and the biggest one is the breadth of our incompetence.

HTTP 3.0 and 4.0 nightly already released (-1)

Anonymous Coward | more than 2 years ago | (#38815427)

Get it from nightly.mozilla.org today. Of course your company is rolling out IE6 again which means HTTP 1.1 for you.

remove that Adsense first? (3, Insightful)

Anonymous Coward | more than 2 years ago | (#38815439)

Maybe Google could remove all the useless Google Adsense ads first because the main reason why pages load so slow is because of the million ad calls to Google?

Amen to that - GA as well (1)

Anonymous Coward | more than 2 years ago | (#38815467)

They could at least make it a single one - but as someone already pointed out, there's so many cross-site and domain requests on a single page, and they all want to get their own google analytics report as well :)

Re:remove that Adsense first? (1)

simoncpu was here (1601629) | about 2 years ago | (#38816219)

Ads are OK. They're the reason why most things in life (i.e., the Internet) are free.

Re:remove that Adsense first? (3, Insightful)

Anonymous Coward | about 2 years ago | (#38816281)

Yes, because before ads, there was no culture whatsoever and people didn't communicate. What was the point anyway? It's not like you could monetize it somehow...

Protip: in the real world, content precedes advertisements... strangely, even on the Internet... I know you're probably too young to remember, but there _was_ a time when the Internet had more content than ads. Waaaaay back.

Also, most things on the Internet are free, because most things in the Universe are free anyway and because... if you charged money for your blog... do you think people would even consider paying you? LOL

Enjoy your stupid capitalist utopia.

Re:remove that Adsense first? (2)

Pieroxy (222434) | about 2 years ago | (#38816623)

If you continue mixing everything in one big meta problem, you'll fail to see the big picture.

Fact: People need to eat. They also usually need a roof over their head.
Consequence: People need money.

As a result of this, most people are not willing to work for free, because it is usually their work that makes the ends meet.

Look at Wikipedia. Free of ads, but they come whining every other year for money. All in all, they do put ads on their site, but it's ads for themselves.

And before ads were invented the web was mostly academics and porn. Ah yes, and a few fan websites that were so full of crap they were barely usable.

Re:remove that Adsense first? (2)

symbolset (646467) | about 2 years ago | (#38816231)

You have to dig that deep for a dig at Google? That's pretty lame. Google can't do anything about how novice blogs include their ads.

Re:remove that Adsense first? (0)

Anonymous Coward | about 2 years ago | (#38816497)

...You do realize that most websites would simply use a different provider for ads if they did this? And that those ads would probably be more intrusive/annoying? Google's text ads are barely any overhead compared to the rest of a page loading.

Grossly Incorrect Summary (3, Interesting)

zbobet2012 (1025836) | more than 2 years ago | (#38815491)

By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies

HTTP 1.1, which is supported by everything newer than IE5 (at least), utilizes persistent connections. You can verify this yourself with Wireshark in seconds. SPDY's optimizations largely revolve around "pipelining", but without some of the issues that it causes.
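
The persistent-connection point can also be checked without Wireshark, e.g. from Python's `http.client` against a throwaway local server (a self-contained sketch, unrelated to SPDY itself): if HTTP/1.1 keep-alive works, two sequential requests reuse the same socket object.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Demo: HTTP/1.1 keeps the TCP connection open between requests, so
# two requests on one http.client connection reuse the same socket.
# A throwaway local server makes the demo self-contained.

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enables keep-alive responses
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, fmt, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/a")
conn.getresponse().read()
first_socket = conn.sock            # same object iff the connection persisted
conn.request("GET", "/b")
conn.getresponse().read()
print(conn.sock is first_socket)    # True
conn.close()
server.shutdown()
```

With an HTTP/1.0 server (no keep-alive), `http.client` would tear down and rebuild the socket between the two requests, which is exactly the per-request-connection overhead the summary exaggerates.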

Re:Grossly Incorrect Summary (5, Funny)

VortexCortex (1117377) | more than 2 years ago | (#38815919)

By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies

... SPDYs optimizations largely revolve around "pipelining", but without some of the issues that it causes.

No no no... You're using the wrong definition of "pipelining". First you must realize the Internet is a Series of Tubes. The Series part is important because TCP is a Serial protocol. If any Tube in the Series cracks and data is lost, endpoints start spewing packets all over the place in alarm! SPDY's optimizations revolve around "pipelining" -- That is: Lining the Pipes to prevent such events from happening in the first place.

HTTP1.1 is OLD! The pipes built around that time are OLD too. The HTTP1.1 "pipelining" is starting to wear through after 17 years... Connecting the Tubes is expensive too; If a Header is compressed there's less chance of part being lost in a data leak when it flows through the tubes.

There is an underground movement to get rid of the whole ridiculous idea of Tubes. I mean, Why would you take something as permeable as a NET and build a Series of Tubes out of it?! OF COURSE YOU NEED PIPELINING if you want it to be efficient!

However, what if you didn't need a "pipe" what if you could get your information from the Sea of Data by Casting about the Web and sorting out the results on your end? You could simply keep trying until you were satisfied with the data you had. Even better if relevant data could be naturally organized -- swarm together -- in a sort of BitTorrent so you could get the data in the Net with less Casting... One could even take the center out -- Decentralize it -- to help prevent conflicts about which data came from what side. I mean: Who cares if someone wants to DownLoad a car so much that it's undrivable from all the weight? It doesn't make your car any less usable! Besides, the naysayers are all Hypocrites: They have to participate in the things they say are wrong just to even see into what we're doing -- You have to Peer to Peer!

Don't even get me started on Cloud Computing! Seriously... It's VaporWare!

Re:Grossly Incorrect Summary (1)

YoopDaDum (1998474) | about 2 years ago | (#38816531)

Modded "5, Funny" I could have understood, but "5, Informative"? We need a "meta-funny" modding option.
And I for one would welcome the ones who modded this Informative to reply to this comment explaining their rationale. Don't be shy people, we already love you!

Re:Grossly Incorrect Summary (1)

Pieroxy (222434) | about 2 years ago | (#38816637)

You are correct!! 30% of the mods on this are Informative!!!

That anyone could find this "informative" clearly shows the mod points go to the wrong people. I mean, if you have that little knowledge of networks in general, you shouldn't be moderating these discussions.

Alternatives? (2)

Ostracus (1354233) | more than 2 years ago | (#38815493)

What about alternatives [geant.net] like UDT, and SCTP?

I before E... (5, Funny)

grcumb (781340) | more than 2 years ago | (#38815495)

"[T]he SPDY (pronounced 'speedy') protocol ....

No WAY am I pronouncing it 'speedy'. I'm a callin' it 'spidey'. That way, I can build wearable network monitors which vibrate at high frequencies when the web server gets bogged down.

And then.... I'll be able to interrupt my boss in mid-sentence and say, "Hang on, my spidey sensors are tingling..."

Re:I before E... (5, Funny)

Surt (22457) | more than 2 years ago | (#38815811)

"I better check my web servers."

HTTP 2.0? (1)

Niscenus (267969) | more than 2 years ago | (#38815575)

We haven't done that yet? Wasn't that a late nineties thing? We're still on a 10 and 20 year old protocol!? Why isn't slashdot using html 1.1? Tables not good enough? As someone who still catches up on the IEEE from time to time, this is actually surprising. No wonder lynx hasn't needed much upgrading for connections beyond bug fixes.

That means all advantages have been the physical pathways (that includes wireless) and TCP! Wow! Based on the fact psychics made it to the front page of slashdot without James Randi popping up, I can only presume most of slashdot has no idea how bizarre that is.

Re:HTTP 2.0? (0)

Anonymous Coward | more than 2 years ago | (#38815659)

That is a feature, not a bug.

Re:HTTP 2.0? (2)

Hadlock (143607) | more than 2 years ago | (#38815797)

We have other protocols, like FTP for example, that handle things besides web pages. HTTP is a pretty wide open protocol and allows all sorts of things to be jammed in to it, which is why it's worked so well in the past.
 
Also, as they say, "if it ain't broke, don't fix it".

Re:HTTP 2.0? (2)

Pieroxy (222434) | about 2 years ago | (#38816649)

We have other protocols, like FTP for example, that handle things besides web pages. HTTP is a pretty wide open protocol and allows all sorts of things to be jammed in to it, which is why it's worked so well in the past.

Also, as they say, "if it ain't broke, don't fix it".

With all the NAT going around, FTP is becoming less and less usable.

The biggest pro for HTTP is its universal support, not its openness.

As far as your last sentence is concerned, we'd still be fighting for a little fire if we followed it all along...

Re:HTTP 2.0? (1)

VortexCortex (1117377) | more than 2 years ago | (#38815997)

Ah, I see... So, if it's not broke, take it apart and re-engineer it until it is. I'm filing this under my subtle methods for maintaining job security.

The Model T wasn't Broken (2)

Niscenus (267969) | about 2 years ago | (#38816283)

HTTP's inception predates the scale at which the Internet is used today, and like IPv4, the failure to anticipate shifts in use and data access makes it far less efficient than it could otherwise be. SPDY, along with many of the streaming protocols, aligns better with modern Internet practices than the "get this page now" technique of HTTP.

I'm on the second tier of my ISP's access rate, and even though many pages should load in the theoretical second, they don't, due to modern styling and plugin/include/addon calls. I honestly would have thought that what SPDY does would already be more commonly implemented, and that data access in general would be moving to a peer/metapeer networking solution to reduce demand on resources in general. Some of the assumptions of its design come out of the late-eighties and early-nineties anticipation of the large-first, small-second distribution common among universities, government installations and large commercial systems of the time, where a large LAN was maintained with few nodes possessing Internet access in both directions.

HTTP 1.1 is no more broken relative to HTTP 2.0 than XFree86 4.x is relative to its successors (disregarding explicit bug fixes, as that's not usually applicable to a protocol), but regardless of where you came down on the forks and licensing, you probably aren't running it on any *nix under 5 years old.

Countermeasure for Nokia/RIM/O... speedup-proxies? (5, Interesting)

q.kontinuum (676242) | more than 2 years ago | (#38815623)

Several mobile phone companies and some browsers offer special proxies nowadays to speed up the browsing experience on mobile phones and to reduce data usage for customers by serving prerendered or otherwise optimized/reduced pages. This might severely reduce Google's ability to collect user data from these users on the visited web pages (unless the user is logged in to Google+ or the like with his browser, which might be unlikely given that for social networks there are usually separate apps).

Is this now a step to reduce the need for these proxies in order to protect their own business?

Re:Countermeasure for Nokia/RIM/O... speedup-proxi (1)

Anonymous Coward | more than 2 years ago | (#38815749)

Several mobile phone companies and some browsers offer special proxies nowadays to speed up browser experience on mobile phone and to reduce data usage for customers by serving prerendered or otherwise optimized/reduced pages.

No they don't. They transparently proxy everything to reduce the amount of traffic through their upstreams and peers - because telcos rape each other with bills in the same way they rape their own customers. That customers get a faster, more responsive connection to popular sites is only an unintended side effect.

Re:Countermeasure for Nokia/RIM/O... speedup-proxi (2)

q.kontinuum (676242) | about 2 years ago | (#38816337)

Opera (offering Opera Turbo), Nokia (http://www.developer.nokia.com/Develop/Series_40/Nokia_Browser_for_Series_40/) etc. are not the network providers, so your argument is moot. They don't save any bandwidth on their side by offering the proxy (quite the opposite), nor do they just cache the traffic.

Waiting for ad.doubleclick.net ...zzz... (1)

Frogking (126462) | more than 2 years ago | (#38815627)

Great! But will it do anything to speed up pages that refuse to display until the advertisements do? Even Slashdot takes longer to display because of some third-party ad server.

Re:Waiting for ad.doubleclick.net ...zzz... (1)

scdeimos (632778) | more than 2 years ago | (#38815753)

No ads here. Maybe I clicked that "Disable Advertising" thingy on the front page at some point.

Re:Waiting for ad.doubleclick.net ...zzz... (5, Interesting)

VortexCortex (1117377) | more than 2 years ago | (#38816013)

I have that option too, but I'm willing to allow Slashdot to make whatever meager income they can by my presence since they're kind enough to allow my positive (and negative) contributions.

Re:Waiting for ad.doubleclick.net ...zzz... (1)

oobayly (1056050) | about 2 years ago | (#38816359)

Good to see I'm not the only one. I don't tend to notice the ads because I'm too busy reading the comments (the article? don't be daft), but there have been a few occasions when an ad has been useful (like RC helicopters).
Without the income, Slashdot would be slashdotted daily.

Re:Waiting for ad.doubleclick.net ...zzz... (1)

Surt (22457) | more than 2 years ago | (#38815851)

Yes. Or, it should. Because your browser definitely doesn't have to wait on those. Mine doesn't.

Re:Waiting for ad.doubleclick.net ...zzz... (4, Funny)

FireFury03 (653718) | about 2 years ago | (#38816293)

Great! But will it do anything to speed up pages that refuse to display until the advertisements do? Even Slashdot takes longer to display because of some third-party ad server.

XHTML largely fixed that by banning document.write(). Unfortunately the "industry" didn't like that and produced the enormous brain-fart known as HTML5, which went back to allowing all the crazy shit that XHTML had banned for a good reason.

premature optimization is the root (1)

decora (1710862) | more than 2 years ago | (#38815691)

of something... i cant remember it right now, because, well, my text reading program has been optimized for ARM NEON, but my smart phone doesn't have NEON, or at least, this version of the driver i'm using doesn't work with NEON, and i tried to port it, but its like i can't find a good way to work around the specialized vectorization code inside of the Eigen2 library because it doesnt work on certain platforms. but if it did work, my text editor would be like 75% faster than some luser running it on a single CPU machine with no NEON optimization.

what im trying to say is, imagine a beowulf cluster, and then imagine you fill it with sand, and drain the sand out, and you film it.

Re:premature optimization is the root (1)

Surt (22457) | more than 2 years ago | (#38815857)

I think the internet has proven sufficiently slow that it is now officially time to go ahead and get on top of optimization. It's premature by about negative one decade at this point.

Re:premature optimization is the root (1)

symbolset (646467) | about 2 years ago | (#38816239)

Have you considered using a hammer to pound that nail? It sounds like the limp fish you're using isn't getting it done.

SPYW (1)

microbee (682094) | more than 2 years ago | (#38815839)

At least it's not SPYW... thank God.

It'd be a faster still without all the redirects (2, Informative)

Anonymous Coward | more than 2 years ago | (#38815883)

Says it all. The connection setup isn't the problem, it's being bounced to 10 different sites other than the one you wanted to visit, half of which are overloaded at any given time.

Fix that instead, Google, and there's no need to mess with the standards anyway.

What above the layer below? (4, Interesting)

Lisandro (799651) | more than 2 years ago | (#38815965)

SPDY's goal is to reduce web page load times through the use of header compression, packet prioritization, and multiplexing (combining multiple requests into a single connection).

I'd like to see SCTP [wikipedia.org] getting some love, which sadly seems unlikely if it hasn't happened so far. It's a very simple protocol mixing the good parts of both TCP and UDP, plus it supports multiplexing and prioritization off the bat.

Re:What above the layer below? (1)

FireFury03 (653718) | about 2 years ago | (#38816323)

I'd like to see SCTP [wikipedia.org] getting some love, which sadly seems unlikely if it hasn't happened so far. It's a very simple protocol mixing the good parts of both TCP and UDP, plus it supports multiplexing and prioritization off the bat.

Unfortunately, whilst it's a very good protocol, it isn't supported by Windows, and Microsoft is on record as saying they have no intention of ever implementing it. I guess this is no surprise - the protocol is very new (only 12 years old) and not in common use; Microsoft traditionally waits until technologies have been in common use for a good 10-15 years before bothering to produce a half-arsed, broken implementation of them (see standards like C99 for details).

That said, it would be nice to see SCTP being used for this sort of thing automatically where it is available, with fallback to TCP where not.

Re:What above the layer below? (5, Informative)

rdebath (884132) | about 2 years ago | (#38816355)

That's a really poor way of describing SCTP. Firstly, the relationship between TCP and UDP is such that TCP could be built entirely on top of UDP; the only reason it isn't in practice is so that the port numbers for UDP and TCP are distinct. On the other hand, the best description of UDP is actually "bare IP packets with port numbers".

SCTP is not that; it would probably be most accurate to describe it as a protocol with multiple TCP streams in both directions within one connection. Because it's within a single connection, a 'stream' can be very small (i.e. a little message) and still be efficient, and because there are multiple streams, messages don't have to wait for each other; though they can if you want. It is probably simpler than TCP, but only because TCP has had so much bolted on.

But you are absolutely correct, this would be a very good protocol for throwing a load of tiny requests at a web server and getting the results back as soon as they're ready. BUT, mixing it with SSL would not be very simple, I guess you'd have to do what OpenVPN does.

Re:What above the layer below? (1)

dkf (304284) | about 2 years ago | (#38816619)

That's a really poor way of describing SCTP. Firstly the relationship between TCP and UDP is such that TCP could be built entirely ontop of UDP, the only reason it isn't physically is so that the port numbers for UDP and TCP are distinct. On the other side the best description of UDP is actually "Bare IP packets with port numbers".

It also adds packet content checksums, so you're much less likely to get bad data delivered. It's less important than it used to be (due to improvements in physical network quality) but even so, it's a huge help since it lets you assume that the data is at least uncorrupted by the transfer process itself.

Adds bufferbloat and reduces VoIP sound quality (5, Interesting)

haffy (66129) | more than 2 years ago | (#38815969)

SPDY is a great example of someone thinking only of their own application.

By increasing the initial window size from 3 to 10 they add to the bufferbloat effect (at the microscopic level) and increase Jitter from tolerable 38 ms to intolerable 126 ms on a 1 Mbit/s ADSL line. This level of jitter severely affects VoIP sound quality. And for this calculation I have assumed that the web browser only uses one TCP connection to load the page; if it uses two TCP connections the Jitter may double.

But hey! What does any application developer care about other applications? They are only concerned about getting their own application sped up.

When you improve the performance of your application, you should think about how it degrades the performance of other applications. If someone recommended increasing the O/S priority level of the web browser to the maximum, so all your other applications slowed down to a halt while the web browser was running, you would probably object. The increased initial window size is a comparable recommendation, but at a network buffering level, so very few people understand its negative side effects.

We all want faster loading web pages, but we also want other applications to respond faster, and we also want perfect VoIP sound quality without the walkie talkie effect caused by high latency or jitter.
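For the curious, the quoted jitter numbers fall out of simple serialization arithmetic: a full initial-window burst has to drain through the modem ahead of any VoIP packet queued behind it. This is a sketch; the per-packet overhead factor is my assumption, back-solved from the quoted 38 ms figure rather than taken from any spec.

```python
# Back-of-the-envelope check of the jitter figures quoted above. An entire
# initial-window burst must serialize at line rate before a VoIP packet
# queued behind it can leave a 1 Mbit/s ADSL link.
LINK_BPS = 1_000_000        # 1 Mbit/s ADSL
PACKET_BITS = 1500 * 8      # one MTU-sized TCP segment
OVERHEAD = 38 / 36          # link-framing fudge factor implied by the 38 ms figure

def burst_delay_ms(initcwnd):
    """Serialization delay (ms) of a full initial-window burst of MTU packets."""
    return initcwnd * PACKET_BITS / LINK_BPS * 1000 * OVERHEAD

print(round(burst_delay_ms(3)))   # initcwnd = 3: the "tolerable" ~38 ms
print(round(burst_delay_ms(10)))  # initcwnd = 10: roughly the "intolerable" ~126 ms
```

As the parent says, doubling the number of connections doubles the burst, which is why the effect compounds so quickly on slow links.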

Re:Adds bufferbloat and reduces VoIP sound quality (0)

Anonymous Coward | more than 2 years ago | (#38816049)

Then don't use SPDY for VoIP applications?

Re:Adds bufferbloat and reduces VoIP sound quality (2)

mnot (71203) | about 2 years ago | (#38816235)

Actually, Jim Gettys has called for the adoption of SPDY (alongside the wider deployment of HTTP pipelining) to help mitigate bufferbloat.

Re:Adds bufferbloat and reduces VoIP sound quality (0)

Anonymous Coward | about 2 years ago | (#38816307)

Don't use VoIP on a 1 Mbit/s line that isn't really dedicated to VoIP. Besides, it's pretty rare to have an ADSL line that slow these days.

Re:Adds bufferbloat and reduces VoIP sound quality (0)

Anonymous Coward | about 2 years ago | (#38816331)

Quite true regarding the initial window size on TCP connections.

Maybe fixing pipelining should be the priority here not fucking around with TCP window sizes so you can push more flash ads faster into the eyes of your product.

Re:Adds bufferbloat and reduces VoIP sound quality (0)

Anonymous Coward | about 2 years ago | (#38816387)

Proper VoIP applications don't even use TCP as their transport protocol, as TCP verifies the checksum of each frame and re-requests bad frames to ensure data integrity (which is too slow for a VoIP application; VoIP typically uses small frames and just drops bad packets).

The TCP frame size will keep increasing if the error rate is low anyway, as TCP was designed to start with a small frame size and gradually increase it (to keep bad connections using small frames, thereby reducing network congestion).

Re:Adds bufferbloat and reduces VoIP sound quality (5, Informative)

FireFury03 (653718) | about 2 years ago | (#38816401)

By increasing the initial window size from 3 to 10 they add to the bufferbloat effect (at the microscopic level) and increase Jitter from tolerable 38 ms to intolerable 126 ms on a 1 Mbit/s ADSL line.

I can't really see how increasing the window size would increase jitter. Network bandwidth aside, the throughput of a TCP connection is a function of latency and window size. Increasing the window size simply increases the throughput on high-latency networks.

You're still limited to MTU-size packets (probably 1500 octets), and if you're using a priority queuing discipline this gives you a network jitter of about 12ms on a 1Mbps connection: (1500*8)/1000000 = 0.012 since the priority queue will insert the small high priority RTP packets between the large low priority packets.

When you improve the performance of your application, you should think about how it degrades the performance of other applications. If someone recommended increasing the O/S priority level of the web browser to the maximum, so all your other applications slowed down to a halt while the web browser was running, you would probably object. The increased initial window size is a comparable recommendation, but at a network buffering level, so very few people understand its negative side effects.

Increasing the window size is not comparable to increasing the priority of the traffic. I would agree with you if the application were setting the ToS flags(*) in an abusive way, but the window size just affects the maximum throughput for a connection given a specific latency connection. Given that latency isn't something generally very controllable, this can't even be regarded as an effective method of intentionally throttling throughput (shaping the traffic on the router would be more sensible here).

(* routers out on the internet won't generally pay attention to ToS flags, so setting them abusively wouldn't normally give you any advantage anyway. However, routers at the ends of low bandwidth links, such as ADSL, should be paying attention to ToS flags in order to prioritise latency-sensitive protocols. If you're not doing this and just relying on FIFO queuing then you're pretty much screwed already for VoIP unless you're using the connection for nothing else).
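To make the footnote concrete: getting VoIP packets into the router's priority queue is a one-line socket option on the sending side. A sketch (the DSCP EF marking is the conventional value for voice media, not something mandated by anything above, and whether your ADSL router honours it is up to its queuing configuration):

```python
import socket

# Mark an RTP media socket with DSCP EF (46), i.e. ToS byte 0xB8, so a
# ToS-aware priority queue on the ADSL router can jump it ahead of bulk
# TCP traffic, as described in the footnote above.
rtp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtp_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

# Worst case with priority queuing: one full MTU packet is already on the
# wire at 1 Mbit/s - the ~12 ms figure computed in the comment above.
worst_case_jitter_s = (1500 * 8) / 1_000_000
print(worst_case_jitter_s)  # 0.012
```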

Re:Adds bufferbloat and reduces VoIP sound quality (0)

Anonymous Coward | about 2 years ago | (#38816535)

SPDY is a great example of someone thinking only of their own application.

Sure, idiot, because Google doesn't do Google Talk, Voice or Hangouts.

Tethering (1)

jones_supa (887896) | about 2 years ago | (#38816241)

Do you know any methods to improve tethering? I use my 3G phone as a WiFi hotspot. It seems that when the connection is idle for a while, it takes some time to kick it back up to full speed. Sometimes when I try to post a message to some forum, I have to load another page in the background to wake up the link.

Does TCP slow-start make things worse here? Would the connection run better if I encapsulated it in a single tunnel? Should I send some constant keep-alive data? Can I force an Android phone to stay at full network speed? And so on.
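One thing worth experimenting with (an assumption on my part, not a known fix): enable aggressive TCP keepalives on a long-lived connection so the link never looks idle. On Linux/Android that looks roughly like this; whether it actually stops the 3G radio from dropping into a low-power state depends on the phone.

```python
import socket

# Sketch: keep a tethered connection from going idle by enabling TCP
# keepalives with an aggressive timer (option names are Linux-specific).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)  # idle secs before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)  # secs between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)    # failed probes before giving up
```

Note this answers the "constant keep-alive data" question rather than the slow-start one; slow-start after idle is a kernel-side behaviour (`tcp_slow_start_after_idle` on Linux) that an application can't easily override.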

How many of you read it as "Google Spy-d" (0)

Anonymous Coward | about 2 years ago | (#38816457)

That's what my brain translated!

Supported since Firefox 11 (5, Informative)

Skuto (171945) | about 2 years ago | (#38816631)

Go to about:config and switch network.http.spdy.enabled.

Mozilla has been quite critical of some Google technologies (Dart, Native Client, ...) that it saw as not truly open, closing down the internet to become the GoogleWeb. SPDY got implemented, though. So I guess it's a keeper and might see wider adoption.

Re:Supported since Firefox 11 (1)

makomk (752139) | about 2 years ago | (#38816681)

From what I recall from reading the bug report about adding it, SPDY was just on the borderline of being actually implementable by other browsers - undocumented and requiring some hairy low-level changes to their SSL implementation, but simple enough to be doable. There's probably a reason it's not enabled by default though. Also, I think bits of it may have been effectively reverse-engineered from the behaviour of the Google services using it.

Never used multipart mime? Never used AJAX? (0)

Anonymous Coward | about 2 years ago | (#38816763)

[...] and multiplexing (combining multiple requests into a single connection). By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies."

Uuum, that’s why there is the multipart mime type! Also, if you do AJAX, you can request as much stuff as you want over a single HTTP connection.
This is not new, guys! I've been using it since 2003! That's 9 years! (Yes, there was no AJAX when I started. But an <object> tag with a <form> did just fine for a packet-based tunnel of "Whatever The Fuck I Want (TM)". ;)

Also, all style pictures can be incorporated in one common image anyway. Also done since back then.

The header compression? Well, I always thought that should be its own layer right on top of TCP, or even IP. Encryption already is, so this is a no-brainer.

At least we're finally getting those things at all. Could’ve been solved better though. Luckily I don’t have to care, and can just continue to use my own, better, solution.
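For what it's worth, the header-compression part is easy to sketch: SPDY runs header blocks through zlib primed with a preset dictionary of common header names, so even the first request compresses well. A rough illustration - the toy dictionary here is mine, not SPDY's actual (much longer) one:

```python
import zlib

# Sketch of SPDY-style header compression: zlib with a preset dictionary
# of strings that commonly appear in HTTP headers. SPDY's real dictionary
# is longer; this short one is purely illustrative.
HEADER_DICT = (b"optionsgetheadpostputdeletetraceacceptaccept-charsetaccept-encoding"
               b"accept-languagehostuser-agentcookiereferercontent-lengthcontent-type")

headers = (b"GET / HTTP/1.1\r\nhost: example.com\r\n"
           b"user-agent: Mozilla/5.0\r\naccept-encoding: gzip\r\n\r\n")

comp = zlib.compressobj(zdict=HEADER_DICT)
compressed = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

decomp = zlib.decompressobj(zdict=HEADER_DICT)
assert decomp.decompress(compressed) == headers  # round-trips losslessly

print(len(headers), len(compressed))  # compressed form is noticeably smaller
```

The dictionary is what makes the scheme pay off on short, one-shot header blocks where plain deflate has nothing to back-reference yet.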

Although (3, Funny)

sakura the mc (795726) | about 2 years ago | (#38816777)

The speed benefits provided by this new protocol will rapidly be negated by the ability to cram more shit into each connection.

What about caching proxies & web filtering? (4, Insightful)

mangobrain (877223) | about 2 years ago | (#38816795)

By choosing TLS as the underlying transport, inspecting traffic for debugging or monitoring purposes becomes nigh on impossible. It will only be possible to capture traffic at the application level, after decryption and decapsulation (which, depending on the nature of any given bug, may be too late), or if it originates from servers you own (i.e. the server's private key is available to you). SPDY proxies will become dumb pipes, unable to perform any sort of content caching, unless users are willing to accept that the proxy may perform MITM on their traffic. For such MITM to be feasible, users need to trust a CA certificate associated with the proxy, and rely on the proxy performing upstream certificate checks on their behalf, because the browser no longer has a single end-to-end TLS session with the origin server. In corporate and other "managed" environments, people often find this acceptable in moderation*, but I would be worried if this became the norm - it creates a mind-set whereby it's acceptable to breach users' privacy, and the more proxy vendors have incentive to implement the necessary (proprietary) code, the more scope there is for them to get it wrong, completely undermining security.

Not to mention that introducing a mux/demux layer in between the network traffic and the individual requests/responses greatly increases the complexity needed to implement a proxy compared to plain-old HTTP.

Losing the functionality of caching proxies would seem counter to Google's goal of speeding up the Web. Losing the ability to monitor and filter network traffic will greatly diminish the ability of schools, employers, public hotspot providers etc. to enforce acceptable usage policies, to the extent that some - especially employers - may simply resort to blocking web access outright.

IMHO, the behaviour of SPDY proxies needs to be tightly specified, if they are going to exist. Standardise MITM behaviour, so that users and admins are aware of the pros and cons. Make it mandatory that end users are warned when MITM is taking place. Perhaps introduce new extensions, such as the ability for the proxy to establish TLS with the browser using its own certificate, but transmit a copy of any origin server certificates corresponding to proxied requests, so that the browser isn't entirely at the proxy's mercy WRT certificate verification. Perhaps introduce the ability to multiplex requests to different domains on the same connection, something a browser can't do when talking directly to distinct origin servers.

Note that similar concerns apply to reverse proxies, used by website providers themselves to speed up their own infrastructure. It may seem desirable for both front-end caching and back-end servers to use SPDY, but establishing TLS sessions between two halves of the same infrastructure, over the LAN, will be detrimental to resource usage.

* Bear in mind that currently, MITM is only necessary for HTTPS sites, which means that there are vast swathes of in-the-clear HTTP traffic for which caching doesn't introduce any inherent security concerns. By making *everything* use TLS, MITM becomes a consideration if you want to perform *any* caching/filtering at all, not just caching/filtering of secure sites. If there is no longer any distinction between secure and insecure sites, how does even a responsible admin know where to draw the line?
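For reference, the MITM arrangement described above is already how existing intercepting proxies handle HTTPS today; in Squid 3.5+ it looks roughly like the fragment below. Treat this as a sketch: the certificate paths are placeholders and the certificate-generator helper's name and location vary between Squid versions.

```
# Intercept TLS, re-sign with our own CA, and cache/filter the plaintext.
# Clients must trust /etc/squid/mitm-ca.pem for this to work at all.
http_port 3128 ssl-bump cert=/etc/squid/mitm-ca.pem generate-host-certificates=on
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/spool/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1    # read the client's SNI first
ssl_bump bump all      # then impersonate the origin server
```

Every caveat in the parent comment applies: the browser's trust decision is delegated wholesale to the proxy's upstream certificate checks.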

Load More Comments
Slashdot Login

Need an Account?

Forgot your password?