
Google Cuts Chrome Page Load Times In Half w/ SPDY

CmdrTaco posted about 3 years ago | from the well-that-seems-optimistic dept.

Chrome 310

An anonymous reader writes "It appears Google has quietly implemented its SPDY replacement for HTTP not only in Chrome (well, we knew that), but also on its websites. All of them were recently updated with SPDY support, which addresses some of HTTP's latency issues. The result? Google says page load times were cut roughly in half. SPDY will be open source, so there is some hope that other browser makers will adopt it as well."


Most obvious use (1)

robot256 (1635039) | about 3 years ago | (#35781636)

Now you can refresh your Facebook page twice as fast!

Re:Most obvious use (1)

YodasEvilTwin (2014446) | about 3 years ago | (#35781688)

Heard of AJAX? You don't need to refresh your Facebook feed.

Re:Most obvious use (0)

Anonymous Coward | about 3 years ago | (#35781724)

you know... ajax also uses http protocol...

Re:Most obvious use (1)

eviljolly (411836) | about 3 years ago | (#35782092)

HTTP Protocol? Uh-oh...better get some money out of the ATM Machine for a new NIC Card.

Re:Most obvious use (1)

LordLimecat (1103839) | about 3 years ago | (#35782168)

You know, HTTP IS a protocol, even if it does have the word "protocol" as part of its name. I dont think it is incorrect to refer to it as one, any more than it is incorrect to refer to TCP as a protocol of IP.

sweet! (0)

Anonymous Coward | about 3 years ago | (#35781642)

50% faster first posts!

And this... (1)

marcello_dl (667940) | about 3 years ago | (#35781668)

is the "extend" part. Let's hope they don't get tempted to become incompatible.

Re:And this... (2)

M. Baranczak (726671) | about 3 years ago | (#35781854)

They're releasing the implementation under a BSD license. And unlike that other giant software company back in the old days, they don't have an overwhelming market share, so they can't just ram new standards down everyone's throats. If they make it incompatible, then nobody will use it. So it looks pretty good so far.

Re:And this... (3, Interesting)

Zocalo (252965) | about 3 years ago | (#35781962)

SPDY is (according to Google) going to be released as open source, so I'm hopeful that its development will be more akin to Mozilla's tack with its "Do Not Track" header: add support to your own browser, then throw it out there and see if the market is interested. IE9 already supports the "Do Not Track" header, and there are also signs of interest from websites, so that's looking good.

What would be even better, though, especially given that SPDY is really an extension to HTTP, akin to using gzip-compressed data, is if Google were also to write up and submit an RFC, or whatever mechanism the W3C uses to get HTTP extensions added to the standard, such as it is. SPDY seems very much like a win for both content providers and content consumers to me, so once the details are out there I'd like to think we'd see fairly rapid adoption by the browsers over the next several months, followed by back-end support from Apache, IIS, et al. in their next major releases.

Re:And this... (1)

hedwards (940851) | about 3 years ago | (#35782146)

Not releasing it to the W3C isn't as big a deal as it once was; they'll just throw it into HTML and people will have to guess whether or not it's supported.

Re:And this... (2)

Lennie (16154) | about 3 years ago | (#35782574)

Actually, there are people working on submitting it to the IETF as an RFC; it just takes time.

It just uses an extra header to get started and switch to the SPDY protocol. Not only that, the HTTP extension used to make the switch has already gone through most of the IETF process.
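The header-based switch Lennie describes matches the standard HTTP Upgrade mechanism. A sketch of what such a negotiation looks like on the wire; the `spdy/2` token and the exact exchange here are illustrative only (Chrome's deployed negotiation may differ, e.g. happening inside the TLS handshake):

```python
# Illustrative HTTP Upgrade-style negotiation (token "spdy/2" is an
# assumption for the example, not necessarily the deployed mechanism).
# The client advertises the new protocol on an ordinary request:
request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"Connection: Upgrade\r\n"
    b"Upgrade: spdy/2\r\n"
    b"\r\n"
)

# A server that understands the token answers 101 and switches protocols;
# one that doesn't simply serves the request as plain HTTP.
switch = (
    b"HTTP/1.1 101 Switching Protocols\r\n"
    b"Connection: Upgrade\r\n"
    b"Upgrade: spdy/2\r\n"
    b"\r\n"
)

assert b"Upgrade: spdy/2" in request
assert switch.startswith(b"HTTP/1.1 101")
```

The nice property of this scheme is graceful fallback: an old server ignores the header and nothing breaks.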

I wouldn't be surprised if the biggest holdup is actually WebSockets, because the new 'HTML5' WebSockets were found to be insecure, at least in combination with transparent caching proxies that don't implement HTTP properly. Java and Flash are just as 'insecure'; or rather, it is these proxies that can be fooled into caching the wrong information (think something like phishing).

The last WebSocket spec was implemented by at least Opera and the Firefox developers, but because these problems exist, the WebSocket protocol is disabled in their browsers.

Don't believe it! (-1)

Anonymous Coward | about 3 years ago | (#35781672)

Up to now, I've thought that Firefox is still the best :)


Embrace, Extend, ? (0)

ArhcAngel (247594) | about 3 years ago | (#35781698)

This is Google's version of Embrace, Extend, ?, Profit. It's just that Google really sucks at being EVIL [xkcd.com]

Re:Embrace, Extend, ? (4, Insightful)

bmo (77928) | about 3 years ago | (#35781756)

No, because the Microsoft way to embrace, extend, extinguish was to keep the "how to extend" part to itself and secret, like what they did with Kerberos.

This is open sauced. You are free to implement it in your own stuff.

You would have known that if you read the article.


Re:Embrace, Extend, ? (2)

reilwin (1303589) | about 3 years ago | (#35781846)

This is open sauced.

And it's a damn good sauce too!

Princess, I have altered the sauce! (0)

Anonymous Coward | about 3 years ago | (#35782238)

Pray I do not alter it further.

Re:Embrace, Extend, ? (0)

Anonymous Coward | about 3 years ago | (#35781878)

Not yet it isn't; from the article: "Google said that it intends to release SPDY as open source". The road to hell is paved with good intentions.

Re:Embrace, Extend, ? (2)

Lennie (16154) | about 3 years ago | (#35782606)

Actually, the article is wrong about this: the code is open source (Chrome, or at least Chromium, which it is based on, is an open source project after all), and there are drafts for the RFC.

Re:Embrace, Extend, ? (1)

ArhcAngel (247594) | about 3 years ago | (#35781952)

I think I covered that in the "They suck at being EVIL" part. Perhaps you failed to read my entire post?

Re:Embrace, Extend, ? (1)

BitZtream (692029) | about 3 years ago | (#35781984)

like what they did with Kerberos.

The way MS extended Kerberos was 100% within the specifications of Kerberos. It was designed to do EXACTLY what Microsoft did.

The problem is that most implementations of Kerberos in the Unix world were based on the same broken implementation that didn't actually handle the specification properly. Get your facts straight.

Re:Embrace, Extend, ? (1)

robmv (855035) | about 3 years ago | (#35781842)

A lot of people knew that when they decided to hide the http:// portion of the URL bar, the real reason was to be able to push SPDY without users noticing. The problem: they told us users don't need to see that, blah blah blah. Not that I have anything against an efficient enhancement to HTTP, but please, there's no need to hide the real reason.

Re:Embrace, Extend, ? (1)

LordLimecat (1103839) | about 3 years ago | (#35782246)

If they're using SPDY on google.com, it's STILL an HTTP GET. Don't believe me? Go there with Chrome's developer tools open to the Network tab. Check the headers.

Re:Embrace, Extend, ? (1)

Lennie (16154) | about 3 years ago | (#35782624)

SPDY just multiplexes HTTP requests over TLS ('SSL', as used by HTTPS) or, in theory, plain TCP.
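A toy illustration of what "multiplexing" buys: each chunk is tagged with a stream ID so several requests can share one connection without blocking each other. This is not the real SPDY wire format, just the idea:

```python
# Toy multiplexer: interleave chunks from several request "streams" into
# one framed sequence, then demultiplex them back by stream ID.
# (Hypothetical framing for illustration, not SPDY's actual frames.)

def mux(streams):
    """Interleave chunks from several streams into (stream_id, chunk) frames."""
    frames = []
    pending = {sid: list(chunks) for sid, chunks in streams.items()}
    while any(pending.values()):
        for sid, chunks in pending.items():
            if chunks:
                frames.append((sid, chunks.pop(0)))
    return frames

def demux(frames):
    """Reassemble each stream from the tagged frames."""
    out = {}
    for sid, chunk in frames:
        out.setdefault(sid, []).append(chunk)
    return out

streams = {1: [b"GET / ", b"HTTP/1.1"], 3: [b"GET /img ", b"HTTP/1.1"]}
framed = mux(streams)
assert demux(framed) == streams  # round-trips losslessly
```

Because frames carry stream IDs, a slow response on one stream doesn't stall the others, which is the contrast with HTTP/1.1's one-response-at-a-time ordering.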

sigh, this is really sad. (-1)

Anonymous Coward | about 3 years ago | (#35781702)

sigh, this is really sad.

I used to read Slashdot religiously... all through my early 20s and even until very recently, until I realized that there are other sites out there that get their news so much sooner than Slashdot. You guys are so far behind the ball it's not funny.

I came back today, for my first visit in a few months, and your front-page news is so dated it made me laugh. What happened, Slashdot? You used to be cool.

This is the end, I'm so sorry, but you're dead

Re:sigh, this is really sad. (0)

Anonymous Coward | about 3 years ago | (#35781874)

And nothing of value was lost.

Re:sigh, this is really sad. (0)

Anonymous Coward | about 3 years ago | (#35781886)

Are we completely pathetic?

Re:sigh, this is really sad. (1)

i-linux123 (2003962) | about 3 years ago | (#35782012)

Must be a specimen of the nearly extinct species "diggeratus". I, for one, am excited to have spotted one out in the wild.

Have no page load problems (4, Interesting)

fermion (181285) | about 3 years ago | (#35781710)

My pages load plenty fast. What I notice is that when a page calls out to Google Analytics, the load process stops while waiting for the server. There was a time when pages would load partial content and then go for the ads. Now, many pull the ads and analytics first. This would be good if the ad servers were fast, but they seem to be getting slower. Since Google serves so many ads, it seems within its power to make the web faster by making the ads faster. Perhaps, like MS, they want the web slow for all other browsers, so Chrome seems so much faster.

Re:Have no page load problems (3, Informative)

MasterEvilAce (792905) | about 3 years ago | (#35781810)

This is the fault of bad webmasters. There is a newer version of the Google Analytics code that allows asynchronous loading, meaning the analytics script loads while everything else is loading.

Re:Have no page load problems (2)

sockonafish (228678) | about 3 years ago | (#35781814)

That's just bad code. The author of the page should at the very least be putting the call to Google Analytics below the footer. Preferably, they'd make it a callback to document.ready().

So let me get this straight (0)

Anonymous Coward | about 3 years ago | (#35781850)

We aren't sure if this keyboard posted by Beverloo is actually a screenshot of the Chrome keyboard.

"Old Media" now simply comments on "New Media".
"New Media" now simply plagiarizes from blogs.
Whoever is making the next product: I'll throw together some fake screenshots and call them "leaked photos". I suggest you all do the same in your own inventive ways.

Re:Have no page load problems (1)

Charliemopps (1157495) | about 3 years ago | (#35781956)

Not sure how you're on Slashdot and don't yet know about it.

Re:Have no page load problems (3, Insightful)

Dhalka226 (559740) | about 3 years ago | (#35782028)

Some people choose not to block ads.

Re:Have no page load problems (0)

Anonymous Coward | about 3 years ago | (#35782284)

Some people choose not to block ads.

Then they don't really have a right to complain about ads.... Or is this like when a girl complains about something and wants sympathy, not a solution?

Re:Have no page load problems (0)

Anonymous Coward | about 3 years ago | (#35782430)

LOL and now we listen to them complain about the foibles of internet advertising. Classic...

Re:Have no page load problems (1)

pz (113803) | about 3 years ago | (#35782010)

I also notice much of my latency is due to DNS lookup. I've never understood why DNS lookups aren't locally cached by default. Even a cache with a 10-minute timeout would speed things up a lot (and, really, how often does any web site change their IP address?).

Re:Have no page load problems (3, Informative)

gpuk (712102) | about 3 years ago | (#35782352)

Normally, DNS lookups *are* locally cached by default... if you're on Windows, try running ipconfig /displaydns

The problem might be with your upstream resolver(s). If you use your ISP's resolvers, maybe they are overloaded? Or if you are using a non-ISP upstream cache, maybe it's sparsely populated? Either of these would make initial lookups slow.

You could give Google's public resolvers a try and see if they improve your lookup times.

Re:Have no page load problems (2)

AndrewBuck (1120597) | about 3 years ago | (#35782364)

I have noticed the same thing in my house. The DNS server we get from Qwest can take as much as 10-15 seconds to resolve queries sometimes (not all the time, but when it happens it's a major pain). I have dnsmasq running on my Ubuntu box (which uses OpenDNS to resolve cache misses instead of the Qwest DNS server). This makes cache misses faster than they would have been anyway, and cache hits take 0 ms. I have switched all of the computers in the house (both Windows and Linux boxes) over to resolve through this machine, and that works very nicely. An added benefit is that if I want to change from OpenDNS to something else, I only have to change it in one place. I definitely recommend doing this.

As the parent post mentions, websites don't change IP often enough for this to be a problem, so it makes sense to cache DNS for at least some length of time. Dnsmasq seems to honor the TTLs in the results it gets from upstream. Does anyone here know if there is a way to tell it to keep all entries for at least some length of time (e.g. 1 day) before considering the info stale? This would not only speed up lookups but would further reduce load on the upstream DNS server.
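Whether dnsmasq exposes such an option, the behaviour being asked for (a TTL floor on cached entries) is easy to sketch. The upstream resolver here is faked so the example needs no network; a real version would query the upstream DNS server:

```python
import time

# Sketch of a caching resolver with a TTL floor. `resolve` is an injected
# upstream lookup (name -> (ip, ttl)); real code would do a DNS query.
class FloorCache:
    def __init__(self, resolve, min_ttl=86400):
        self.resolve = resolve      # upstream lookup function
        self.min_ttl = min_ttl      # keep entries at least this long (1 day)
        self.cache = {}             # name -> (ip, expiry timestamp)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and hit[1] > now:
            return hit[0]           # cache hit: no upstream round trip
        ip, ttl = self.resolve(name)
        self.cache[name] = (ip, now + max(ttl, self.min_ttl))
        return ip

calls = []
def fake_upstream(name):
    calls.append(name)
    return "192.0.2.1", 300         # upstream advertises a 5-minute TTL

c = FloorCache(fake_upstream)
c.lookup("example.org", now=0)
c.lookup("example.org", now=3600)   # an hour later: still within the floor
assert calls == ["example.org"]     # upstream was queried only once
```

Note gpuk's caveat in the reply: overriding the domain owner's TTL like this can break their load-balancing and maintenance assumptions.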


Re:Have no page load problems (1)

gpuk (712102) | about 3 years ago | (#35782608)

>Does anyone here know if there is a way to tell it to keep all entries for at least some length of time (e.g. 1 day) before considering the info stale?

AFAIK, you can only override TTL values if you use a broken or modified resolver. Also, it is generally a bad idea to second-guess the domain owner's intentions (e.g. upping the TTL will probably break their load-balancing/maintenance assumptions).

Re:Have no page load problems (0)

Anonymous Coward | about 3 years ago | (#35782452)

Uh, what sort of ass-backwards browser do you have? Opera, Chrome, Firefox and Safari all cache DNS (Chrome even prefetches it), and ... oh, I see. Well, use a browser that's made for 2011, then?

Re:Have no page load problems (1)

blincoln (592401) | about 3 years ago | (#35782476)

"I've never understood why DNS lookups aren't locally cached by default."

As far as I know, they are. Are you using a web proxy? Because if so, unless you are also using a proxy autoconfig ("PAC") javascript file, you are implicitly delegating DNS lookups to that proxy. That may be why you're seeing some DNS lookup latency.

"(and, really, how often does any web site change their IP address?)."

Among other things, websites hosted by CDNs (Akamai, etc.) give different IP addresses to different clients (or even the same client) constantly for load-balancing and geographic optimization.

Let's get this out of the way (0)

Anonymous Coward | about 3 years ago | (#35781726)

[Insert reason on why Google is "evil" for doing this here]

Re:Let's get this out of the way (3, Insightful)

CharlyFoxtrot (1607527) | about 3 years ago | (#35782162)

I'll take a stab at it:

Everything is sent through an encrypted channel, making it difficult to filter out ads before they hit the client (with Privoxy, for example).
No caching ("Since we're proposing to do almost everything over an encrypted channel, we're making caching either difficult or impossible." -Protocol Draft [chromium.org]) means you'll be served "fresh" ads every time.

So it looks like this would be good news for Google's core business.

Re:Let's get this out of the way (1)

Lennie (16154) | about 3 years ago | (#35782654)

SPDY does not make ads any more or less cacheable; any browser that does or would implement SPDY already handles caching for HTTPS correctly (it adheres to what the creator of the web page specified in the headers).
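Lennie's point is that cacheability is decided by the response headers, not by the transport. A deliberately oversimplified freshness check on a Cache-Control value (real caches also consider Expires, Vary, validators, etc.):

```python
# Minimal sketch: the server's Cache-Control header, not the protocol
# carrying it, decides whether a response may be cached.

def is_cacheable(cache_control):
    """Very simplified: True only for a positive max-age without no-store/no-cache."""
    directives = [d.strip() for d in cache_control.lower().split(",")]
    if "no-store" in directives or "no-cache" in directives:
        return False
    for d in directives:
        if d.startswith("max-age="):
            return int(d.split("=", 1)[1]) > 0
    return False

assert is_cacheable("public, max-age=3600") is True
assert is_cacheable("no-store") is False
assert is_cacheable("no-cache, max-age=3600") is False
```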

Detailed info on SPDY (5, Informative)

NevarMore (248971) | about 3 years ago | (#35781750)

Re:Detailed info on SPDY (0)

Anonymous Coward | about 3 years ago | (#35782362)

It looks like a good idea.

One question I have is whether it addresses cacheability of objects.

For example, many sites use dozens of servers to serve up content and will sprinkle those around dynamic pages. This gives them load balancing through the use of a dynamic page. It also gets them around the limits built into many browsers (which you can configure, but most people don't) on the number of connections per server. So one time I hit a page I get a.someserver.org, the next time I get b.someserver.org. Even though they both served up exactly the same static content, my cache treats it as two different things.

There are many hacks out there to "figure it out", but you need to do that for each site, as each one does it slightly differently from the others. YouTube is one that comes to mind.

It would be nice if they could hint back to us that 'a.someserver.org' and 'b.someserver.org' are really the same thing.
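The cache-fragmentation problem described above fits in a few lines: an HTTP cache keys on the full URL, so identical content behind two hostnames is fetched and stored twice (the hostnames are the poster's hypothetical ones):

```python
# Sketch: a URL-keyed cache cannot tell that a.someserver.org and
# b.someserver.org serve the same bytes, so it fetches both.

cache = {}
fetches = 0

def get(url, origin_fetch):
    global fetches
    if url not in cache:
        fetches += 1                  # cache miss: go to the origin
        cache[url] = origin_fetch(url)
    return cache[url]

same_body = lambda url: b"static.js contents"  # both hosts serve identical bytes
get("http://a.someserver.org/static.js", same_body)
get("http://b.someserver.org/static.js", same_body)
assert fetches == 2                   # fetched twice despite identical content
assert len(cache) == 2                # and stored twice
```

A content-addressed hint from the server (as the poster wishes for) would let the cache collapse the two entries into one.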

Re:Detailed info on SPDY (1)

Carewolf (581105) | about 3 years ago | (#35782410)

Interesting. I see nothing in the technical documentation that would lead it to be significantly faster than modern HTTP.

It has exactly the same overhead for establishing connections, since it uses TCP, and HTTP has no additional connection overhead. It compresses HTTP headers, which might help a few percent on small requests, but not much. It allows multiple requests within the same TCP connection, but then so does HTTP 1.1 with pipelining.

I only see sources of more bugs. The short version is that SPDY is HTTP wrapped in an additional SPDY framing protocol, wrapped in a stream protocol (TCP), wrapped in a datagram protocol (IP). And what we gain is compressed HTTP headers and a requirement to support multiple requests and multiple streams.

I would much rather see something based on SCTP.
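Carewolf's "few percent" estimate is easy to probe: request headers are highly repetitive text, so even generic zlib (SPDY's header compression was zlib-based) shrinks them substantially. The header block below is made up but typical:

```python
import zlib

# Rough sketch of what header compression buys on a single request.
# The headers are an invented but representative browser request.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.1\r\n"
    b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
    b"Accept-Encoding: gzip,deflate\r\n"
    b"Accept-Language: en-US,en;q=0.8\r\n"
    b"Cookie: session=abc123; prefs=dark\r\n"
    b"\r\n"
)

compressed = zlib.compress(headers)
assert len(compressed) < len(headers)   # repetitive text compresses well
print(len(headers), "->", len(compressed), "bytes")
```

The bigger win in practice comes from a shared compression context across requests on one connection, since nearly identical headers repeat on every request.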

Does it speed up "Waiting for ad.doubleclick.net"? (0)

Anonymous Coward | about 3 years ago | (#35781752)

Because this causes more slowdowns than anything else.

Why people don't upgrade standards all the time (1)

Tei (520358) | about 3 years ago | (#35781808)

Well... creating or changing standards is risky. You break things, stuff stops working, people get angry.
There are always ways to cut corners and optimize everything without touching the standard.
Then you see that the standard is really inappropriate for the problem, and that a simple change to it could unlock freedom and speed.
So you create a new protocol. But you find that the protocol breaks a lot of routers and proxies (very old, buggy, crappy, undocumented proxies).
Then you see someone getting the same speed you managed with your new standard, using the old standard in a new way.
So you drop your new standard, because it has a lot of problems and doesn't really bring anything new to the table.
Then you learn to respect standards a lot more. Like I do.

So only once in a blue moon does this story have a happy ending. Let's hope this is one of those times.

Re:Why people is not upgrading standards all the t (1)

poetmatt (793785) | about 3 years ago | (#35781994)

Where the hell do you come up with this?

Standards are always being tweaked and changed; what matters most is whether the companies seeking those changes do so ethically. While a lot of companies do a lot of unethical shit, the standards (and the requirements of standards) do tend to speak for themselves. If this were some "Chrome-only" feature, they wouldn't document it and/or open source it. If this were truly anticompetitive, you would hear the world shouting about it by now.

"will be open source" (2)

cpscotti (1032676) | about 3 years ago | (#35781824)

Whenever someone starts a project with that in mind: it means shit!
Why wasn't it open source from the start?

Look what happened to symbian...
(Well, maybe I should rtfa but I'm already killing precious time by reading slashdot so that wouldn't be nice..)

HTTP is getting old... (0)

Anonymous Coward | about 3 years ago | (#35781912)

HTTP got us a long way, but it has some serious architectural issues that make it unsuited to the modern web. We've been looking at SPDY, and while it isn't perfect, it does fix a lot of the bottlenecks that are intrinsic to HTTP. It isn't a perfect replacement, but the perfect is the enemy of the good. I'll be happy if SPDY manages to displace HTTP over the next decade or so.

Wait a Minute (2)

ZamesC (611197) | about 3 years ago | (#35781922)

Let's say, for example, that Microsoft had: 1) taken an existing web standard and made proprietary changes to it (promising to make the changes open source "in the future"), and 2) implemented those changes in IE and MSN/Bing/Live.com, making those sites faster when using IE. Wouldn't everyone here be screaming "antitrust" and demanding an SEC investigation?

Re:Wait a Minute (0)

Anonymous Coward | about 3 years ago | (#35782070)

Yes, because MS exerts monopolistic control over desktops. What Google is doing is equally dumb, but not equally illegal.

Re:Wait a Minute (0)

Anonymous Coward | about 3 years ago | (#35782084)

Your fixation with MS makes you look a bit...pathetic.

Re:Wait a Minute (1)

ZamesC (611197) | about 3 years ago | (#35782386)

Hey, Anonymous... I post ONE COMMENT in the last 8 years, and this is deemed a pathetic "fixation" on MS? (Oh yeah... and the one from 8 years ago wasn't about MS.) But thanks for reminding me why I deemed this message board a waste of time 8 years ago.

Re:Wait a Minute (0)

Anonymous Coward | about 3 years ago | (#35782088)

A proof-of-concept Apache mod [google.com], complete with source to analyze.

The whitepaper on SPDY [chromium.org], open for anyone to read and implement, patent-free.

The client code exists, open and available, in Chromium's trunk.

Please try to at least competently troll next time.

Re:Wait a Minute (1)

h4rr4r (612664) | about 3 years ago | (#35782132)

Because one company has done things like that before, the other has not. Actions speak louder than words.

Re:Wait a Minute (2)

TheSunborn (68004) | about 3 years ago | (#35782250)

That would not at all be what Google did. Google has published full documentation of the current version of the SPDY protocol. (It's linked on the announcement page, and it looks like something that will be given to the W3C once it's done.)

It is, however, still a draft, because the protocol is not finished yet.

Re:Wait a Minute (0)

Anonymous Coward | about 3 years ago | (#35782278)

Dude, people are complaining that Google are doing this as well.

I would be happy if Microsoft implemented this, hell, happier.
Anything that improves the web is a win for me.
MS were the ones who gave us XMLHttpRequest as well, remember. That was one of the few great things they enabled.

Microsoft ain't all bad, just the higher ups sadly.

SPDY means what? (2, Funny)

erroneus (253617) | about 3 years ago | (#35782112)

I am thinking Google did not learn the lesson from the SCSI acronym. Initially, the creator of SCSI wanted it to be pronounced "Sexy" and we ended up saying "Skuzzy." Obviously, Google wants this to be pronounced as "Speedy" but I can easily see this becoming "Spoddy."

And I have looked around a bit... I still can't see where SPDY is defined anywhere as to what the letters mean. I can imagine a lot of meanings... except for the Y. (Standard Protocol not aDopted Yet?)

Re:SPDY means what? (0)

Anonymous Coward | about 3 years ago | (#35782218)

? I think you're the only one who thinks "spoddy" reading that...

Original (0)

sker (467551) | about 3 years ago | (#35782150)

"SPDY supports unlimited connection streams, can prioritize and even block requests if Google determines the site is a threat to any government it happens to be seducing^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H a communication channel gets overloaded and supports header compression."

Chrome Innovates Again (1)

mcescalante (2013696) | about 3 years ago | (#35782204)

IMO this is what separates Chrome from other browsers: the fact that Google is turning out stuff like this and keeping an active development cycle going, quickly. Firefox seems to be playing catch-up with other browsers at this point; sure, it's more customizable and has a few more features, but do we see Mozilla rolling out protocols that "cut page load times in half"? (I have some doubts that it's really half, but that's just me.) Thank you, Google, for taking another step in the right direction with your browser.

1999 called... (0)

Anonymous Coward | about 3 years ago | (#35782548)

...and it wants IE5 back. Y'know, that One Cool Browser With All Those Nifty Markup And Protocol Extensions.

SPDY - server push (4, Insightful)

ThePhilips (752041) | about 3 years ago | (#35782272)

My favorite part of SPDY is server push: now advertisers can clog my internet connection and hog the browser with ads long before AdBlock kicks in. Or a hacked site could host malware and load it onto potential victims' hard drives in parallel with normal surfing. Imagination is the only limit to how it can go wrong.

For security reasons, I think SPDY is a bad thing.

And I'm personally not bothered by 1-2 s loading times.

P.S. The Chrome guys should instead have invested more time in bookmarks, to make them useful. They could start by integrating Chrome with Google Bookmarks.

Re:SPDY - server push (1)

Mr. Slippery (47854) | about 3 years ago | (#35782532)

My favorite part of the SPDY is server push: now advertisers can clog my internet channel and hog the browser with ads long before the AdBlock kicks in.

Seriously. Didn't we learn from the failure of "server push" back in the late 90s?

Open Source != Open Standard (0)

Anonymous Coward | about 3 years ago | (#35782348)

Anyone else a little uncomfortable with this?

What if Microsoft created a new protocol to speed up communication between IE9 and Bing? Slashdot would (rightfully) be outraged at the idea.

Why no outrage if Google does it? Because the code will eventually be open source? Give me a break.

This shows where Google is really heading: keep the code open source to keep the geeks happy, but keep the standards closed. This is WebM all over again: a closed standard that no one except Google was involved in creating, and Google is using its position in the market to force it on customers.

In an ideal world, standards should be both open source and openly developed. Google is better than Microsoft for open sourcing their code... but they are still closing the standards. Only Google can say how the standard will develop.

SPDY clarifications (5, Informative)

mbelshe (462412) | about 3 years ago | (#35782366)

Thanks for all the kind words on SPDY; I wish the magazine authors would ask before putting their own results in the titles!

Regarding standards, we're still experimenting (sorry that protocol changes take so long!). You can't build a new protocol without measuring, and we're doing just that - measuring very carefully.

Note that we aren't ignoring the standards bodies. We have presented this information to the IETF, and we got a lot of great feedback during the process. When the protocol is ready for an RFC, we'll submit one - but it's not ready yet.

Here are the IETF presentations on SPDY:
      http://www.ietf.org/proceedings/80/slides/tsvarea-0.pdf [ietf.org]
      https://www.tools.ietf.org/agenda/80/slides/httpbis-7.pdf [ietf.org]

I've also answered a few similar questions to this here: http://hackerne.ws/item?id=2420201 [hackerne.ws]

We love help! If you're passionate about protocols and want to lend implementation help, please hop onto spdy-dev@google.com. Several independent implementations have already cropped up, and the feedback continues to be really great.

Re:SPDY clarifications (-1)

Anonymous Coward | about 3 years ago | (#35782632)

It's not ready for an RFC, but it's ready to be deployed on Google servers and Chrome? It's ready to be running on MY computer? Looks like I'm going back to Firefox...

server knows (0)

Anonymous Coward | about 3 years ago | (#35782368)

From the whitepaper: "...Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client."

I bet Google's servers will know the client needs ads. Ad-blocking will not be as easy.
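A toy model of the pull-vs-push difference the whitepaper describes, and why ad blockers worry about it. The resource names are made up:

```python
# Pull (classic HTTP): the client must request each resource explicitly.
# Push: the server can send resources it knows the client will need,
# before they are requested. Resources and dependencies are invented.

def pull(server_resources, requests):
    """Client-initiated: only explicitly requested resources are sent."""
    return {r: server_resources[r] for r in requests}

def push(server_resources, page, deps):
    """Server-initiated: the page plus everything the server says it needs."""
    sent = {page: server_resources[page]}
    for d in deps.get(page, []):
        sent[d] = server_resources[d]
    return sent

resources = {"/": b"<html>", "/app.js": b"code", "/ad.js": b"ad"}
deps = {"/": ["/app.js", "/ad.js"]}

assert pull(resources, ["/"]) == {"/": b"<html>"}  # one resource per request
assert "/ad.js" in push(resources, "/", deps)      # arrives without being asked for
```

With pull, a blocker can simply decline to request `/ad.js`; with push, the bytes are already on the wire before the client decides, which is exactly the poster's concern.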

We use HTTP servers to pass assets... (1)

Dzonatas (984964) | about 3 years ago | (#35782374)

We use HTTP servers to pass assets in virtual world protocols, and this sounds like something we did (but expanded into the TCP layers) to combine bidirectional RESTful connections. SPDY doesn't combine the content, yet everything else we had appears to be done here. This work was described in IETF WGs, so given NOTE WELL, I hope we inspired this kind of work for wider deployment!

The reverse connection to the client, without a preceding request, is key! SPDY obviously retains some credentials, and that makes it a little easier behind firewalls.

Setting off warning bells (2)

140Mandak262Jamuna (970587) | about 3 years ago | (#35782422)

As part of the "Let's make the web faster" initiative, we are experimenting with alternative protocols to help reduce the latency of web pages.

This smells very close to the "embrace, extend, and extinguish" technique of Microsoft. Unless Google follows through by keeping the technology open and working to get it adopted into the next version of the standards, this could become the first step in Google becoming the next Microsoft.

Re:Setting off warning bells (0)

Anonymous Coward | about 3 years ago | (#35782522)

Which is all speculation that appears to be wrong, if you'd take the time to look.

Not to mention Google has nowhere near the market share required to pull off the kind of tricks MS did.


BAD (2, Insightful)

improfane (855034) | about 3 years ago | (#35782440)

I can't be the only person who thinks this is not a good thing. Now we'll have sites that have to run both technologies, with regular HTTP/TCP as a fallback, and we fragment the web browser ecosystem even more.

Thanks, Google. As much as I want HTTP to be faster, I think this approach is a bit degrading to the web... There was no standards process. It will probably now be rushed through as a standard.

Basically it's a sneaky way of making Google look faster, so you either adopt Google's tech or fall behind. It reeks of a Microsoft strategic move to me. Can't optimize the browser? Change the protocol and make an incompatible change! Well done...

Re:BAD (1)

Anonymous Coward | about 3 years ago | (#35782622)

Client-side computing power increases continually, and lots of work has been done to exploit it: browsers have been optimized and have forged into new areas of optimization, JITs for example.

You're just being too short-sighted to realize that delay for the end user is determined by two things: one is the latency for data to reach them and the round trips between client and server; the other is the client's ability to crunch the data. Bandwidth isn't the issue for modern web apps; latency is. We've simply optimized the browsers enough that the delay seen by users comes from the network, and further optimization of the browsers will do little good.

For an electronics analogy: the delays combine like capacitors in series. Capacitances in series add inversely, in other words C_total = 1/((1/C_1) + (1/C_2)), so C_total is determined predominantly by the smaller of the two.

Kinda sad for the BEEP guys (1)

GodfatherofSoul (174979) | about 3 years ago | (#35782570)

I thought BEEP was a great concept that seemed to die on the vine. When I saw "multiplexing," I figured Google had resurrected the protocol but it looks like BEEP just doesn't go far enough.

As for those six socket connections per client... wow! I never knew those kinds of resources were being devoured for every network connection.

Google BS (0)

Anonymous Coward | about 3 years ago | (#35782640)

I mean, if someone cared about page load times, they would be using Opera, the ONLY browser to fully support and ship with HTTP 1.1 pipelining enabled, which provides FAR better results across FAR more sites...

The only difference, of course, is that Opera ASA doesn't have the bottomless pits of money to tell the wider world about these things. I find it curious that it's Chrome this, Chrome that; nobody is talking about the MASSIVE bandwidth savings that Opera 11.10 is getting using WebP server-side transcoding...


12MB of browsing slashed to 3MB....
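HTTP/1.1 pipelining, as the poster describes it, just means writing several requests on one connection before reading any response; responses still come back strictly in order (the head-of-line blocking that SPDY's multiplexing avoids). A sketch of building such a batch; the hostname and paths are hypothetical:

```python
# Build the raw bytes for a pipelined batch of GET requests: all requests
# are written back-to-back on one keep-alive connection before any
# response is read. Responses must then be parsed in the same order.

def pipeline(host, paths):
    """Return the concatenated request bytes for a pipelined batch."""
    reqs = b""
    for path in paths:
        reqs += (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"\r\n"
        ).encode()
    return reqs

batch = pipeline("www.example.com", ["/", "/style.css", "/logo.png"])
assert batch.count(b"GET ") == 3    # three requests, one write, one connection
```

This saves connection setup and round trips, but one slow response still stalls everything behind it, which is the gap SPDY's per-stream framing is meant to close.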
