
HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web

timothy posted more than 4 years ago | from the sufficient-disclosure dept.

The Internet 406

grmoc writes "As part of the 'Let's make the web faster' initiative, we (a few engineers — including me! — at Google, and hopefully people all across the community soon!) are experimenting with alternative protocols to help reduce the latency of Web pages. One of these experiments is SPDY (pronounced 'SPeeDY'), an application-layer protocol (essentially a shim between HTTP and the bits on the wire) for transporting content over the web, designed specifically for minimal latency. In addition to a rough specification for the protocol, we have hacked SPDY into the Google Chrome browser (because it's what we're familiar with) and a simple server testbed. Using these hacked up bits, we compared the performance of many of the top 25 and top 300 websites over both HTTP and SPDY, and have observed those pages load, on average, about twice as fast using SPDY. That's not bad! We hope to engage the open source community to contribute ideas, feedback, code (we've open sourced the protocol, etc!), and test results."
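The core latency win the summary describes comes from multiplexing many requests over a single connection instead of queueing them on a handful of connections. A back-of-the-envelope model, with illustrative numbers only (not figures from the SPDY experiments):

```python
# Back-of-the-envelope model of why multiplexing requests over one
# connection cuts page load latency. All numbers are illustrative
# assumptions, not measurements from the SPDY experiments.

RTT = 0.08          # assumed round-trip time, seconds
RESOURCES = 40      # assumed subresource count for a typical page
PARALLEL = 6        # typical per-host connection limit in browsers

# HTTP/1.x without pipelining: each connection fetches one resource
# per round trip, with up to PARALLEL connections working at once.
http_time = (RESOURCES / PARALLEL) * RTT

# SPDY-style multiplexing: every request is issued at once on a single
# connection, so (ignoring bandwidth) responses return in about one RTT.
spdy_time = RTT

print(f"HTTP/1.x-style estimate: {http_time:.2f}s")
print(f"multiplexed estimate:    {spdy_time:.2f}s")
```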


Oh that's wonderful (5, Funny)

Anonymous Coward | more than 4 years ago | (#30077894)

Now we can see Uncle Goatse twice as fast.

Re:Oh that's wonderful (0, Offtopic)

Captain Splendid (673276) | more than 4 years ago | (#30078128)

Jeez, but the mods have trigger fingers. Note to the idiots: If parent had included a link to goatse in his post, a troll mod would be justified.

As it is, just give him a couple of funny mods and untwist your panties.

Re:Oh that's wonderful (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30078568)

At the time I viewed this, the mods had decided to give funny mods to the poster you mentioned, and to mod you as a troll... clearly, you should be insightful, and this post should be the troll.

Re:Oh that's wonderful (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30078630)

The problem is that just about any idiot can get mod points on /. It doesn't make their mods correct - just there.

Re:Oh that's wonderful (1)

masshuu (1260516) | more than 4 years ago | (#30078800)

No, you need to be modded underrated, while I get modded troll. Of course I won't be the first to see I'm modded troll, because I have a normal Chrome browser, not that fancy one.

Is he your biological uncle? (1)

spun (1352) | more than 4 years ago | (#30078158)

Or simply an older man who likes to fondle you?

Re:Is he your biological uncle? (0)

Anonymous Coward | more than 4 years ago | (#30078314)

oldermanwholikestofondleyou.cx

404: Not Found

Re:Is he your biological uncle? (1)

brainboyz (114458) | more than 4 years ago | (#30078524)

What's scary is that you got a 404 and not a NXDOMAIN.

Re:Is he your biological uncle? (1)

commodore64_love (1445365) | more than 4 years ago | (#30078816)

Here you go. The Dirty Old Man's Association - http://www.domai.com/ [domai.com] (warning nudity)

Re:Oh that's wonderful (1)

Exception Duck (1524809) | more than 4 years ago | (#30078378)

Do you have a link to your uncle's web page?

Re:Oh that's wonderful (4, Interesting)

Anonymous Coward | more than 4 years ago | (#30078636)

Before you click! (3, Funny)

courteaudotbiz (1191083) | more than 4 years ago | (#30077912)

In the future, the content will be loaded before you click! Unfortunately, it's not like that today, so I didn't make the first post...

Re:Before you click! (0)

SomeJoel (1061138) | more than 4 years ago | (#30078100)

In the future, the content will be loaded before you click! Unfortunately, it's not like that today, so I didn't make the first post...

You need to have more faith in yourself, man.

Re:Before you click! (2, Interesting)

oldspewey (1303305) | more than 4 years ago | (#30078374)

content will be loaded before you click!

Sounds like those "dialup accelerators" from back in the '90s ... the ones that would silently spider every link on the page you're currently viewing in order to build a predictive cache.
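The accelerator trick is simple to sketch: fetch a page, harvest its links, and quietly pre-fetch each one into a cache. A rough, hypothetical Python sketch of the idea (real accelerators were far more selective than this):

```python
# A rough sketch of the accelerator idea: fetch a page, harvest its
# links, and quietly pre-fetch each one into a cache. Hypothetical
# code; real accelerators were far more selective than this.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def prefetch(page_url, cache, limit=10):
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links[:limit]:
        url = urljoin(page_url, href)
        if url not in cache:
            try:
                # The dangerous part: every link is GET-ed blindly,
                # side effects and all (see the next comment).
                cache[url] = urlopen(url).read()
            except OSError:
                pass
    return cache
```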

Re:Before you click! (4, Interesting)

wolrahnaes (632574) | more than 4 years ago | (#30078508)

Which of course led to quite amusing results when some failure of a web developer made an app that performed actions from GET requests. I've heard anecdotes of entire databases being deleted by a web accelerator in these cases.

From RFC2616:

Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

        In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered “safe”. This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

        Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
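The convention the RFC describes is easy to honor in practice: keep GET handlers read-only and put anything destructive behind POST. A minimal sketch using Python's standard library, with hypothetical endpoint names:

```python
# A minimal sketch of the "safe methods" convention: GET never mutates
# state, destructive actions live behind POST. Endpoint names are
# hypothetical, not from any real application.
from http.server import BaseHTTPRequestHandler, HTTPServer

DATABASE = {"rows": ["a", "b", "c"]}

class SafeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Safe: a prefetching accelerator can hit this all day, harmlessly.
        body = repr(DATABASE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # The unsafe action is reachable only via POST, which
        # prefetchers and spiders never issue on their own.
        if self.path == "/delete-all":
            DATABASE["rows"].clear()
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SafeHandler).serve_forever()
```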

Re:Before you click! (1)

mcgrew (92797) | more than 4 years ago | (#30078442)

In the future, the content will be loaded before you click!

Wouldn't you have to have some thiotimoline [wikipedia.org] and water in your mouse for that to work? Thiotimoline ain't cheap, you know.

Re:Before you click! (0)

Anonymous Coward | more than 4 years ago | (#30078668)

There is a firefox addon to do just that. Some caching proxy servers can also do the same thing.

and faster still.. (4, Insightful)

Anonymous Coward | more than 4 years ago | (#30077928)

Remove Flash, Java applets, and ads:
20X faster!

Re:and faster still.. (3, Funny)

amicusNYCL (1538833) | more than 4 years ago | (#30078526)

You could also remove images, CSS, Javascript, and text, imagine the time savings!

Slashdot could use the help (2, Funny)

Anonymous Coward | more than 4 years ago | (#30077954)

How is this different from Web servers that serve up gzipped pages?

If only the Google engineers can do something about Slashdot's atrociously slow Javascript. Like maybe they can remove the sleep() statements.

What, just because the original poster pulls a "look at me, I did something cool, therefore I must be cool!" doesn't mean I have to go along with it.

slashdot (2, Interesting)

jDeepbeep (913892) | more than 4 years ago | (#30078302)

If only the Google engineers can do something about Slashdot's atrociously slow Javascript.

I've noticed a discernible difference in /. loadtime, in favor of Google Chrome vs FF 3.x on Mac OSX at home. And that's just the Chrome dev channel release. I was pleasantly surprised.

Re:Slashdot could use the help (4, Insightful)

Anonymous Coward | more than 4 years ago | (#30078332)

They need to start by practicing what they preach...

http://code.google.com/speed/articles/caching.html [google.com]
http://code.google.com/speed/articles/prefetching.html [google.com]
http://code.google.com/speed/articles/optimizing-html.html [google.com]

They turn on caching for everything but then spit out junk like

http://v9.lscache4.c.youtube.com/generate_204?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor&fexp=903900%2C903206&algorithm=throttle-factor&itag=34&ipbits=0&burst=40&sver=3&expire=1258081200&key=yt1&signature=8214C5787766320D138B1764BF009CF62A596FF9.D86886CFF40DB7F847246D653E9D3AA5B1D18610&factor=1.25&id=ccbfe79256f2b5b6 [youtube.com]

Most cache programs just straight up ignore this because of the '?' in there; it ends up being treated as a query rather than static data.

Then never mind the load-balancing bits they put in there with 'v9.lscache4.c.'. So even IF you get your cache to keep the data, you may end up hitting a totally different server, with the same piece of data simply served under another name. There have been a few hacks to 'rewrite' the headers and hostnames to make caching stick, but those are just hacks, and while they work they seem fragile.

The real issue is at the HTTP layer and how servers are pointed at from inside the 'code'. Instead of some sort of indirection that would make it simple for the client to say 'these 20 servers have the same bit of data', clients must assume that the data is different on every server.

Compression and Javascript speedups are all well and good, but there is a different, more fundamental problem: re-fetching data that has already been retrieved. Local network access is almost always faster than going back out to the internet. In a single-user environment this is not too big of a deal, but in a 10+ user environment it is a MUCH bigger deal.

Even the page that talks about optimization has issues:
http://code.google.com/speed/articles/ [google.com]
12 CR/LFs right at the top of the page that are not rendered anywhere. They should look at themselves first.
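One workaround proxies have used for this is normalizing cache keys: collapse the load-balanced hostnames to one canonical name and drop the volatile query parameters before looking up the object. A rough sketch of the idea; the hostname pattern and the set of parameters treated as volatile are assumptions for illustration:

```python
# A sketch of cache-key normalization: collapse load-balanced hostnames
# to one canonical name and drop volatile query parameters, so the same
# object served from different mirrors hits the same cache entry. The
# hostname pattern and the "volatile" parameter set are assumptions.
import re
from urllib.parse import urlsplit, parse_qsl, urlencode

LB_HOST = re.compile(r"^v\d+\.lscache\d+\.c\.(youtube\.com)$")
VOLATILE = {"expire", "signature", "ip", "key", "sver"}

def cache_key(url):
    parts = urlsplit(url)
    match = LB_HOST.match(parts.hostname or "")
    host = f"lscache.{match.group(1)}" if match else parts.hostname
    kept = sorted((k, v) for k, v in parse_qsl(parts.query) if k not in VOLATILE)
    return f"{host}{parts.path}?{urlencode(kept)}"

# Both mirrors normalize to the same key, so a shared cache can reuse it.
print(cache_key("http://v9.lscache4.c.youtube.com/video?id=cc&expire=1258081200"))
print(cache_key("http://v3.lscache7.c.youtube.com/video?expire=99&id=cc"))
```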

Re:Slashdot could use the help (2, Insightful)

amicusNYCL (1538833) | more than 4 years ago | (#30078576)

How is this different from Web servers that serve up gzipped pages?

Well, for one, gzipping output doesn't have any effect on latency.
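To see why, separate the two costs: compression shrinks transfer time, but the round trips per request stay fixed, and for small resources the round trips dominate. A quick illustration with assumed numbers:

```python
# Separating the two costs (assumed numbers): gzip shrinks the bytes on
# the wire, but each request still pays the same round trips, and for
# small resources the round trips dominate.
import zlib

body = b"<html>" + b"<p>hello world</p>" * 500 + b"</html>"
print(len(body), "->", len(zlib.compress(body)), "body bytes after compression")

RTT = 0.08              # assumed round-trip time, seconds
BANDWIDTH = 1_000_000   # assumed bytes/second
for label, payload in (("plain", body), ("compressed", zlib.compress(body))):
    # One RTT for the TCP handshake, one for request/response, plus transfer.
    total = 2 * RTT + len(payload) / BANDWIDTH
    print(f"{label:10s} {len(payload):5d} bytes: {total:.3f}s")
# Compression cuts only the transfer term; the fixed 2*RTT term is what
# SPDY attacks with multiplexing and fewer connections.
```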

Just turn off image loading (1, Funny)

Anonymous Coward | more than 4 years ago | (#30077998)

You can generally surf the web ten times faster, or more, if you just
1) Turn off image loading
2) Turn off Javascript
3) Turn off Java
4) Turn off plugins

yeah, yeah... I know... It's called "lynx"

Re:Just turn off image loading (2, Funny)

spun (1352) | more than 4 years ago | (#30078196)

You youngsters and your fancy text based web browsers. In my day, we used gopher, and we LIKED it!

Re:Just turn off image loading (5, Funny)

C0vardeAn0nim0 (232451) | more than 4 years ago | (#30078270)

here's an onion to hang on your belt, grandpa.

now, on a more serious note, isn't gopher a faster protocol than HTTP? could we just use it to transport HTML, pictures, etc.?

Re:Just turn off image loading (1)

shentino (1139071) | more than 4 years ago | (#30078370)

Speaking seriously, once the main page of HTML is downloaded you pretty much know already where everything goes.

Just stub it out with "loading" boxes in spots where you don't have all the content. Especially if parameters like width= and height= already fix how big the final image is going to be.

When something finishes loading, just update the layout.

Re:Just turn off image loading (but not with SPDY? (0)

Anonymous Coward | more than 4 years ago | (#30078612)

I was thinking about one of the "features" of SPDY
"To enable the server to initiate communications with the client and push data to the client whenever possible"

Would that mean a server can force the images of an HTML page on a client? If so, ignoring images will no longer help to speed up the connection.

Also, if the pictures (or Flash content) are advertisements, it is no longer so easy to simply block them with today's ad blockers.

Suspicious.... (3, Interesting)

Anonymous Coward | more than 4 years ago | (#30078010)

From the link

We downloaded 25 of the "top 100" websites over simulated home network connections, with 1% packet loss. We ran the downloads 10 times for each site, and calculated the average page load time for each site, and across all sites. The results show a speedup over HTTP of 27% - 60% in page load time over plain TCP (without SSL), and 39% - 55% over SSL.

1. Look at top 100 websites.
2. Choose the 25 which give you good numbers and ignore the rest.
3. PROFIT!

Akamai? (0)

ruiner13 (527499) | more than 4 years ago | (#30078012)

Isn't this making what Akamai does free (and likely pissing them off royally)?

Re:Akamai? (1)

epiphani (254981) | more than 4 years ago | (#30078098)

Eh, not at all. Akamai is a distribution/anycast provider. They're about the infrastructure to support large-scale websites and/or content providers with very high SLA targets, not speed up individual requests.

Re:Akamai? (4, Informative)

TooMuchToDo (882796) | more than 4 years ago | (#30078112)

No. Akamai gives boxes to ISPs that cache Akamai's customers' content closer to the ISPs' customers. Akamai then uses logic they've built into DNS to redirect requests to the appliance closest to the request.

Re:Akamai? (2, Informative)

ranson (824789) | more than 4 years ago | (#30078354)

No. Akamai offers many services and features beyond 'giving' boxes to ISPs. For instance, they have their own global CDN, unrelated to any ISP, which you can pay to have your content served across. They'll host it or reverse proxy/cache it. They can also multicast live streaming media, on-demand streaming media, etc. You get the picture. In one sentence, Akamai is a high-availability, high-capacity provider of bandwidth. And they accomplish that in a variety of ways other than just putting boxes in ISPs.

Re:Akamai? (0)

TooMuchToDo (882796) | more than 4 years ago | (#30078430)

In one sentence, Akamai is a high-availability, high-capacity provider of bandwidth. And they accomplish that in a variety of ways other than just putting boxes in ISPs.

I disagree. Level3, Cogent, Global Crossing. They are bandwidth providers. Akamai is a "best effort content delivery optimization organization".

Re:Akamai? (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30078498)

And they accomplish that in a variety of ways other than just putting boxes in ISPs.

A variety of ways, like trying to turn DNS into the directory service that it most definitely is not. So are you earning good dividends from your Akamai stock?

Re:Akamai? (0)

Anonymous Coward | more than 4 years ago | (#30078232)

It's to Akamai what a particular technology is to a particular hack.

How about telling Analytics to take a hike? (5, Insightful)

rho (6063) | more than 4 years ago | (#30078024)

And all other "add this piece of Javascript to your Web page and make it more awesomer!"

Yes, yes, they're useful. And you can't fathom a future without them. But in the meantime I'm watching my status bar say, "completed 4 of 5 items", then change to "completed 11 of 27 items", to "completed 18 of 57 items", to "completed... oh screw this, you're downloading the whole Internet, just sit back, relax and watch the blinkenlights".

Remember when a 768kbps DSL line was whizzo fast? Because all it had to download was some simple HTML, maybe some gifs?

I want my old Internet back. And a pony.

Re:How about telling Analytics to take a hike? (5, Funny)

ramaboo (1290088) | more than 4 years ago | (#30078062)

And all other "add this piece of Javascript to your Web page and make it more awesomer!"

Yes, yes, they're useful. And you can't fathom a future without them. But in the meantime I'm watching my status bar say, "completed 4 of 5 items", then change to "completed 11 of 27 items", to "completed 18 of 57 items", to "completed... oh screw this, you're downloading the whole Internet, just sit back, relax and watch the blinkenlights".

Remember when a 768kbps DSL line was whizzo fast? Because all it had to download was some simple HTML, maybe some gifs?

I want my old Internet back. And a pony.

That's why smart web developers put those scripts at the end of the body.

Re:How about telling Analytics to take a hike? (0)

Anonymous Coward | more than 4 years ago | (#30078224)

Adsense is embedded where the ads are going to be, Google Maps scripts are embedded where the map is going to be, etc. I once made a Google Maps wrapper which allowed deferred loading, but that required ugly hacks (like replacing document.write with a different implementation). If you analyze the loading times of web pages, Javascript libraries and gadgets, Google stuff in particular, are the biggest offenders, and often the page author has no choice about where to put them in the HTML code.

Re:How about telling Analytics to take a hike? (2, Informative)

93 Escort Wagon (326346) | more than 4 years ago | (#30078624)

Adsense is embedded where the ads are going to be, Google Maps scripts are embedded where the map is going to be, etc.

This doesn't have to be the case, unless you're still coding per 1997 standards. Even with CSS 1, you can put those DIVs last in the code and still place them wherever you want them to be.

It's what I do with the Google ads (text only ads, FWIW) on one of my personal sites - so the content loads first, and then the ads show up.

Re:How about telling Analytics to take a hike? (0)

Anonymous Coward | more than 4 years ago | (#30078848)

That only works with basically static layouts. In a fluid layout, the element has to be in the same structural context where it's visually going to end up. Yeah, I know, fluid layouts. How 90s.

Re:How about telling Analytics to take a hike? (0)

Rennt (582550) | more than 4 years ago | (#30078244)

And why smart web surfers block them.

Re:How about telling Analytics to take a hike? (2, Insightful)

Zocalo (252965) | more than 4 years ago | (#30078320)

That's why smart web developers put those scripts at the end of the body.

It's also why smart users filter them outright with something like AdBlock - anything that I see in the browser history that looks like a tracking/stats domain or URL gets blocked on sight. Come to think of it, I could probably clean it up and publish it as an AdBlock filter list if anyone's interested; there are only a few dozen entries on there at the moment, but I'm sure that would grow pretty quickly if it was used by a more general and less paranoid userbase.

Re:How about telling Analytics to take a hike? (3, Interesting)

causality (777677) | more than 4 years ago | (#30078782)

That's why smart web developers put those scripts at the end of the body.

It's also why smart users filter them outright with something like AdBlock - anything that I see in the browser history that looks like a tracking/stats domain or URL gets blocked on sight. Come to think of it, I could probably clean it up and publish it as an AdBlock filter list if anyone's interested; there are only a few dozen entries on there at the moment, but I'm sure that would grow pretty quickly if it was used by a more general and less paranoid userbase.

What's paranoid about insisting that a company bring a proposal, make me an offer, and sign a contract if they want to derive monetary value from my personal data? Instead, they feel my data is free for the taking and this entitlement mentality is the main reason why I make an effort to block all forms of tracking. I never gave consent to anyone to track anything I do, so why should I honor an agreement in which I did not participate? The "goodness" or "evil-ness" of their intentions doesn't even have to be a consideration. Sorry but referring to that as "paranoid" is either an attempt to demagogue it, or evidence that someone else's attempt to demagogue it was successful on you.

Are some people quite paranoid? Sure. Does that mean you should throw out all common sense, pretend like there are only paranoid reasons to disallow tracking, and ignore all reasonable concerns? No. Sure, someone who paints with a broad brush might notice that your actions (blocking trackers) superficially resemble some actions taken by paranoid people. Allowing that to affect your decision-making only empowers those who are superficial and quick to assume, because you are kowtowing to them. This is what insecure people do. If the paranoid successfully tarnish the appearance of an otherwise reasonable action because we care too much about what others may think, it can only increase the damage caused by paranoia.

Re:How about telling Analytics to take a hike? (1)

thestudio_bob (894258) | more than 4 years ago | (#30078170)

I want my old Internet back. And a pony.

You forgot to yell at the kids to get off your internet.

Re:How about telling Analytics to take a hike? (0)

Anonymous Coward | more than 4 years ago | (#30078200)

Remember when a 768kbps DSL line was whizzo fast?

I remember the good ol' days.

Oh yea, and get off my lawn!

Re:How about telling Analytics to take a hike? (1)

gbarules2999 (1440265) | more than 4 years ago | (#30078204)

I want my old Internet back. And a pony.

If Slashdot does OMG Ponies again will that satisfy your wants and needs?

Re:How about telling Analytics to take a hike? (1)

gstoddart (321705) | more than 4 years ago | (#30078422)

Remember when a 768kbps DSL line was whizzo fast?

Jeebus. I remember when my 1200 baud modem felt whizzo fast compared to my old 300 baud modem.

And, yes, I can already see the "get off of my lawn" posts below you, and I'm dating myself. :-P

Cheers

Re:How about telling Analytics to take a hike? (2, Insightful)

value_added (719364) | more than 4 years ago | (#30078556)

I want my old Internet back. And a pony.

LOL. I'd suggest disabling javascript and calling it a day.

Alternatively, use a text-based browser. If the webpage has any content worth reading, then a simple lynx -dump in 99% of cases will give you what you want, with the added bonus of re-formatting those mile-wide lines into something readable.

On the other hand, I suspect most people don't want the "old internet". What was once communicated on usenet or email in a few simple lines, for example, now increasingly appears in the form of a complex website that displays giant graphic-laden pages, replete with bad formatting and full of extraneous rubbish. And people like it!

Solving the wrong problem (5, Interesting)

Animats (122034) | more than 4 years ago | (#30078038)

The problem isn't pushing the bits across the wire. Major sites that load slowly today (like Slashdot) typically do so because they have advertising code that blocks page display until the ad loads. The ad servers are the bottleneck. Look at the lower left of the Mozilla window and watch the "Waiting for ..." messages.

Even if you're blocking ad images, there's still the delay while successive "document.write" operations take place.

Then there are the sites that load massive amounts of canned CSS and Javascript. (Remember how CSS was supposed to make web pages shorter and faster to load? NOT.)

Then there are the sites that load a skeletal page which then makes multiple requests for XML for the actual content.

Loading the base page just isn't the problem.

Re:Solving the wrong problem (4, Insightful)

HBI (604924) | more than 4 years ago | (#30078076)

IAWTP. With NoScript on and off, the web is a totally different place.

Re:Solving the wrong problem (1)

rho (6063) | more than 4 years ago | (#30078256)

With NoScript on and off, the web is a totally different place

Yes. Quite often completely non-functional, because the site requires Javascript to do anything.

Usually this is followed by an assertion that the site's developer is a clueless knob--which may be true, but doesn't help at all. This is the Web we deserve, I suppose: 6 megabit cable connections and dual-core 2.5 gigahertz processors that can't render a forum page for Pokemon addicts in under 8 seconds.

Re:Solving the wrong problem (1, Interesting)

Anonymous Coward | more than 4 years ago | (#30078166)

Then there are the sites that load massive amounts of canned CSS and Javascript. (Remember how CSS was supposed to make web pages shorter and faster to load? NOT.)

I definitely agree on this one (who wouldn't). I'd say they clearly improve the look and feel of websites, but simply making them separate files requires a separate GET, which is far slower than consolidating. Also, a lot of sites do not compress these files (both in transit, and by simply removing whitespace from their web server versions; it's fine to keep the whitespace for your personal copy, but a compression tool should always be used on those scripts/files before putting them out for the rest of the internet to download). It has a dramatic effect on speed.

Re:Solving the wrong problem (1)

shentino (1139071) | more than 4 years ago | (#30078470)

How is the separate GET slower?

If it's being properly cached then it shouldn't ask the server about it AT ALL, assuming of course that proper cacheability directives have been placed on the response.

Re:Solving the wrong problem (1)

amicusNYCL (1538833) | more than 4 years ago | (#30078678)

How is the separate GET slower?

For the same reason that it's a lot faster to transfer one 100MB file over FTP than it is to transfer 10,000 10KB files: the overhead in setting up the connection and transfer.
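Putting assumed numbers on that analogy makes the point stark:

```python
# The FTP analogy with assumed numbers: same total bytes, wildly
# different totals once per-transfer setup overhead is counted.
RTT = 0.05               # assumed setup cost per transfer, seconds
BANDWIDTH = 10_000_000   # assumed bytes/second

one_big = 100 * 1024 * 1024 / BANDWIDTH + RTT
many_small = 10_000 * (10 * 1024 / BANDWIDTH + RTT)

print(f"one 100MB file:      {one_big:6.1f}s")
print(f"10,000 x 10KB files: {many_small:6.1f}s")
# The 10,000 setup round trips alone contribute about 500 seconds.
```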

Re:Solving the wrong problem (2, Funny)

BlueBoxSW.com (745855) | more than 4 years ago | (#30078176)

So if Google sped up the non-ad web, they would have more room for their ads?

SNEAKY!!

Re:Solving the wrong problem (4, Funny)

Monkeedude1212 (1560403) | more than 4 years ago | (#30078220)

I think you mean SNKY

Re:Solving the wrong problem (1)

cream wobbly (1102689) | more than 4 years ago | (#30078242)

How am I going to wipe up this mess now? And ... oh, my shirt!

But can't you see how SPeeDY will solve ALL these? (1)

Colin Smith (2679) | more than 4 years ago | (#30078282)

?

No. Neither can I. It will let them *push* adverts at you in parallel though... *before you asked for them*

Google wanting more efficient advert distribution... No, never...

Re:Solving the wrong problem (2, Insightful)

Yoozer (1055188) | more than 4 years ago | (#30078294)

Remember how CSS was supposed to make web pages shorter and faster to load? NOT.)

What, you think after the first load that CSS file isn't cached in any way? Inline styles slow down every load, CSS just the first. CSS was supposed to make styling elements not completely braindead. You want to change the link colors from red to blue? With inline styles, enjoy your grepping. You're bound to forget some of 'em, too.

Bitching about ad loading times and huge JS libraries? Sure, go ahead. CSS? No, that just makes you look silly.

Re:Solving the wrong problem (1)

mea37 (1201159) | more than 4 years ago | (#30078298)

So... when you try to load slashdot, the requests that fetch the content don't get rolling until the request that fetches the ad finishes... and SPDY allows all of the requests to be processed concurrently so the content doesn't have to wait for the ad...

How is that solving the wrong problem again?

Re:Solving the wrong problem (1)

DaveV1.0 (203135) | more than 4 years ago | (#30078766)

Actually, what he said was

advertising code that blocks page display until the ad loads.

Which means that even though the desired page may already have been retrieved, the page content will not display until the ads have been downloaded. If the ad server is slow, then the page will load slowly, even using SPDY.

Re:Solving the wrong problem (0)

Anonymous Coward | more than 4 years ago | (#30078324)

But Google is the ad server...

Re:Solving the wrong problem (1)

Cyner (267154) | more than 4 years ago | (#30078342)

Don't forget the servers that are overloaded, or have poorly written code. For an easy example, check out HP's bloated website. Each page has relatively little content compared to the load times. It's all in the backend processing, which must be massive, seeing as it takes half a second to several seconds for the server to process requests for even simple pages.

As the OP said, they're solving the wrong problem. It's not a transport issue, it's a design issue. And many websites are rife with horrible design [worsethanfailure.com].

Re:Solving the wrong problem (0)

Anonymous Coward | more than 4 years ago | (#30078380)

Remember how CSS was supposed to make web pages shorter and faster to load? NOT.

I don't remember that ever being one of the goals of CSS. I thought it was about separating presentation from content.

Re:Solving the wrong problem (3, Insightful)

shentino (1139071) | more than 4 years ago | (#30078434)

CSS can make things shorter and faster if they just remember to link to it as a static file.

You can't cache something that changes, and anything, like CSS and Javascript, that's caught in the on-the-fly generation of dynamic and uncacheable text in spite of actually being static, is just going to clog up the tubes.

In fact, thanks to slashdot's no-edits-allowed policy, each comment itself is a static unchangeable snippet of text. Why not cache those?

Sending only the stuff that changes is usually a good optimization no matter what you're doing.

CSS and javascript themselves aren't bad. Failing to offlink them, and thus make them cacheable, however, is.
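Making static assets cacheable is mostly a matter of sending explicit cacheability headers. A minimal sketch using Python's standard library server, purely for illustration:

```python
# A sketch of the fix: serve CSS/JS as static files with explicit
# cacheability headers so browsers and proxies can keep them. Built on
# Python's stdlib server purely for illustration.
from http.server import SimpleHTTPRequestHandler, HTTPServer

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        if self.path.endswith((".css", ".js", ".png", ".gif")):
            # Static assets: cache for a day anywhere along the path.
            self.send_header("Cache-Control", "public, max-age=86400")
        else:
            # Dynamic pages (e.g. the comments): always revalidate.
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CachingHandler).serve_forever()
```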

Re:Solving the wrong problem (0)

Anonymous Coward | more than 4 years ago | (#30078560)

It is so very telling that we're discussing this on a web site that almost can't be scrolled on a 1.6GHz Atom processor, on occasion triggers the runaway script dialog on the homepage and is hardly usable without Javascript either.

Re:Solving the wrong problem (1)

Idbar (1034346) | more than 4 years ago | (#30078618)

I agree with you; most of the time they address the wrong problems because they want to avoid being blamed. Doing research on TCP congestion control mechanisms, I realized that ISPs pushed all the problems towards the borders by over-dimensioning networks. Now core network traffic remains low, while home routers can't handle the traffic and drop packets due to ridiculous access speeds.

Besides, I want to be able to take advantage of the Internet without requiring 2+ cores and battery draining GHz of speed.

Cloud gaming (1)

should_be_linear (779431) | more than 4 years ago | (#30078050)

Everything plays together nicely for 'cloud gaming' startups. This will solve, at least to some extent, one of their hardest problems, for free. That is, unless Google itself is after the exact same market. They never mentioned how Chrome OS is supposed to provide gaming to users...

Re:Cloud gaming (1)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#30078322)

Google has never explicitly mentioned it (at least to my knowledge); but I don't think that it is rocket surgery to infer the likely possibilities.

For basic casual flash stuff, there will almost certainly be flash support (since Adobe seems to at least be promising to get off their ass about reasonably supporting non-Wintel platforms). In the longer term, Google's work on making javascript really fast will, when combined with SVG or WebGL, allow flash-level games to be produced with stock web technologies.

For native binary stuff, Google's quiet-but-interesting NaCL [google.com] project seems like a likely candidate, most probably using ordinary web technologies and the Chrome browser as the desktop UI, but with cached NaCL lumps for applications that can't be done any other way. One might also expect to see Courgette [chromium.org] used to efficiently update those cached NaCL components.

For games beyond the capability of the hardware that ChromeOS will typically be running on (since it seems to be aimed at the weak-'n-cheap end of the market), I'd assume that one of two things will happen: if the various "game streaming" offerings that are sprouting up turn out to actually work reasonably well, one or more will probably end up being available for ChromeOS (either purchased and integrated, or because ChromeOS supports third-party NaCL objects). If they end up being laggy crap, Google will probably just ignore the problem, reasoning that everybody's slow and cheap hardware suffers from the same fundamental limitations in that area, and so the lack of more sophisticated games isn't a huge issue.

If I use it (0, Offtopic)

overlordofmu (1422163) | more than 4 years ago | (#30078054)

Will I successfully be able to first-post?

Nawlinwiki is a fucking bastard (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30078058)

So is PeterSymonds, J.delanoy, Pmdrive1061, Pathoschild and Tnxman301

Vandalize Wikipedia today.

Application Layer... (2, Interesting)

Monkeedude1212 (1560403) | more than 4 years ago | (#30078068)

Doesn't that mean that both the client and the server have to be running this new protocol to see the benefits? Essentially, either one or the other is still going to be using HTTP if you don't set it up on both, and it's only as fast as the slowest piece.

While a great initiative, it could be a while before it actually takes off. To get the rest of the world running on a new protocol will take some time, and there will no doubt be some kinks to work out.

But if anyone could do it, it'd be Google.

Re:Application Layer... (1)

coolsnowmen (695297) | more than 4 years ago | (#30078264)

A plugin gets it into something like Firefox. Then, as long as there is a way for a web server like Apache to accept both kinds of requests (HTTP or SPDY), it shouldn't be that hard: you aren't storing your web pages (static or dynamic) in a different format, so it shouldn't be much work to add the [Apache] module once it is written.

Re:Application Layer... (0)

Anonymous Coward | more than 4 years ago | (#30078756)

Doesn't that mean that both the client and the server have to be running this new application to see the benefits of this? Essentially either one or the other is still going to be using HTTP if you don't set it up on both, and its only as fast as the slowest piece.

While a great initiative, it could be a while before it actually takes off. To get the rest of the world running on a new protocol will take some time, and there will no doubt be some kinks to work out.

That's what the gopher people said

april fools! (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30078070)

yeah, we all believe you

First Post! (0, Offtopic)

FreshlyShornBalls (849004) | more than 4 years ago | (#30078096)

Would this make the obligatory "first post" happen even quicker?

All the parentheses in the summary... (1)

Ironchew (1069966) | more than 4 years ago | (#30078102)

Am I the only one imagining a ventriloquist controlling a snarky dummy that counters all the points in the summary with dubious half-truths?

First Post ! (-1, Offtopic)

feufeu (1109929) | more than 4 years ago | (#30078118)

...if i had used SPDY...

Cool.... but it's not http (4, Insightful)

Colin Smith (2679) | more than 4 years ago | (#30078178)

So which ports are you planning to use for it?

Not a terribly new concept. (5, Informative)

ranson (824789) | more than 4 years ago | (#30078210)

AOL actually does something similar to this with their TopSpeed technology, and it does work very, very well. It has introduced features like multiplexed persistent connections to the intermediary layer, sending down just object deltas since last visit (for if-modified-since requests), and applying gzip compression to uncompressed objects on the wire. It's one of the best technologies they've introduced. And, in full disclosure, I was proud to be a part of the team that made it all possible. It's too bad all of this is specific to the AOL software, so I'm glad a name like Google is trying to open up these kind of features to the general internet.
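The "object deltas since last visit" trick builds on plain HTTP conditional GETs: the client offers the timestamp of its cached copy, and the server can answer 304 Not Modified instead of resending the body. A client-side sketch of the mechanism (not AOL's actual implementation):

```python
# A client-side sketch of conditional GET (not AOL's implementation):
# offer the timestamp of the cached copy; a 304 reply means "unchanged",
# so the body never crosses the wire again.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

cache = {}  # url -> (last_modified_header_value, body)

def fetch(url):
    headers = {}
    if url in cache:
        headers["If-Modified-Since"] = cache[url][0]
    try:
        resp = urlopen(Request(url, headers=headers))
        body = resp.read()
        cache[url] = (resp.headers.get("Last-Modified", ""), body)
        return body
    except HTTPError as err:
        if err.code == 304:
            # Nothing changed since last visit; reuse the cached copy.
            return cache[url][1]
        raise
```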

OT Re:Not a terribly new concept. (1)

netsharc (195805) | more than 4 years ago | (#30078550)

Congrats on being the only interesting post so far. Everybody else is just complaining about the bloat in sites nowadays, which is valid, but I guess unavoidable. I just realized some sites might have 200+ inline elements, and the combined HTTP headers (plus TCP, etc. overhead) on those aren't trivial, so this technology will surely help. Oh well, that's IT, isn't it: Intel builds faster CPUs, and Microsoft builds bloatier software. I installed "Windows Live Mail" and "Windows Live Messenger" for a friend yesterday, and these 2 pieces of software (mail, and chat) take up 100 MB. 100 MB!

Yeah, right... but WHY?!? (2, Insightful)

51M02 (165179) | more than 4 years ago | (#30078278)

I mean, reinventing the wheel, well, why not; this one is old, and let's say we have done all we could with HTTP...

But why, WHY would you call it something stupid like SPDY?!? It's not even an acronym (or is it?).

It sounds bad, and it will be years (a decade?) before it is well supported... but why not. Wake me when it's ready for production.

I guess they're starting to get bored at Google if they're trying to rewrite HTTP.

Re:Yeah, right... but WHY?!? (1)

shentino (1139071) | more than 4 years ago | (#30078512)

Reinventing the wheel is just fine if the first wheel isn't actually round enough...or has proprietary axle interfaces such that only one kind of wagon can be put on them.

Re:Yeah, right... but WHY?!? (1)

layer3switch (783864) | more than 4 years ago | (#30078586)

I think Google's motivation is self-interest, not the end user's interest. Reducing load and time to serve will benefit content providers, not end users. However, it's a bit odd and immature for Google to fix something that is NOT broken. Or maybe I'm too old to think like the people at Google...

Re:Yeah, right... but WHY?!? (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#30078880)

I strongly suspect that whether or not HTTP is "broken" is largely a matter of perspective. For classic website serving, HTTP works pretty well. Not perfectly; but easily well enough that it isn't worth replacing.

If, though, your business model largely depends on creating webapp UIs that are good enough to compete with native local UIs, HTTP's latency and other issues are going to strike you as a fairly serious problem (particularly since the future is very likely going to involve a lot more clients connecting wirelessly via cell networks, where latency is already utter shit). Since that is pretty much exactly Google's situation, their motive seems pretty clear.

As for the "odd and immature" bit, if they had tried to roll this out as some sort of Web 2.0 version of the old walled-garden protocol setups (like the old "Microsoft network", before MSN became a normal ISP), then that would have been very odd and very immature. As it is, though, they've rolled out a potentially interesting project that fixes some problems that bug them, under a liberal OSS licence. That seems like a fairly reasonable and inoffensive activity.

While we're at it ... (4, Interesting)

RAMMS+EIN (578166) | more than 4 years ago | (#30078336)

While we're at it, let's also make processing web pages faster.

We have a semantic language (HTML) and a language that describes how to present that (CSS), right? This is good, let's keep it that way.

But things aren't as good as they could be. On the semantic side, we have many elements in the language that don't really convey any semantic information, and a lot of semantics there isn't an element for. On the presentation side, well, suffice it to say that there are a _lot_ of things that cannot be done, and others that can be done, but only with ugly kludges. Meanwhile, processing and rendering HTML and CSS takes a lot of resources.

Here is my proposal:

  - For the semantics, let's introduce an extensible language. Imagine it as a sort of programming language, where the standard library has elements for common things like paragraphs, hyperlinks, headings, etc. and there are additional libraries which add more specialized elements, e.g. there could be a library for web fora (or blogs, if you prefer), a library for screenshot galleries, etc.

  - For the presentation, let's introduce something that actually supports the features of the presentation medium. For example, for presentation on desktop operating systems, you would have support for things like buttons and checkboxes, fonts, drawing primitives, and events like keypresses and mouse clicks. Again, this should be a modular system, where you can, for example, have a library to implement the look of your website, which you can then re-use in all your pages.

  - Introduce a standard for the distribution of the various modules, to facilitate re-use (no having to download a huge library on every page load).

  - It could be beneficial to define both a textual, human readable form and a binary form that can be efficiently parsed by computers. Combined with a mapping between the two, you can have the best of both worlds: efficient processing by machine, and readable by humans.

  - There needn't actually be separate languages for semantics, presentation and scripting; it can all be done in a single language, thus simplifying things.

I'd be working on this if my job didn't take so much time and energy, but, as it is, I'm just throwing these ideas out here.

Re:While we're at it ... (1)

MichaelSmith (789609) | more than 4 years ago | (#30078528)

Well, probably the ultimate way forward was the HotJava browser, which was just the JDK applet viewer running an applet that could display a web page, with the capability to load new classes to display other content. Unfortunately this idea is more Microsoft's wet dream than Sun's, and nobody wants to trust Microsoft to that extent.

Simple, kludgy ASCII-based protocols help to keep the web open.

Re:While we're at it ... (0)

Anonymous Coward | more than 4 years ago | (#30078536)

"For the semantics, let's introduce an extensible language"

Let me remind you that XHTML 2 failed due to this thing. Introduce a dozen of new libraries by random parties every month and you have a broken web.

"have a library to implement the look of your website"
And you should really stidy a bit more about CSS

The thing is, that the web is NOT for programmers only, hypertext languages have to be stupid, so that humans can read them, computers need not binaries to understand them, they can parse text since... forever?

Re:While we're at it ... (0)

Anonymous Coward | more than 4 years ago | (#30078592)

Something else that grinds my gears: why is HTTP pull-only?

It was a good first try. Why do we persist with this?

Re:While we're at it ... (0)

Anonymous Coward | more than 4 years ago | (#30078818)

somewhat like this failure

www.tin-tags.org

Don't be evil. Be swift and speedy. (1)

PDX (412820) | more than 4 years ago | (#30078460)

Then the customers will empty their wallets twice as fast!

The question is - why? (0)

Anonymous Coward | more than 4 years ago | (#30078466)

Do we need a faster http? Really?

And the whole mission statement for this thing takes me back to the days of WAP. In fact all of the optimisation stuff has already been done as part of WSP - but hey, go ahead and re-invent the wheel.

Re:The question is - why? (1)

amicusNYCL (1538833) | more than 4 years ago | (#30078742)

Do we need a faster http? Really?

Of course not. HTTP as it exists now is perfect and sublime, and people will still be using this exact implementation over the next thousand years.

SPDY is technically nonsensical (2, Insightful)

Anonymous Coward | more than 4 years ago | (#30078480)

"Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue)"

WTF, you do realize that TCP is a head-of-line blocking protocol, right? You can layer whatever the hell you want into a TCP channel and it's still bound to TCP's shortcomings. If Google really wanted to be productive they would leverage SCTP streams rather than reinventing crap that will never be optimal anyway... haha, they even list this under "previous approaches" as if it's somehow "legacy".
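The head-of-line blocking point is easy to model: lose one early TCP segment and every multiplexed stream stalls until the retransmit arrives, because TCP delivers bytes strictly in order. A toy model with assumed numbers:

```python
# A toy model of head-of-line blocking (assumed numbers): multiplex
# several streams over one TCP connection, lose one early segment, and
# every stream stalls until the retransmit fills the hole, because TCP
# hands bytes to the application strictly in order.
RTT = 0.10          # assumed round-trip time, seconds
SEGMENTS = 20       # segments needed to finish all multiplexed streams
SEG_TIME = 0.005    # assumed serialization time per segment

no_loss = SEGMENTS * SEG_TIME
# With one early loss, the receiver buffers everything after the hole
# and can deliver nothing to any stream for roughly one extra RTT.
with_loss = SEGMENTS * SEG_TIME + RTT

print(f"no loss:        {no_loss * 1000:.0f} ms")
print(f"one early loss: {with_loss * 1000:.0f} ms (every stream waits)")
# SCTP-style independent streams would stall only the stream that lost data.
```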

"Exclusively client-initiated requests. "
Nonsense, this was done in Netscape 4.x.

"Uncompressed request and response headers."
"Redundant headers"

gzip anyone?

"Optional data compression. Content should always be sent in a compressed format"

It's nice that Google thinks it can dictate to operators.

If Google really wanted to help speed up the fricking web they would discontinue AdSense and Google Analytics, which add extra RTTs to god knows what percentage of the entire web.

The real problem is all the commercial **CRAP** and too few selfless operators working to help people without expecting anything in return.

I've never had to wait to bring up a Wikipedia page.

Cell phones (1)

nexxuz (895394) | more than 4 years ago | (#30078490)

This sounds like it would be perfect for cell phone browsers.

A novel idea (3, Interesting)

DaveV1.0 (203135) | more than 4 years ago | (#30078640)

How about we don't use HTTP/HTML for things they were not designed or ever intended to do? You know, that "right tool for the right job" thing.

Twice as fast doesn't justify it (1)

edelbrp (62429) | more than 4 years ago | (#30078820)

The nice thing about standards is that there are so many to choose from!

Why in the world implement a new standard whose purpose is to speed up the web, yet which only does so 2x under certain conditions? To be taken seriously, it would have to be orders of magnitude faster, but that's a huge hurdle because the root of the problem isn't the HTTP protocol, but what's happening on the web server (no pipelined connections? slow DB? uncompressed content? sloppy, inefficient coding?) and the end users' bandwidth. The one thing SPDY has going for it is compressing headers and eliminating redundant headers, but that's a small gain really.

In any case, you could simply wait and things will get naturally faster w/o new protocols because servers generally get faster and users' bandwidth increases. And by the same token the benefits of a new in-between protocol would diminish.

Cool stuff (1)

gozu (541069) | more than 4 years ago | (#30078858)

SPDY sounds like a really cool open source project if you ask me. Sure, it's not as cool as replacing TCP and HTTP completely but I bet I'm not the only one who's checking out the white paper and the implementation of the algorithms.

SSL for everything? (1)

colfer (619105) | more than 4 years ago | (#30078874)

It says the goal is

To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.

But the testing is on both TCP and SSL/TCP, since:

SSL poses other latency and deployment challenges. Among these are: the additional RTTs for the SSL handshake; encryption; difficulty of caching for some proxies. We need to do more SSL tuning.
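Those extra handshake round trips are easy to observe directly: time the bare TCP connect, then the TLS handshake layered on top. A small measurement sketch; the host is just an example:

```python
# Timing the costs directly: bare TCP connect vs. the TLS handshake
# layered on top of it. The host is just an example.
import socket
import ssl
import time

HOST = "www.google.com"

t0 = time.time()
sock = socket.create_connection((HOST, 443))
t1 = time.time()
tls = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
t2 = time.time()

print(f"TCP connect:   {(t1 - t0) * 1000:.0f} ms")
print(f"TLS handshake: {(t2 - t1) * 1000:.0f} ms  <- the latency penalty SPDY accepts")
tls.close()
```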
