
SPDY Not As Speedy As Hyped?

timothy posted more than 2 years ago | from the but-it-stays-late-on-weekends dept.

Firefox 135

Freshly Exhumed writes "Akamai's Guy Podjarny reveals after testing: SPDY is different than HTTP in many ways, but its primary value comes from being able to multiplex many requests/responses from client to server over a single (or few) TCP connections. Previous benchmarks tout great benefits, ranging from making pages load 2x faster to making mobile sites 23% faster using SPDY and HTTPS than over clear HTTP. However, when testing real world sites I did not see any such gains. In fact, my tests showed SPDY is only marginally faster than HTTPS and is slower than HTTP."


ARRIBA !! ARRIBA !! (0)

Anonymous Coward | more than 2 years ago | (#40351143)

Speedy Gonzales has come to save the day !!

Different benchmarking environments? (-1)

Anonymous Coward | more than 2 years ago | (#40351147)

The benchmark environment probably used a certain special program that totally cleaned up their system and increased their speed...

Re:Different benchmarking environments? (5, Funny)

dmacleod808 (729707) | more than 2 years ago | (#40351171)

I use Mycleanpc.bulshit for all my benchmarking needs!

Re:Different benchmarking environments? (0, Offtopic)

Cylix (55374) | more than 2 years ago | (#40351305)

I use it to bury dead hookers!

Re:Different benchmarking environments? (0)

Anonymous Coward | more than 2 years ago | (#40352693)

Or vagina

Re:Different benchmarking environments? (-1)

Anonymous Coward | more than 2 years ago | (#40351269)

My PC came through with flying colors when no one else could!

Re:Different benchmarking environments? (-1)

Anonymous Coward | more than 2 years ago | (#40351605)

Why resort to slashdot - Yahoo has all the Answers:

my clean pc is not a scam technicly,but when they say free only the scan is free(the actual cleaning isnt),i havent used it my self but based on other programs like this the my clean pc program is only partially free.

also my cleanpc prob can clean and detect most vireses but isnt as accurate as standered antivirus programs.

if ur looking for a good antivirus program thats free i reccomend avg free.

and if ur lookign for a good free registry cleaner and file cleaner go to what dylan said above but ccleaner isnt for viruses.

Not so fast...YET (1, Insightful)

jampola (1994582) | more than 2 years ago | (#40351179)

You're not going to see the potential of SPDY before we have environments (browsers, CPU and your internet speed) that can take full advantage of it. Only in the most recent version of Firefox did we see SPDY support.

What's the moral of all this? It's early days yet. Let's talk in a few years when the rest of us catch up.

Re:Not so fast...YET (5, Insightful)

bakuun (976228) | more than 2 years ago | (#40351263)

You're not going to see the potential of SPDY before we have environments (browsers, CPU and your internet speed) that can take full advantage of it. Only in the most recent version of Firefox did we see SPDY support.

SPDY does not depend at all on CPUs or your "internet speed". It does depend on the browser (with both Firefox and Chrome supporting SPDY now) and, critically, the server. That last is also why the article author did not see much of a speedup - most content providers don't support SPDY yet. Going to non-SPDY servers and believing that it will evaluate SPDY for you is absolutely ridiculous.

Re:Not so fast...YET (5, Informative)

Anonymous Coward | more than 2 years ago | (#40351317)

I came here to say something like "read the article, the guy is from Akamai and would know to only use servers that serve SPDY - such as many of the Google properties". But then, I read the fine article (blog) and realized the guy doesn't know enough to do that and just used "the top 500 sites" - which means a very large chunk of them don't know what SPDY is and he only used it between himself and his proxy. Great test that was. So your point is well taken. Bogus test means bogus results.

Re:Not so fast...YET (5, Interesting)

Anonymous Coward | more than 2 years ago | (#40351397)

No, he proxied the sites through a SPDY proxy, but that's not really a good way to test. Most sites shard domains to improve performance. Unfortunately, that basically destroys SPDY's pipelining advantage. The author tried to compensate by doing simple sub-domain coalescing (which he admits significantly increased SPDY performance where it applied) but that's just too coarse of an approach, as sharding is rarely ever restricted to subdomains. He also doesn't understand how browser parsing and loading works, and specifically that script execution and resource loads can be deferred (which is another key to keeping SPDY's pipeline full).

Basically, his tests showed that SPDY isn't magic faster dust. You will need to modify your site a bit if you want to really see advantages from it. And, you may see a minor performance degradation on an HTTP site that's unoptimized or heavily optimized in a way that doesn't play well with SPDY. However, if your site is optimized correctly, SPDY is still a big win over HTTP/HTTPS.

Re:Not so fast...YET (1)

Anonymous Coward | more than 2 years ago | (#40351445)

You are not a very good reader then. He put a proxy in front of the web sites that implements the protocol.

Re:Not so fast...YET (5, Informative)

dave420 (699308) | more than 2 years ago | (#40351465)

And unless that proxy used SPDY to talk to the servers, or had them entirely cached, that means nothing.

What is the big deal with SPDY? (1, Interesting)

bussdriver (620565) | more than 2 years ago | (#40352519)

From what I read about SPDY it doesn't sound like a big benefit justifying a change in protocols.

HTTP pipeline support has been around for over a decade now and I'm unaware of the extent of its usage, but it produced real benefits back when I was using it in Firefox and Apache about a decade ago. SPDY does pipelines; well, so did HTTP: OPTIONALLY.
I've read arguments about the benefits of pipelines; been there, done that - it is not new. When you have a scalable solution you CAN'T run everything through 1-2 pipelines on 1 big server; if it's not CPU limits, it's bandwidth limitations.

HTTP Deflate encoding has been a bit of a mess (thank you, Microsoft), but I've found huge benefits to gzipping static content on the server. SPDY does gzip; well, so did HTTP: OPTIONALLY. Are there benefits to gzipping all your bandwidth? NO, because most of it is JPEG and PNG images. HTTP is mixed mode; I admit it has additional overhead (but it's not a huge deal in page loading speed.)
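
A minimal sketch of that trade-off, with hypothetical payloads (exact ratios will vary): repetitive text compresses to a fraction of its size, while already-compressed image-like data barely budges:

import gzip
import random

# Hypothetical payloads: repetitive HTML text vs. incompressible, already-compressed image-like bytes.
html = b"<div class='item'><a href='/product?id=1'>Product 1</a></div>\n" * 500
image_like = bytes(random.Random(0).getrandbits(8) for _ in range(30000))

for name, payload in [("html", html), ("image-like", image_like)]:
    packed = gzip.compress(payload)
    print(f"{name}: {len(payload)} -> {len(packed)} bytes ({100 * len(packed) / len(payload):.0f}% of original)")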

SPDY has too much unnecessary encryption which wastes power and CPU time.

The #1 problem I've seen is EXTERNAL resources, usually AD SERVERS and TRACKERS/social networks. DNS is a huge speed loss even today with my own DNS caching server that bypasses my ISP, which has purposely slow DNS. I've also noticed plenty of LARGE image files, not optimized or even oversized for the page. My newer browsers wait for images of unknown sizes, whereas previously the page would visibly reflow; CSS also noticeably slows page rendering. This will likely get worse as people start including larger images as soon as iOS browsers utilize them... since they separate logical resolution from actual resolution (like scalable fonts do.) CSS3 also gives us 150KB+ fonts that must be downloaded before page rendering. "Web 2.0 / AJAX" sites could benefit from web sockets, since that seems to be the only way we are going to do intelligent server interaction. I won't even get into all the massive JavaScript libraries bloating everything. SPDY is not going to help with that stuff.

I VOTE TO KEEP HTTP 1.1 and work on HTTP 2.0 based upon UDP or TCPv2 for the future. We could use something for open connections; web sockets is too much power (and risk) just to solve the stateless problem.

Re:What is the big deal with SPDY? (5, Informative)

rekoil (168689) | more than 2 years ago | (#40352679)

1. HTTP Pipeline support proved very difficult to implement reliably; so much so that Opera was the only major browser to turn it on. It can be enabled in Chrome and Firefox but expect glitches. By all accounts SPDY's framing structure is far easier to reliably implement.

2. With SPDY, it's not just the content that's compressed but the HTTP headers themselves. When you look at the size of a lot of URLs and cookies that get passed back and forth, that's not an insignificant amount of data. And since it's text, it compresses quite well (a rough sketch at the end of this post shows how well).

3. SSL is required for SPDY because the capability is negotiated in a TLS extension. Many people would argue that if this gets more sites to use SSL by default, that's a Good Thing.

4. If you're running SPDY, the practice of "spreading" site content across multiple hostnames, which improves performance with normal HTTP sites, actually works against you, since the browser still has to open a new TCP connection for each hostname. This is an implementation issue more than an issue with the protocol itself; I expect web developers to adjust their sites accordingly once client adoption rates increase.

5. The biggest gain you can get from SPDY, which few have implemented, is the server push and hint capability; this allows the server to send an inline resource to a browser before the client knows it needs it (i.e. before the HTML or CSS is processed by the browser).

But as someone else has pointed out, the author's test isn't really valid, as he didn't test directly against sites that support SPDY natively; he went through a proxy.

The website I work for supports SPDY, and the gains we've seen are pretty close to the ~20-25% benchmarks reported by others. As many have pointed out, this author's methodology is way broken. I'd recommend testing against sites that are known to support SPDY (the best-known are Google and Twitter), with the capability enabled and then disabled (you can set this in Firefox's about:config; Chrome requires a command line launch with --use-spdy=false to do this, though).
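
On point 2, header compression: because headers repeat almost verbatim across requests on a connection, a persistent compression context shrinks them dramatically. A minimal sketch with made-up headers (SPDY's actual scheme also seeds zlib with a preset dictionary, which this skips):

import zlib

# Hypothetical, but typical, request headers for one connection.
headers = (
    b"GET /static/img/logo.png HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36\r\n"
    b"Accept: image/png,image/*;q=0.8,*/*;q=0.5\r\n"
    b"Cookie: session=abcdef0123456789; prefs=dark; tracking=xyz\r\n\r\n"
)

compressor = zlib.compressobj()   # one shared context for the whole connection, as SPDY does
total_raw = total_wire = 0
for i in range(20):               # 20 similar requests over the same connection
    raw = headers.replace(b"logo.png", b"icon%02d.png" % i)
    total_raw += len(raw)
    total_wire += len(compressor.compress(raw) + compressor.flush(zlib.Z_SYNC_FLUSH))

print(f"{total_raw} header bytes -> {total_wire} bytes on the wire")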

Re:What is the big deal with SPDY? (-1)

Anonymous Coward | more than 2 years ago | (#40353875)

The reason Chrome added pipelining support is that Opera, with pipelining on by default, had much faster page load times for both mobile and desktop. And in Firefox you could change a single option in about:config, or get an 'accelerator' add-on that did that, and you would have much faster page load times than Chrome. And it's very important to Google that users switch to Chrome, since then they can mold the user experience any way they want (i.e. more tracking, more advertising, less privacy).

Pipelining works just about everywhere, and provides basically the same benefits. You'll never see a SPDY vs pipelining claim from Google because in practice SPDY doesn't really improve on it.

Re:What is the big deal with SPDY? (2)

Bengie (1121981) | more than 2 years ago | (#40352699)

HTTP 1.1 pipelining does not allow more than one command to be outstanding. HTTP allows you to re-use the connection for your next command, but does not allow multiple commands at the same time.

HTTP1.1 is actually quite horrible for modern networks/computers. Choosing between HTTP1.1 and 1.0 is choosing between a giant douche and a turd sandwich. UDP won't happen because of the lack of congestion avoidance, which should be left up to the OS, not the app.

The biggest issue with HTTP is that it does not allow multiplexing communication without opening up more connections, which is hard on servers and firewalls, and messes with TCP congestion detection.

Re:What is the big deal with SPDY? (0)

Anonymous Coward | more than 2 years ago | (#40352959)

Wait, I thought the whole point of pipelining was multiple requests at a time. If that isn't what pipelining is, then WTH is pipelining?

Re:Not so fast...YET (1)

Wierdy1024 (902573) | more than 2 years ago | (#40353395)

It is approximately valid. He put a bandwidth simulator between him and his proxy.

His comment about the average site requiring ~ 30 different SPDY connections seems excessive though. I suspect this is why he's seeing such bad results. Maybe he is assuming no benefit from the removal of domain sharding which providers would likely do if they rolled out SPDY.

Re:Not so fast...YET (1)

buddyglass (925859) | more than 2 years ago | (#40353923)

SPDY does not depend at all on CPUs or your "internet speed".

CPU is mostly irrelevant, true, but the characteristics of one's network connection are likely very relevant to the sort of speedup one should expect to see from SPDY. My understanding is that the greater the latency between client and server the greater the benefit the client is likely to experience from using SPDY.

Re:Not so fast...YET (1)

Samantha Wright (1324923) | more than 2 years ago | (#40351297)

Personally I'm a little sceptical about the testing methodology:

For a proxy, I used the Cotendo CDN (recently acquired by Akamai). Cotendo was one of the early adopters of SPDY, has production-grade SPDY support and high performance servers. Cotendo was used in three modes – HTTP, HTTPS and SPDY (meaning HTTPS+SPDY).

Surely that means that the proxy would have to download at least some of the pages from non-SPDY servers on demand, rendering this entire thing suspect? He said that he ran 5 replicates, but no attempt is offered at explaining why SPDY should be slower than plain ol' HTTP, only why it might not be faster. I could be wrong, but it looks like [ietf.org] the protocol is more concise on average even for a single-page request. Maybe Cotendo just has a bad implementation?

Re:Not so fast...YET (2, Informative)

Anonymous Coward | more than 2 years ago | (#40351553)

The reason SPDY doesn't help much is that your average web site now requires connections to a dozen other domains (each with only one or two requests), meaning that consolidating the requests to the main domain into a single connection just isn't that beneficial.

dom

Re:Not so fast...YET (1)

Samantha Wright (1324923) | more than 2 years ago | (#40352917)

Yes, but that shouldn't slow things down beyond 100% of HTTP's speed... should it?

Re:Not so fast...YET (0)

Anonymous Coward | more than 2 years ago | (#40352091)

SPDY adoption depends on Internet Explorer, which will NEVER support it.

Single domain? (1)

ThePhilips (752041) | more than 2 years ago | (#40351193)

SPDY optimizes on a per-domain basis. In an extreme case where every resource is hosted on a different domain, SPDY doesn’t help.

So the whole CDN thing has to be redone for SPDY to deliver on the promises?

Re:Single domain? (1)

Skinkie (815924) | more than 2 years ago | (#40351227)

Like the CDN had to be redone for the opendns stuff (non-geographic queries). And the HTTPS stuff had to be redone because Google thought FalseStart was a great idea :)

Re:Single domain? (2, Informative)

Anonymous Coward | more than 2 years ago | (#40351543)

This is not at all true. There were numerous stories about how Google worked with CDNs to ensure compatibility with OpenDNS. Here's one example from last year:
http://arstechnica.com/tech-policy/2011/08/opendns-and-google-working-with-cdns-on-dns-speedup/ [arstechnica.com]

As for SSL False Start, the problem was a handful of SSL terminators that violated the spec. Unfortunately, most of the manufacturers of those devices showed no interest in making a trivial fix to their implementations, and the few that did make the fix didn't bother to deploy it to existing customers. So, everyone had to suffer because 0.5% of SSL servers were broken:
http://arstechnica.com/business/2012/04/google-abandons-noble-experiment-to-make-ssl-less-painful/ [arstechnica.com]

Re:Single domain? (0)

Anonymous Coward | more than 2 years ago | (#40351471)

No, the whole CDN thing can work fine with SPDY. You just won't see much benefit if you're using domain sharding, or are unwilling to modify your site a bit to better take advantage of SPDY's strengths.

Re:Single domain? (1)

dmomo (256005) | more than 2 years ago | (#40352301)

Maybe that work could be given to browsers?

If a page used these resources:

http://static1.js.cdn.com/resource1.js [cdn.com]
http://static2.js.cdn.com/resource2.js [cdn.com]
http://static3.css.cdn.com/resource3.css [cdn.com]

perhaps the browser could automatically look for some file, the way it does with favicon:

http://static1.js.cdn.com/spdy.txt [cdn.com]

That would pass back a file containing a spdy domain to use?
spdy.cdn.com

It would mean more requests on a single page load, I guess. But they could be cached.

Re:Single domain? (1)

rekoil (168689) | more than 2 years ago | (#40352719)

That only works if all of those hostnames resolve to the same IP addresses. The main optimization in SPDY is the elimination of the need to make multiple TCP connections simultaneously, but all of those resources must live on the same server. If the resources have different hostnames, you might be able to detect hostnames that point to the same IP and then interleave those, but I don't know if the current implementations do that yet.

Most CDNs, however, return different IPs for nearly every query, and web developers use multiple hostnames pointing to the same resources to get non-SPDY multiplexing today. This sounds like an optimization that's easy to accomplish dynamically, though (if request is SPDY, don't spread the resources across different hostnames).
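
A quick way to check which sharded hostnames are even candidates for that kind of coalescing is to resolve them and group by IP. A minimal sketch with hypothetical hostnames; as noted above, CDNs that hand out a different IP on every query will defeat it, and a shared IP is necessary but not sufficient (the certificate has to cover the names too):

import socket
from collections import defaultdict

# Hypothetical sharded hostnames referenced by one page.
hosts = ["static1.example.com", "static2.example.com", "img.example.com", "cdn.thirdparty.net"]

by_ip = defaultdict(list)
for host in hosts:
    try:
        by_ip[socket.gethostbyname(host)].append(host)
    except socket.gaierror:
        pass  # skip hostnames that don't resolve

for ip, names in by_ip.items():
    if len(names) > 1:
        print(f"{ip}: {names}  <- candidates to share one SPDY connection")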

Amazon Silk (4, Interesting)

yelvington (8169) | more than 2 years ago | (#40351221)

Amazon's Silk browser, used in the Kindle Fire, implements SPDY and a reverse proxy cache in the Amazon cloud that is supposedly capable of predictive retrieval and caching. While it occasionally is faster than HTTP, on the whole it doesn't seem to mesh well with my browsing habits and I've disabled the so-called "accelerated page loading" on my KF. Judging from comments in the Amazon forums, my experience is not unusual.

the problem with SPDY... (1, Insightful)

Anonymous Coward | more than 2 years ago | (#40351253)

...is the usual problem you have when a single self-centred company which believes itself to be awash with superstars tries to take over a standard protocol: some ideas are seductive, but much is questionable.

There are some good ideas in SPDY which have been proposed elsewhere and which could be introduced into the next version of HTTP: not repeating headers, header compression. There are some questionable ideas (consider cost and implementation complexity at both ends - some problems are illustrated by the rules re what can be multiplexed): multiplexing over a single connection, new security layer. There are some downright googlish ideas: pushing resources to the client which aren't requested.

In short, no thanks, Google.

The problem with the test ... (3, Informative)

sharepass11 (2569211) | more than 2 years ago | (#40351659)

IMHO, there is a huge issue with this test. On one hand its author claims: "However, when testing real world sites I did not see any such gains. In fact, my tests showed SPDY is only marginally faster than HTTPS and is slower than HTTP." But on the other hand, at the bottom, it states: "This means SPDY doesn’t make a material difference for page load times, and more specifically does not offset the price of switching to SSL." SSL? Wait a minute, doc ... there is something wrong with the flux capacitor! To "SPDY"-fy the sites, the guy clarifies the technique he used: "For a proxy, I used the Cotendo CDN (recently acquired by Akamai). Cotendo was one of the early adopters of SPDY, has production-grade SPDY support and high performance servers. Cotendo was used in three modes – HTTP, HTTPS and SPDY (meaning HTTPS+SPDY)." You get it? No, not the advertisement for Cotendo ... He did not test SPDY at all, but TLS+SPDY! WTF?!? Either the guy should update his article, replacing SPDY with TLS+SPDY throughout, or he should trash it and write another one with SPDY only. Then we can start to discuss some of his strange benchmark decisions... Sad that such an odd article gets so much press here :(

Re:The problem with the test ... (1)

Anonymous Coward | more than 2 years ago | (#40352739)

SPDY mandates TLS. Go read the spec.

Re:The problem with the test ... (5, Informative)

rekoil (168689) | more than 2 years ago | (#40352749)

SPDY as implemented requires SSL, since the protocol capability is negotiated by a TLS extension on port 443. There's no spec for negotiating SPDY on a standard HTTP port - it would only work if the capability was assumed on both sides before the connection (for example, URLs that start with spdy:// instead of http://, which connects to a different TCP port on the server).

Re:the problem with SPDY... (-1)

Anonymous Coward | more than 2 years ago | (#40351985)

How do you spot a Microsoft employee on /.? Spot the dumbass AC comment with irrational hate towards Google and modded up by shills.

Re:the problem with SPDY... (0)

Anonymous Coward | more than 2 years ago | (#40352315)

No. The Microsoft employee gets first post. And we're not actually talking about M$ employees, more like shady 3rd-party marketing groups. The upmodding, at least in this instance, can more easily be explained by stupidity than malice: someone posts an authoritative-sounding contrary opinion, and the next couple of mods that come by haven't read TFA...

Microsoft gets a fair amount of backlash from using shills; the ROI can't be that great. There is almost always a reason for a large marketing push; I think we're tailing off on the Win8 promotions (but I haven't read the last few /. articles on the same).

Re:the problem with SPDY... (0)

Anonymous Coward | more than 2 years ago | (#40353503)

There are some downright googlish ideas: pushing resources to the client which aren't requested.

In short, no thanks, Google.

Which aren't requested yet. The server gave the client the HTML it's processing. Clearly the server knows what the client is going to do, since the client is working off the HTML it got from the server, so I don't see the problem with the server right away giving the client the information it knows the client will need soon. The only way I can make sense of your criticism is if you prefer your site to load slowly. In that case I do agree that you shouldn't use something called SPDY.

The average is meaningless (2)

MobyDisk (75490) | more than 2 years ago | (#40351255)

The average is meaningless without the raw data. Suppose it averaged 5%: is that because all sites were 5% faster, or because one site was 500% faster and the others were 2% faster? The former would mean that SPDY is mostly useless. The latter would mean that SPDY is immensely useful, but just not in all cases.

Re:The average is meaningless (0)

Anonymous Coward | more than 2 years ago | (#40351277)

The author's presentation of stats left something to be desired. But he also commented that SPDY didn't seem to make much of a difference, so probably seeing the detailed data wouldn't materially affect his conclusion.

It's the advertising out of control (5, Insightful)

Skapare (16644) | more than 2 years ago | (#40351293)

Web browsing experiences are slowing down from advertising. But it's not an issue around the images that advertising loads. Instead, it is a combination of the extra time needed to load Javascript from advertisers (whether it is to spy on you or just to rotate ads around), and programming defects in that Javascript (doesn't play well with others). Browsers have to stop and wait for scripts to finish loading before allowing everything to run or even be rendered. You can have a page freeze in a blank state when some advertiser's Javascript request isn't connecting or loading.

The solution SHOULD be that browsers DISALLOW loading Javascript (or any script as the case may be) from more than one different hostname per page (e.g. the page's own hostname not being counted against the limit of one). This would remain flexible enough to reference scripts from a different server, or even an advertising provider, without allowing it to get excessive. Browsers should also limit the time needed to load other scripts, though this may be complicated for scripts calling functions in other scripts. Perhaps the rule should be that even within a page, scripts are not allowed to call scripts loaded from a different hostname, except the scripts from the page's hostname itself can call scripts in one.

I've also noted quite a few web sites that just get totally stuck and refuse to render anything at all when they can't finish a connection for some script URL. Site scripting programmers need to get a better handle on error conditions. The advertising companies seem to be using too many unqualified programmers.
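
As a rough way to gauge how far real pages are from that one-external-script-host limit, here is a stdlib-only sketch that counts the distinct external script hostnames in a page's static HTML (the URL is hypothetical, and scripts injected later by other scripts won't be counted):

from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

PAGE = "http://www.example.com/"  # hypothetical page to audit

class ScriptHosts(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src and urlparse(src).netloc:   # ignore inline and same-origin relative scripts
                self.hosts.add(urlparse(src).netloc)

parser = ScriptHosts()
parser.feed(urlopen(PAGE).read().decode("utf-8", errors="replace"))
external = parser.hosts - {urlparse(PAGE).netloc}
print(f"{len(external)} external script hostnames: {sorted(external)}")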

Re:It's the advertising out of control (1)

Anonymous Coward | more than 2 years ago | (#40351323)

If you have a better suggestion on how to finance all the free content out there, I'd like to hear it.

The current "everything is free" model on the web is only possible because of that advertising.

Re:It's the advertising out of control (1)

Anonymous Coward | more than 2 years ago | (#40351371)

If you had actually read the post you'd have seen that he considers advertising acceptable in itself.

Re:It's the advertising out of control (1)

Anonymous Coward | more than 2 years ago | (#40351409)

Life was fine before advertising blew up on the web; more of the content was generated by people than by content farms and shills, so the signal-to-noise ratio was much better.

Re:It's the advertising out of control (2)

Johann Lau (1040920) | more than 2 years ago | (#40351469)

The current "everything is free" model on the web is only possible because of that advertising.

It's not free, it's paid for by advertising. Hidden costs don't mean "no costs". Here's an idea: I pay my ISP for the downstream bandwidth. What if some of that money went to the owners of the servers I peruse, automatically, AKA built-in micropayment? Dunno about you, but if that would double the price of my interwebs, it would still be fucking cheap interwebs. It wouldn't be a way to earn money with traffic either, just a way to not LOSE money from it. Sure, it's sketchy and it involves no actual calculations (I wouldn't know where to start with those), who knows if it would work. But fuck it, I would be up for trying that.

Consider how public transportation is also partly paid for by ads. You pay for the ticket, yet you still get ads. However, I don't even want to know how much worse it would be if tickets cost no money at all. Just like I'd be interested to see if we can't make the internet an interesting place again, by putting advertisement, the mental product of those who HAVE no mental products, back in its place: last place. Which is still generous and you know it.

But more importantly, saying advertising shouldn't rely on horrible clusterfucks of javascript isn't even arguing against advertising. Even arguing that ads shouldn't spy, shouldn't even count views, and should simply pay for click-throughs (then they're your customers, agreeing to your privacy policy, and you can do whatever the fuck you want), wouldn't be arguing against advertisement as a whole. I bit anyway, because this topic does interest me.

By the way, if anyone here is in advertising or marketing, kill yourself.

Just a little thought. I'm just trying to plant seeds. Maybe one day, they'll take root. I don't know. You try. You do what you can. Kill yourself.

Seriously, though. If you are, do. No, really. There's no rationalisation for what you do, and you are Satan's little helpers, okay? Kill yourself. Seriously. You are the ruiner of all things good, seriously. No, this is not a joke, if you're going: "There's going to be a joke coming." There's no fucking joke coming. You are Satan's spawn, filling the world with bile and garbage. You are fucked, and you are fucking us. Kill yourself, it's the only way to save your fucking soul. Kill yourself. Planting seeds.

I know all the marketing people are going: "He's doing a joke." There's no joke here whatsoever. Suck a tail-pipe, fucking hang yourself, borrow a gun from a Yank friend - I don't care how you do it. Rid the world of your evil fucking machinations.

I know what all the marketing people are thinking right now, too. "Oh, you know what Bill's doing? He's going for that anti-marketing dollar. That's a good market, he's very smart." Oh man. I am not doing that, you fucking evil scumbags! "Oh, you know what Bill's doing now? He's going for the righteous indignation dollar. That's a big dollar. Lot of people are feeling that indignation, we've done research. Huge market. He's doing a good thing." God damn it, I'm not doing that, you scumbags. Quit putting a goddamn dollar sign on every fucking thing on this planet! "Oh, the anger dollar. Huge. Huge in times of recession. Giant market, Bill's very bright to do that." God, I'm just caught in a fucking web. "Oh, the trapped dollar. Big dollar, huge dollar. Good market, look at our research. We see that many people feel trapped. If we play to that and then separate them into the trapped dollar ..."

How do you live like that? And I bet you sleep like fucking babies at night, don't you? "What did you do today, honey?" "Oh, we made arsenic childhood food. Now, good night. Yeah, we just said, you know, is your baby really too loud? You know ... yeah, the mums will love it, yeah." Sleep like fucking children, don't you? This is your world, isn't it?

Bill Hicks (I'll never tire of posting this, until and even for a while after it came to pass)

Re:It's the advertising out of control (1)

jonbryce (703250) | more than 2 years ago | (#40352229)

Serve all the scripts from your own website rather than from loads of different third party websites? The actual bandwidth requirements of a script file are not that great, most of the time is spent trying to contact all the different websites.

Re:It's the advertising out of control (1)

Anonymous Coward | more than 2 years ago | (#40352811)

Web was around before it became the polluted cesspit it is today. We don't need a billion images per page, of which 1 is related to the page you're looking at. We don't need instantly playing media divs for TV adverts as soon as you land on a page. We don't need a billion wanker bloggers and old media portals trying to be l337 and bring in ad dollars. If they all disappeared right now, nothing of value would be lost.

Re:It's the advertising out of control (2)

swillden (191260) | more than 2 years ago | (#40351373)

The solution SHOULD be that browsers DISALLOW loading Javascript (or any script as the case may be) from more than one different hostname per page (e.g. the page's own hostname not being counted against the limit of one). This would remain flexible enough to reference scripts from a different server, or even an advertising provider, without allowing it to get excessive.

This would cause problems for lots of sites that use javascript libraries, like jQuery et al, and load them from the canonical library sources. Of course, sites could host their libraries themselves, but that would require them to keep them up to date, and it would keep them from being cached and reused across different sites.

Plus, your suggestion basically boils down to "let's break sites of clueless web developers so they'll get a clue". I don't object to that on principle, but it's not practical. What happens when a browser implements your ideas? Lots of crappy web sites don't work any more for users of that browser. What you want is the developers to fix their site. What will happen is that users will switch to a browser that works.

Re:It's the advertising out of control (-1)

Anonymous Coward | more than 2 years ago | (#40351395)

This would cause problems for lots of sites that use javascript libraries, like jQuery et al, and load them from the canonical library sources.

Those sites are retarded, so it's fine.

Of course, sites could host their libraries themselves, but that would require them to keep them up to date, and it would keep them from being cached and reused across different sites.

Cry me a fucking river.

Re:It's the advertising out of control (2)

icebraining (1313345) | more than 2 years ago | (#40351711)

Loading from a canonical source means the client is much more likely to have it in cache and skip the whole download. How the fuck is that retarded?

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40352459)

It's retarded because the tiny benefit of maybe not having to download the file sometimes is massively outweighed by defeating keep-alive and such, making the functioning of your website depend on the health of some random other server, and risking incompatible changes to the code that break your site.

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40352995)

In practice it just ends up being another way for google to get the web browsing history of every user on the web. You can block their ad services, you can block their analytics, but if you block their javascript library hosting, you've broken half the internet.

Re:It's the advertising out of control (2)

PIBM (588930) | more than 2 years ago | (#40351401)

Actually, a lot of sites refuse to load the content if the advertisement is not yet displayed, to prevent showing the content for free; they prefer you to reload the page and obtain that advertisement rather than risk losing that display.

Re:It's the advertising out of control (1)

tepples (727027) | more than 2 years ago | (#40351795)

Then why can't they include the advertisement as text in the page, rather than as a script that inserts an Adobe Flash Player object into the page?

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40352105)

The script must run to retrieve the ad contents from the ad server, whether the content is Flash, an image, or text. Until the call is made to the ad server, no one knows which ad to display. You couldn't hard-code the ad text on the page because the ads change with each page load.

Re:It's the advertising out of control (1)

PIBM (588930) | more than 2 years ago | (#40352247)

Sometimes you have multiple providers of ads and you must rotate them yourself (which is nice, as you know from whom you will request the ad), and sometimes even those providers fill in blanks for specific regions/IPs, or for too many requests from a user, by using subproviders, so the final 'page in a page in a page' can be pretty long and ugly, and totally impossible to predict at the first server level.

Re:It's the advertising out of control (1)

buddyglass (925859) | more than 2 years ago | (#40353943)

I just encountered this today, in fact. A blog I read uses Disqus to manage its comments. For some reason, on this particular blog, the comments wouldn't display until I selectively turned off blocking for certain items on the page that are part of the default list in Adblock Plus. Even then I could see the comments but not add my own. I had to create exceptions for a few additional items before the Disqus section would actually let me post.

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40351453)

The advertising companies seem to be using too many unqualified programmers.

Who says this is not the intended behaviour? Advertising companies don't give a rat's ass about the content the user is looking for, or the often-cited "user experience"; they want their ads shown, plain and simple.

Err, no (1)

gaspyy (514539) | more than 2 years ago | (#40351483)

If you take the time to analyse a modern page, you'll see that the ads usually represent a relatively small chunk.

What really slows the pages down is over-reliance on javascript frameworks. Pages that use both jQuery and another framework such as Scriptaculous are not uncommon. What's worse is that the developers often use these libraries for trivial effects, stuff that can easily be done directly in javascript. Then most pages have scripts for at least 3-4 social media sites (Facebook Like box, Twitter counter, G+ counter, etc.), a live tweet box, 5-8 different CSS files...

And then they spend a fortune on bandwidth, CDN services and faster servers, when by optimizing content they'd get more performance.

Re:Err, no (1)

KingMotley (944240) | more than 2 years ago | (#40351691)

For the sites that I've worked on, it's mostly the social media plugin crap that slows them down, with Facebook being one of the worst offenders. As such, I take great pains to load the social media stuff only after the entire page has been rendered, which helps responsiveness greatly. Multiple CSS files = negligible. Multiple javascript frameworks = negligible.

Testing in multiple versions of IE (1)

tepples (727027) | more than 2 years ago | (#40351753)

What's worse is that the developers often use these libraries for trivial effects, stuff that can easily be done directly in javascript.

Doing it directly in JavaScript would take valuable programmer time to code and test for proper HTML5 browsers and for each of the past few versions of IE.

Then most pages have scripts for at least 3-4 social media sites (Facebook Like box, Twitter counter, G+ counter, etc.), live tweet box

How would you recommend accomplishing the visibility that sharing of pages by users of social media sites gives without incurring the delay of loading those scripts?

5-8 different css files

One approach is to have one huge CSS file that covers all browsers and all parts of the site, forcing users to download a lot of data that does not pertain to a particular page. Another is to have one main CSS file for the site, one for the section, and one "fixes" file for each particular flavor of user agent. Assuming each is cached to expire a week in the future, which is better and why?

Re:Testing in multiple versions of IE (1)

Anonymous Coward | more than 2 years ago | (#40352495)

How would you recommend accomplishing the visibility that sharing of pages by users of social media sites gives without incurring the delay of loading those scripts?

Right, because users can't possibly make posts on their favourite faggot site by themselves without the help of some ridiculous time-wasting script.

Re:It's the advertising out of control (1)

DavidTC (10147) | more than 2 years ago | (#40351515)

I don't know about your solution, but you're dead right about the problem.

I work as a fairly-technically-inclined web developer, and I can't count how many times I've seen other developers say 'I can't figure out how to make my Joomla or Wordpress or whatever give out pages faster, perhaps I need some sort of caching or something'. I check, and it's a ten second load time.

And then I look at it in Firebug and point out that their HTML arrives in half a second, their images arrive in three seconds and are cached on all but the first page, and they were smart enough to include image dimensions to allow the page to render without them. The actual slowness is because they're loading a javascript file from another server that is not cached, and that script has to get downloaded, and run, and then, at that point, it goes and gets an actual ad... at which point the damn page finally renders. So there's nothing anyone can do about the speed.

Can we please figure out something besides this paradigm? I have no problem with ads, but this is insane. Could Google not set it up where you make the ads like <img id='google_ad1' src='/transparent.gif' height... />, and then asynchronously load some javascript after page load that swaps that with an ad?

All this Javascript is bad enough, but ads are specifically three times as bad simply because of the entire process of 'Connect to another server, download ad Javascript code, run that, it connects back to that server and downloads actual ads, and when it gets the actual ads the page can _finally_ render'.

Re:It's the advertising out of control (1)

KingMotley (944240) | more than 2 years ago | (#40351771)

There is nothing stopping you from doing that, David. We do that and it works quite well for the off-site content.

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40352133)

Google has had an asynchronous ad tag for nearly 8 months now. It's called the Google Publisher Tag.

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40351529)

this is why I choose to browse the web with javascript turned off, an occasionally updated mvps hosts file (with some personal extra entries like googleadservices, hulu ad server, etc) and run privoxy to strip advertising out.

pages load a lot, lot faster with javascript disabled and without most of the advertising cruft.

my browser lets me save javascript /cookie settings per site, so I turn it on for the few sites I visit regularly that require javascript/cookies to function. It's also very easy to toggle javascript/cookies and proxy (one key) so that's nice.

It is appalling how many big/popular sites rely on javascript for basic navigation and functionality. it's really pretty sad. the only thing worse is the time waiting for ad servers to load.

Re:It's the advertising out of control (1)

fa2k (881632) | more than 2 years ago | (#40351557)

It has gotten a bit better for me over the last 6 months. I used to see "Waiting for google-analytics.com" for 1-2 seconds all the time, but it doesn't happen as much now. It could be because I moved to a different country and Google has more servers there.

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40351769)

The solution SHOULD be that browsers DISALLOW loading Javascript (or any script as the case may be) from more than one different hostname per page (e.g. the page's own hostname not being counted against the limit of one).

You can already do this in Firefox. Use NoScript.

But be prepared for broken sites. Some of which try to load scripts from more than a dozen different servers (especially annoying on TV news station web sites) and it can be hard to determine which ones to turn on to get the damn site to work (I usually just leave if I can't get it; and they lose a viewer because of their stupidity).

Re:It's the advertising out of control (1)

Lazy Jones (8403) | more than 2 years ago | (#40351897)

Instead, it is a combination of the extra time needed to load Javascript from advertisers (whether it is to spy on you or just to rotate ads around), and programming defects in that Javascript (doesn't play well with others). Browsers have to stop and wait for scripts to finish loading before allowing everything to run or even be rendered. You can have a page freeze in a blank state when some advertiser's Javascript request isn't connecting or loading.

It is also a result of plain stupidity of some (major) ad server operating companies - here in Austria, we've heard things like "we try to disable caching for served media [images, swf etc.]" and we've actually seen images, animations, 200KB+ flash being loaded at every request because of this. Needless to say, the slow document.write() method of displaying ads is also still the norm ... If the W3C had any sense, it would have come up with an <AD> tag that worked like a restricted IFRAME (with mandatory caching) but could be turned on and off by the users (browsers). But like so many committees, the W3C is not interested in good technical solutions, but rather in the commercial interests of particular contributing corporations.

Re:It's the advertising out of control (0)

Anonymous Coward | more than 2 years ago | (#40352099)

The Google Publisher Tag, released a little over half a year ago, loads using ajax and does not block page loading.

http://support.google.com/dfp_premium/bin/answer.py?hl=en&answer=1650154

Re:It's the advertising out of control (1)

BronsCon (927697) | more than 2 years ago | (#40352633)

Another solution that could (should?) be implemented is: Use a locally-hosted script to load any externally-hosted scripts. Your page will load and render faster because everything required to render the page is loading from your own server, and you still get the 3rd-party-hosted shit^H^H^H^Hscripts on the page, just *after* it has rendered, so your users can, you know, actually start using the damned site.

Or maybe the proxy was the limit (2)

gstrickler (920733) | more than 2 years ago | (#40351331)

I used a Chrome browser as a client, and proxied the sites through Cotendo to control whether SPDY is used. Note that all the tests – whether HTTP, HTTPS or SPDY – were proxied through Cotendo, to ensure we’re comparing apples to apples.

Since they all ran through the same proxy, it might be the limiting factor. We would need to see tests that bypass the proxy to determine if his results have any meaning beyond that specific proxy.

Likewise, you have to ask the question, do all of the 500 sites he tested support SPDY?

Why not BEEP? (1)

Anonymous Coward | more than 2 years ago | (#40351365)

For the umpteenth time, if the main advantage of SPDY is multiplexing multiple streams over a single socket, why not use BEEP http://en.wikipedia.org/wiki/BEEP [wikipedia.org] ?

Re:Why not BEEP? (1)

rekoil (168689) | more than 2 years ago | (#40352761)

Because no one's bothered to ship a BEEP implementation in a major browser release?

Re:Why not BEEP? (0)

Anonymous Coward | more than 2 years ago | (#40352861)

Yes, because you'd need a beep capability at the server end to make use of it. Exactly like spdy does. Only if you want to add an infrastructure like these, which would you go for - an existing standard-conforming (rfc3080) implementation that isn't tied to a browser or http (beep) and therefore may be of wider use, or a new shiny with a whole load of other stuff of dubious value (e.g. push content) thrown in (spdy) from a partisan vendor (google)?

Re:Why not BEEP? (0)

Anonymous Coward | more than 2 years ago | (#40353037)

Yes, because you'd need a beep capability at the server end to make use of it. Exactly like spdy does. Only if you want to add an infrastructure like these, which would you go for - an existing standard-conforming (rfc3080) implementation that isn't tied to a browser or http (beep) and therefore may be of wider use, or a new shiny with a whole load of other stuff of dubious value (e.g. push content) thrown in (spdy) from a partisan vendor (google)?

Problems:
1) SPDY is drop-in backwards compatible. (It just wraps HTTP into a SPDY container, you can implement SPDY at the gateway and have the actual servers run unchanged talking to the gateway over HTTP which is just packed into a SPDY channel without modification)
2) A general purpose protocol is never useful as anything other than a base for implementing a more specific protocol. [Nobody "implements XML", they create a schema on top of XML and implement that]
3) BEEP is bidirectional without differentiating between client and server (according to Wikipedia) which means that it can push content as well.

Re:Why not BEEP? (0)

Anonymous Coward | more than 2 years ago | (#40353257)

Ok, I'm not a beep nor http expert, so this is interesting.

1) SPDY is drop-in backwards compatible. (It just wraps HTTP into a SPDY container, you can implement SPDY at the gateway and have the actual servers run unchanged talking to the gateway over HTTP which is just packed into a SPDY channel without modification)

I don't see why the same is not possible with beep. Indeed, I'd expect http to be wrapped in beep exactly as you suggest with spdy. I was certainly not suggesting using it directly! My bad if it seemed that way.

2) A general purpose protocol is never useful as anything other than a base for implementing a more specific protocol. [Nobody "implements XML", they create a schema on top of XML and implement that]

Exactly. Beep is an infrastructure. You build on it. See point 1.

3) BEEP is bidirectional without differentiating between client and server (according to Wikipedia) which means that it can push content as well.

Now, I don't think I can argue with that. In which case... even less need to develop a new protocol from scratch. Even more reason to build on beep directly. No? Yes?

Domain Parallelization (0)

Anonymous Coward | more than 2 years ago | (#40351481)

The tests were performed on the top 500 websites (according to Alexa). Anybody running one of these sites (slashdot comes in at number 1749) almost certainly has some extremely capable people working on their site.

So, they will already be making use of domain parallelization. Hell, I'm lazy, Google can explain it better than me:

https://developers.google.com/speed/docs/best-practices/rtt

Parallelize downloads across hostnames

Overview

Serving resources from two different hostnames increases parallelization of downloads.
Details

The HTTP 1.1 specification (section 8.1.4) states that browsers should allow at most two concurrent connections per hostname (although newer browsers allow more than that: see Browserscope for a list). If an HTML document contains references to more resources (e.g. CSS, JavaScript, images, etc.) than the maximum allowed on one host, the browser issues requests for that number of resources, and queues the rest. As soon as some of the requests finish, the browser issues requests for the next number of resources in the queue. It repeats the process until it has downloaded all the resources. In other words, if a page references more than X external resources from a single host, where X is the maximum connections allowed per host, the browser must download them sequentially, X at a time, incurring 1 RTT for every X resources. The total round-trip time is N/X, where N is the number of resources to fetch from a host. For example, if a browser allows 4 concurrent connections per hostname, and a page references 100 resources on the same domain, it will incur 1 RTT for every 4 resources, and a total download time of 25 RTTs.

You can get around this restriction by serving resources from multiple hostnames. This "tricks" the browser into parallelizing additional downloads, which leads to faster page load times. However, using multiple concurrent connections can cause increased CPU usage on the client, and introduces additional round-trip time for each new TCP connection setup, as well as DNS lookup latency for clients with empty caches. Therefore, beyond a certain number of connections, this technique can actually degrade performance. The optimal number of hosts is generally believed to be between 2 and 5, depending on various factors such as the size of the files, bandwidth and so on. If your pages serve large numbers of static resources, such as images, from a single hostname, consider splitting them across multiple hostnames using DNS aliases. We recommend this technique for any page that serves more than 10 resources from a single host. (For pages that serve fewer resources than this, it's overkill.)

To set up additional hostnames, you can configure subdomains in your DNS database as CNAME records that point to a single A record, and then configure your web server to serve resources from the multiple hosts. For even better performance, if all or some of the resources don't make use of cookie data (which they usually don't), consider making all or some of the hosts subdomains of a cookieless domain. Be sure to evenly allocate all the resources among the different hostnames, and in the pages that reference the resources, use the CNAMEd hostnames in the URLs.

If you host your static files using a CDN, your CDN may support serving these resources from more than one hostname. Contact your CDN to find out.
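
A back-of-the-envelope version of the round-trip arithmetic quoted above, deliberately idealized (no DNS lookups, TCP setup or bandwidth limits), just restating the N/X rule with optional sharding:

def sequential_rtts(resources: int, conns_per_host: int, hostnames: int = 1) -> float:
    """Idealized RTT count when `resources` are spread evenly across `hostnames`."""
    return (resources / hostnames) / conns_per_host

# The example from the quoted text: 100 resources, 4 connections per hostname.
print(sequential_rtts(100, 4))               # 25.0 RTTs from a single hostname
print(sequential_rtts(100, 4, hostnames=2))  # 12.5 RTTs when sharded across two hostnames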

umm (0)

Anonymous Coward | more than 2 years ago | (#40351491)

Why is this in the Firefox category? SPDY isn't a Firefox-only thing. It's not even a Mozilla invention.
Plus: the test is messed up. All he tested was the performance of his very own proxy.
It might also not be very surprising that SPDY, which is encrypted, can be outperformed by plain unencrypted HTTP. Bottom line: if you don't understand the technology, don't run tests on it and blow the horn about the results.

Too many connections (1)

chriswaco (37809) | more than 2 years ago | (#40351513)

SPDY solves *a* problem, but not *the* problem. The root of the problem today is that loading a simple web page requires 20 or more separate connections: images, ad networks, tracking systems, social network links, 3rd party comment systems, javascript libraries, css, etc. Somehow all of that content needs to be coalesced into fewer connections.

Re:Too many connections (1)

Lazy Jones (8403) | more than 2 years ago | (#40351809)

Somehow all of that content needs to be coalesced into fewer connections.

We did use fewer connections before CDNs, CDN-hosted JS libraries and cookie-less content domains became the norm and before web developers were advised by PageSpeed and other "authorities" to put content on multiple hosts to facilitate parallel access by browsers. Yes, it was a stupid idea to fix a minor implementation problem of browsers on the web server side.
Also, what contributes most to web page slowdown is ads and tracking code, both areas dominated by Google - who pretends to be somehow interested in speeding up the web (haha!).

Re:Too many connections (1)

rekoil (168689) | more than 2 years ago | (#40352777)

The good news there is that connections to Google's ad networks DO run over SPDY now, assuming a compatible browser.

Re:Too many connections (0)

Anonymous Coward | more than 2 years ago | (#40353799)

Except, per the article, SPDY's benefits come from opening fewer connections, and it can't do that if the hostname is different from the original request.

Re:Too many connections (1)

ls671 (1122017) | more than 2 years ago | (#40353227)

SPDY solves *a* problem, but not *the* problem. The root of the problem today is that loading a simple web page requires 20 or more separate connections: images, ad networks, tracking systems, social network links, 3rd party comment systems, javascript libraries, css, etc. Somehow all of that content needs to be coalesced into fewer connections.

You are wrong; use netstat and a modern browser to test it out. With servers configured to do HTTP request keep-alives, most browsers open a maximum of 2 connections to one server and keep them up, and everything is sent through those persistent connections.

The browser also needs to open at least one connection to any third party server without regards for the protocol used.

I see an average of 5 to 20 second timeouts on most sites. The server then waits for other requests on the SAME connection. Use telnet and watch for the "close connection" delay after entering the query:

telnet www.yahoo.com 80
Trying 98.139.183.24...
Connected to www.yahoo.com.
Escape character is '^]'.
HEAD / HTTP/1.1
host: yahoo.com

HTTP/1.1 301 Moved Permanently
Date: Sun, 17 Jun 2012 18:49:19 GMT
Location: http://www.yahoo.com/ [yahoo.com]
Vary: Accept-Encoding
Content-Type: text/html; charset=utf-8
Cache-Control: private
Age: 0
Connection: keep-alive
Server: YTS/1.20.10

Connection closed by foreign host.

Yahoo's server took about fifteen seconds to close the connection.

http://httpd.apache.org/docs/2.4/mod/core.html#keepalivetimeout [apache.org]
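
The same check can be scripted to time the keep-alive window rather than eyeballing it in telnet; a minimal sketch against the same host (actual numbers will vary with whichever load balancer answers):

import socket
import time

HOST = "www.yahoo.com"  # the host from the telnet transcript above

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: yahoo.com\r\n\r\n")
    sock.recv(4096)          # read the response headers (one chunk is enough here)
    start = time.time()      # ...then time how long the server holds the idle connection open
    while sock.recv(4096):   # an empty read means the server closed the connection
        pass
    print(f"keep-alive window was roughly {time.time() - start:.1f} seconds")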

Re:Too many connections (1)

ls671 (1122017) | more than 2 years ago | (#40353365)

The SPDY whitepaper suggests a keep-alive timeout of 500ms while most sites do 5 to 20 seconds?

No wonder why they think they are going to be so fast ;-)

Seriously though, I still see the point of this protocol, but it might have been over-hyped a bit. Established things are hard to change without revolutionary gains to be expected.

http://www.chromium.org/spdy/spdy-whitepaper [chromium.org]

Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.
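
To make the FIFO point in that quote concrete: with pipelining the client can send several requests before reading anything back, but the responses must come back in request order, so one slow response stalls everything behind it - the head-of-line blocking that SPDY's multiplexing avoids. A minimal sketch, assuming a hypothetical server with pipelining enabled:

import socket

HOST = "www.example.com"  # hypothetical server with pipelining enabled

def request(path: str) -> bytes:
    return f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\nConnection: keep-alive\r\n\r\n".encode()

with socket.create_connection((HOST, 80)) as sock:
    # Pipelining: both requests leave before any response is read.
    sock.sendall(request("/") + request("/robots.txt"))
    # The server must answer in order: "/" first, then "/robots.txt".
    data = b""
    while len(data) < 8192:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    print(data.decode(errors="replace")[:400])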

Fixing the problem on the wrong layer (3)

Lisandro (799651) | more than 2 years ago | (#40351697)

Shouldn't we be working on adopting SCTP [wikipedia.org] instead?

Re:Fixing the problem on the wrong layer (0)

Anonymous Coward | more than 2 years ago | (#40351837)

I tried installing that last night, the Brian Cram driver. When I went to load the protocol into the network stack, Windows complained that the .inf was missing something or wrong. This was the NT 6.1 x64 MSI. I haven't looked at it since.

Re:Fixing the problem on the wrong layer (1)

kangsterizer (1698322) | more than 2 years ago | (#40353657)

It's more effort; that's why people use SPDY.
Changing all the gear to support SCTP is hard, especially those pesky HP/Cisco/Juniper/you-name-it proprietary network boxes.

That's also why Google is sometimes pushing for open source software on top of network hardware, IMO.

Re:Fixing the problem on the wrong layer (1)

rb12345 (1170423) | more than 2 years ago | (#40353957)

While that's true, a standard (and popular library) for SCTP-over-UDP could be created. At most, you'd need a single well-known UDP port for inbound SCTP-over-UDP (9989 is suggested by the Internet draft [ietf.org] for this). SCTP ports would be used to distinguish between separate SCTP-using services on the server. I'm sure that the existing Linux and BSD SCTP stacks could support this with little effort. Firewalls that only permit HTTP/HTTPS would block this variant, but it would work well enough through NATs, especially if the multiple-endpoint parts of standard SCTP were left out.
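
To make the encapsulation concrete, here is a minimal Python sketch of the framing only: an SCTP common header carried as the payload of a UDP datagram sent to the draft's suggested port. The destination address and SCTP ports are placeholders, the checksum is left at zero (a real stack computes CRC32c over the whole packet), and no chunks follow, so this is not a valid SCTP exchange, only the layout:

import socket
import struct

# SCTP common header: source port, destination port, verification tag, checksum (CRC32c)
SCTP_SRC_PORT = 5000    # placeholder SCTP-level ports, distinct from the UDP port below
SCTP_DST_PORT = 5001
common_header = struct.pack("!HHII", SCTP_SRC_PORT, SCTP_DST_PORT, 0, 0)

# SCTP-over-UDP: the whole SCTP packet simply becomes the UDP payload
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(common_header, ("192.0.2.1", 9989))   # 192.0.2.1 is a documentation address
udp.close()

The well-known UDP port only gets the datagram through NATs and firewalls; telling apart different SCTP-using services on the server still happens via the SCTP ports inside the payload.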

Tor switched TBB FF to 10.0.5 ESR! (0)

Anonymous Coward | more than 2 years ago | (#40351729)

Interesting, swabbies. Prior to this feature arriving in FF 13, the Tor project switched its upgrade path from 12->13 to 12->10.0.5 ESR! Is there something in SPDY that would be a negative feature for privacy/security in the TBBs?

Re:Tor switched TBB FF to 10.0.5 ESR! (1)

allo (1728082) | more than 2 years ago | (#40353281)

Yeah. SPDY runs over SSL, which cannot be filtered by Privoxy (sitting between Firefox and Tor).

SPDYs main issue: it's not needed (1)

Lazy Jones (8403) | more than 2 years ago | (#40351993)

HTTP and HTTPS are fast enough; it's the web servers / content generation (and ads) that limit the user experience and make web pages load slowly, followed by low bandwidth in some areas. If you really want to fix old protocols that actually need fixing, go look at SMTP first.

Re:SPDYs main issue: it's not needed (0)

Anonymous Coward | more than 2 years ago | (#40352487)

You're thinking like a desktop user. On the mobile side, bandwidth is still a huge factor. SPDY forces some best practices, like compression, and adds new features like header compression and multiplexing. Of course, none of this will really matter until Safari supports SPDY, but it's a nice boost for people on the latest Android running Chrome, which supports it.
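
A quick illustration of why header compression matters on a mobile link (a sketch with made-up header values; SPDY's actual scheme uses a shared zlib context with a preset dictionary per session, which this only approximates):

import zlib

# roughly the same verbose headers get resent with every request on a page
request_headers = (b"Host: m.example.com\r\n"
                   b"User-Agent: Mozilla/5.0 (Linux; Android 4.0; Mobile)\r\n"
                   b"Accept: text/html,application/xhtml+xml\r\n"
                   b"Accept-Encoding: gzip, deflate\r\n"
                   b"Cookie: session=abc123; tracking=xyz\r\n")
twenty_requests = request_headers * 20
print(len(twenty_requests), "bytes raw ->", len(zlib.compress(twenty_requests)), "bytes compressed")

On a high-latency, low-bandwidth connection, not resending those bytes twenty times is a real saving.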

Re:SPDYs main issue: it's not needed (0)

Anonymous Coward | more than 2 years ago | (#40352717)

HTTP and HTTPS are fast enough; it's the web servers / content generation (and ads) that limit the user experience and make web pages load slowly, followed by low bandwidth in some areas. If you really want to fix old protocols that actually need fixing, go look at SMTP first.

Care to back that up with evidence? The researchers who created SPDY did have significant test suites and processes to make sure it did something beneficial and I rather doubt Google would have invested in getting it recognised as a standard if they didn't feel that it would actually help them in the long run.

For the lazy: the problem with HTTP is that it works like this: TCP SYN -> Server TCP SYN ACK -> TCP ACK -> HTTP Request Header -> Server Response -> TCP FIN (close channel). Then go back to the start and load the next image/script/frame; rinse, repeat for every single thing on the page.
SPDY was created because, through measurement of real-world sites like google.com, it was determined that the TCP handshake was ridiculously slow; tearing down and recreating connections over and over for every single image/script/iframe/whatever causes wild amounts of round-trip latency. In short, the reason large sites are often slow isn't that they have too much crap packed on them (though that is certainly true in many cases); it's that HTTP was designed to work in an environment which was 99% HTML with only the occasional image or other embedded object, not the multimedia Internet of the present. SPDY fixes this by doing the TCP handshake once, then recycling the same connection to transfer the entire site through a single TCP pipe instead.**

[Also, SSL handshakes are monstrously slow to set up as well; they're like the TCP handshake but computationally heavy as a bonus. This is why SPDY can get away with forcing SSL for everything: if you only set up the channel once, then you only need to pay the SSL setup cost once as well.]

** Yes, HTTP pipelining exists; however, it doesn't work properly. IIRC, there is a limit of 4-8 files before the connection has to be torn down and, unlike SPDY, pipelining only transfers file A, then file B, then file C. SPDY can transfer bits of A, B and C interleaved with each other at the same time, which helps saturate the connection better if the server is lagging on the PHP script that generates file A (you can receive files B and C whilst waiting for A to finish calculating; pipelining makes you wait for A to finish before it'll give you B or C).
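
A rough way to feel the per-connection cost described above (a sketch; the host and paths are placeholders, and it only demonstrates reusing one TCP connection serially, i.e. keep-alive, not SPDY's interleaved multiplexing):

import http.client
import time

HOST = "example.com"                       # placeholder host
PATHS = ["/", "/style.css", "/script.js"]  # hypothetical resources on one page

# One fresh TCP connection per request: the handshake is paid every time
start = time.time()
for path in PATHS:
    conn = http.client.HTTPConnection(HOST, timeout=10)
    conn.request("GET", path)
    conn.getresponse().read()
    conn.close()
print("new connection per request: %.2f s" % (time.time() - start))

# One persistent connection reused for all requests: the handshake is paid once
start = time.time()
conn = http.client.HTTPConnection(HOST, timeout=10)
for path in PATHS:
    conn.request("GET", path)
    conn.getresponse().read()              # drain each response before issuing the next request
conn.close()
print("single reused connection: %.2f s" % (time.time() - start))

SPDY goes a step further by letting the responses overlap on that single connection instead of coming back strictly one after the other.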

Current HTTP Speedup Tricks Hurt SPDY (5, Interesting)

_Bunny (90075) | more than 2 years ago | (#40352481)

As someone whose job it is to work on things like this, there are a few things that must be pointed out.

- SPDY runs over SSL. There isn't an unencrypted version -- note that SPDY was in fact faster than HTTPS.

- Many of the tricks used today to speed up page delivery, such as domain sharding, actually hurt SPDY's performance. SPDY's main benefit is that it opens up a single TCP connection and channelizes requests for assets inside that connection. Forcing the browser to establish a lot of TCP connections defeats this entirely, and the overhead of spinning up an SSL connection is very high. (And again, it should be noted that SPDY *WAS* faster, even if just a little bit, than standard HTTPS.)

There are other features in SPDY that today remain largely untapped, such as a server hinting to a client that it knows the client will need some content ahead of time -- giving the client something to do while it would normally be idle, waiting for the server to generate the HTML it requested. (Large DB query, or whatever.)

Web engineers are a clever and smart bunch. While it looks like there's not a lot of gain to rethinking HTTP 1.1 today, given the years of organic growth we've had and the time spent optimizing an older protocol, as new technology comes along that takes advantage of the new foundation, this will change. Give it time.

To the folks complaining that this guy doesn't know what he's doing, uh, he's a Chief Product Architect at Akamai. Yes he does. The folks at Akamai know more about web delivery than just about anyone.

- Bunny

Re:Current HTTP Speedup Tricks Hurt SPDY (0)

Anonymous Coward | more than 2 years ago | (#40353737)

As you pointed out yourself, no, he doesn't exactly. Or worse, he knowingly benchmarks SPDY the wrong way in order to make it look bad.

Maybe the reason is that SPDY and similar initiatives may make Akamai slightly less significant for small companies.

So no, one should never, ever judge people on their title, where they work, previous fame, etc. Judge them on the work they present. Nothing else.

Re:Current HTTP Speedup Tricks Hurt SPDY (0)

Anonymous Coward | more than 2 years ago | (#40353929)

So no, one should never, ever judge people on their title, where they work, previous fame, etc. Judge them on the work they present. Nothing else.

Interesting, so let's judge this blog:

https://developers.google.com/speed/articles/spdy-for-mobile

They don't compare to pipelining, which Chrome, Firefox, Opera, Safari, and the Android Browser now all use or have the option to use. The graphic they show is the absolute best case from their test, which they only identify as "one of the pages"; in fact, its load time ratio (0.6) is not even listed in the chart showing improvement.

This is either a pure PR hype piece or just really terrible research. Either way, it clearly and purposely distorts the results in favor of SPDY, yet SPDY only comes out 23% faster than non-pipelined HTTP. Doesn't it make you wonder whether SPDY would even be relevant if pipelining were enabled in these browsers?

Re:Current HTTP Speedup Tricks Hurt SPDY (0)

Anonymous Coward | more than 2 years ago | (#40353741)

Web engineers are a clever and smart bunch.

SPDY wasn't developed by seasoned web engineers, and it had very little oversight (follow the changelog from when it was added to Chrome, for instance). It was developed by two kids right out of college, one of whom couldn't hack it at Google.

Every claim from Google that SPDY is X times faster than HTTP is based on plain HTTP, not pipelining. Even the 23% faster figure for mobile (where you would expect it to really shine) comes from comparing against keep-alive without pipelining.

The reality is that pipelining helps almost as much as SPDY over a single connection, and when you add in several parallel connections there's virtually no difference. Pipelining works for pretty much every website, only failing on some oddball ones (like some bank in France, for instance). Browsers simply enabling pipelining by default, forcing the few broken sites to upgrade their software or apply some kind of bandaid, would solve the performance problem just as well. You'll read hype about 'head of line blocking', but with a few pipelined connections open, a slow resource can only block 1/N of the currently outstanding requests -- pipelining can be improved to deal with this, but even so there's no real need to.

SPDY is a complex binary protocol; for instance, it has an attempt at priorities, scheduling, and flow control built in, but it is unnecessary. Google doesn't compare to pipelining because if they said something closer to the truth, "SPDY makes the internet 2% faster than pipelining", then nobody would give a crap about it.
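
For reference, pipelining itself needs nothing exotic; a bare-bones sketch (placeholder host and paths, assuming a server that actually tolerates pipelined requests) is just writing several requests before reading any responses:

import socket

HOST = "example.com"          # placeholder; must be a server that handles pipelining
requests = (b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
            b"GET /a.png HTTP/1.1\r\nHost: example.com\r\n\r\n"
            b"GET /b.png HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

s = socket.create_connection((HOST, 80), timeout=10)
s.sendall(requests)           # all three requests leave before any response comes back
data = b""
while True:
    chunk = s.recv(4096)      # responses arrive back-to-back on the same connection
    if not chunk:             # the server closes after the last one (Connection: close)
        break
    data += chunk
s.close()
print(data.count(b"HTTP/1.1 "), "responses on one connection")   # rough count of status lines

The responses come back strictly in request order; that ordering is what the 'head of line blocking' discussion above refers to.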

Re:Current HTTP Speedup Tricks Hurt SPDY (0)

Anonymous Coward | more than 2 years ago | (#40353773)

Apologies for the bad spelling and grammar. I didn't realize I was already past the preview, so I missed the chance to proofread and edit.
