
Yahoo's YSlow Plug-in Tells You Why Your Site is Slow

CmdrTaco posted more than 7 years ago | from the like-i-need-a-plugin-to-tell-me-i-suck dept.


Stoyan writes "Steve Souders, performance architect at Yahoo, announced today the public release of YSlow — a Firefox extension that adds a new panel to Firebug and reports a page's performance score, in addition to other performance-related features. Here is a review, plus helpful tips on how to make the scoring system match your needs."


/. gets a D (4, Funny)

LoadWB (592248) | more than 7 years ago | (#19982425)

Interesting utility. Slashdot gets a D on the homepage, F on a comments page. Many media sites score Fs, mostly thanks to numerous ad and cookie sites.

Re:/. gets a D (5, Funny)

JuanCarlosII (1086993) | more than 7 years ago | (#19982447)

Even better than that, http://developer.yahoo.com/yslow/ [yahoo.com] gets a D for performance.

Re:/. gets a D (4, Interesting)

jrumney (197329) | more than 7 years ago | (#19982683)

My own site also got a 'D', so that seems to be the standard grade. Everything that matters, it got an 'A' for, except for using non-inlined CSS: it got a 'B' on the test that says you shouldn't (to reduce HTTP requests), and an N/A on the test that says you should (to take advantage of caching). Then there were a whole lot of irrelevant things that it got an 'F' for: the fact that none of my site is hosted on a distributed network; the fact that I leave the browser cache to make its own decisions about expiring pages, since I don't know in advance when I'm going to next change them; and something about ETags, where I'm not sure whether it is saying I should have more of them or I should get rid of the ones I've got.

Re:/. gets a D (1)

mazarin5 (309432) | more than 7 years ago | (#19984671)

Everything that matters, it got an 'A' for, except for using non-inlined CSS: it got a 'B' on the test that says you shouldn't (to reduce HTTP requests), and an N/A on the test that says you should (to take advantage of caching).

That seems silly. Isn't one of the advantages of having a separate CSS file that you reduce redundancy across multiple pages? Sure, it's an additional file to load - the first time.

Re:/. gets a D (3, Interesting)

daeg (828071) | more than 6 years ago | (#19985535)

It depends on the headers (server), browser, and method, actually. Under some circumstances, for instance under SSL, full copies of all files will be downloaded for every request. As HTTP headers get more complex (some browsers with toolbars, etc., plus a plethora of cookies), the HTTP request/response cycle expands. It may not seem like a lot, but multiply a .5kb request header by dozens of elements and you can quickly use up a lot of bandwidth. Firefox does a better job than Internet Explorer under SSL, but not by much unless you enable disk-based caching.

Something I would love to see is some of the headers being condensed by the browser and server. For instance, on the first request the browser sends the full headers. In the reply headers, the server would set an X-SLIM-REQUEST header with a unique ID that represents that browser configuration's set of optional headers (Accept, Accept-Language, Accept-Encoding, Accept-Charset, User-Agent, and other static headers). Further requests from that browser would then simply send the X-SLIM-REQUEST header and unique ID, and the server would handle unpacking it -- if the headers are even needed. Servers that don't supply the header would continue to receive full requests, preserving full backward and forward compatibility.
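To make the proposal concrete, here's a minimal Python sketch of how a server-side shim might expand such a compact header; the X-SLIM-REQUEST name and the ID scheme are just the hypothetical ones proposed above, not any existing browser or server feature:

    # Hypothetical sketch of the "X-SLIM-REQUEST" idea proposed above.
    # Nothing here is a real browser/server feature; it's purely illustrative.
    import hashlib

    # Server-side table: ID -> the static headers that ID stands for.
    known_header_sets = {}

    def register_headers(static_headers):
        """First request: remember the browser's static headers, hand back an ID."""
        blob = "\n".join("%s: %s" % kv for kv in sorted(static_headers.items()))
        slim_id = hashlib.sha1(blob.encode("utf-8")).hexdigest()[:12]
        known_header_sets[slim_id] = dict(static_headers)
        return slim_id

    def expand_request(request_headers):
        """Later requests: if only X-SLIM-REQUEST is sent, splice the full set back in."""
        slim_id = request_headers.get("X-SLIM-REQUEST")
        if slim_id and slim_id in known_header_sets:
            merged = dict(known_header_sets[slim_id])
            merged.update(request_headers)      # headers sent explicitly still win
            return merged
        return request_headers                  # unknown ID: fall back to full headers

The fall-through at the end is the recovery path the reply below worries about: a server that has lost its table sees an unknown ID and somehow has to ask for the full headers again.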

There are a few things you can do to reduce request sizes for web applications. mod_asis is one of the best ones. We use it as one of the last steps of our deployment process: all images are read in via a script, compressed if they are over a certain threshold, and minimal headers are added. Apache then delivers them as-is, reducing load on Apache as well as the network (the only thing Apache adds is the Server: and Date: lines). ETags and last-modified dates are calculated in advance. Also, for certain responses such as simple HTTP Moved (Location:) redirects, gzip isn't used -- gzipping the response actually *adds* to the size because the documents are so small.
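A rough sketch of that kind of pre-baking step (not the actual script described above; the paths, the gzip threshold, and the exact header set are made-up assumptions):

    # Sketch: pre-bake images into Apache mod_asis files at deploy time.
    # Paths, threshold, and header choices are illustrative assumptions.
    import gzip, hashlib, os
    from email.utils import formatdate

    SRC_DIR, OUT_DIR = "static/images", "htdocs/images"   # made-up locations
    GZIP_THRESHOLD = 10 * 1024                             # compress files over 10 KB

    def bake(name):
        src = os.path.join(SRC_DIR, name)
        with open(src, "rb") as f:
            body = f.read()
        headers = [("Content-Type", "image/png" if name.endswith(".png") else "image/gif")]
        if len(body) > GZIP_THRESHOLD:                     # following the process described above
            body = gzip.compress(body)
            headers.append(("Content-Encoding", "gzip"))
        headers += [
            ("ETag", '"%s"' % hashlib.md5(body).hexdigest()),
            ("Last-Modified", formatdate(os.path.getmtime(src), usegmt=True)),
            ("Content-Length", str(len(body))),
        ]
        with open(os.path.join(OUT_DIR, name + ".asis"), "wb") as out:
            out.write("".join("%s: %s\r\n" % kv for kv in headers).encode("ascii"))
            out.write(b"\r\n")                             # blank line ends the header block
            out.write(body)

    # Apache then serves *.asis files with "AddHandler send-as-is asis",
    # adding only its own Date: and Server: lines.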

Re:/. gets a D (1)

jrumney (197329) | more than 6 years ago | (#19988385)

Most requests will fit into a single TCP/IP packet anyway, so it's not worth complicating the HTTP protocol with a requirement that servers remember information about browser capabilities for an indeterminate time. The extra round trip for the "308 Forgot your Headers" responses that would be needed to recover from such situations would undo any savings you'd gain.

Re:/. gets a D (1)

myowntrueself (607117) | more than 6 years ago | (#19988235)

My own site also got a 'D', so that seems to be the standard grade.

*Your* site got a 'D' and *therefore* that seems to be the standard grade?

I think I see a flaw in your logic there, batman.

Re:/. gets a D (1)

jrumney (197329) | more than 6 years ago | (#19988797)

I'll give you the benefit of the doubt and assume that you filter out all +5 Funny posts, so didn't see the two ancestors to my post commenting about the D grades of both Slashdot and Yahoo. It seems like mostly A and F grades get handed out for specific tests, since you either do something already or you don't. And it also seems like any mixture of A's and F's results in an overall D grade. Hence my comment that it seems to be the standard grade (at least two of the tests are mutually exclusive, so straight A's is out).

Re:/. gets a D (4, Interesting)

mr_mischief (456295) | more than 7 years ago | (#19990311)

I've killed some time on this since it's a pretty interesting idea. It turns out there are plenty outside the D and F range. It does seem to like pages with a single Flash object and not much else, so that's bad. It also makes some pretty arbitrary decisions which don't mean squat to many sites. There are some sites that get enough traffic that speed is a factor but not so much that a content delivery network is really necessary, for example.

I skipped the actual link and score on sites that are pretty much just representative of the sites around them. I wanted to include them by name, though, to show where they fall. I've stuck mostly to main index pages, and I've noted where I've gone deeper.

A: Google [google.com] (99%), Altavista main page [altavista.com] (98%), Altavista Babelfish [altavista.com] (90%) (including upon doing a translation from English to French), Craigslist [craiglist.org] (96%), Pricewatch [pricewatch.com] (93%), Slackware Linux [slackware.com] , OpenBSD [openbsd.org] , Led Zeppelin site at Atlantic [ledzeppelin.com] (100%), supremecommander.com, w3m web browser site [w3m.org] (96%)

B: Apache.org [apache.org] (87%), the lighttpd web server [lighttpd.net] (84%), Google Maps, which also got a C once [google.com] (84% in most cases), Perlmonks [perlmonks.org] (84%), Dragonfly BSD [dragonflybsd.org] (85%), Butthole Surfers band page [buttholesurfers.com] (81%), 37 Signals [37signals.com]

C: One Laptop Per Child, [olpc.com] , ESR's homepage [catb.org] , the Open Source Initiative [opensource.org] (78%), Google News [google.com] (73%), Lucid CMS [lucidcms.net] (74%), Perl.org [perl.org] (75%), lucasfilm.com, Charred Dirt game [charreddirt.com]

D: gnu.org, The Register [theregister.co.uk] , A9 [a9.com] (66%), kernel.org [kernel.org] , Akamai [akamai.com] (64%), kuro5hin.org, freshmeat.net, linuxcd.org, Movable Type [movabletype.org] (61%), Postnuke [postnuke.com] , blogster.com, Joel on Software [joelonsoftware.com] (67%), Fog Creek Software [fogcreek.com] , metallica.com, gaspowered.com, Scorched 3D [scorched3d.co.uk] (68%), id software [idsoftware.com] (64%), ISBN.nu book search [isbn.nu]

F: MS IIS [microsoft.com] (49%), microsoft.com, msn.com, linux.com, fsf.org, discovery.com, newegg.com, rackspace.com, the Simtel archive [simtel.net] (26%), CNet Download [download.com] (29%), Adobe [adobe.com] (58%), savvis.com, mtv.com, sun.com, pclinuxos.com, freebsd.org, phpnuke.org, use.perl.org, ruby-lang.org, python.org, java.com, Rolling Stones band page [rollingstones.com] (56%), powellsbooks.com, amazon.com, barnesandnoble.com, getfirefox.com

My company's site (96%) gets an A (no, I'm not going to get it slashdotted); it's pretty simple but has a pic and some Javascript on it. Several sites I have done or have helped design with someone else get C or D ratings.

Re:/. gets a D (1)

reed (19777) | more than 7 years ago | (#19995745)

I always use CSS files by reference when the stylesheet is shared by multiple pages. You know, caching and stuff...

Re:/. gets a D (1)

chinhnt2k3 (1123697) | more than 7 years ago | (#19993541)

Crappy thing. I tried it a few times on http://developer.yahoo.com/yslow/ [yahoo.com] and got a few different results ranging from C to F.

Re:/. gets a D (2, Informative)

MinorFault (1132861) | more than 7 years ago | (#19983001)

We started with websiteoptimize here at Zillow, but Steve's tool is much more useful. His upcoming O'Reilly book is also quite good. We've taken seconds off of our user response time with it. Steve came and spoke here, and the talk was very well attended and liked by a bunch of Seattle Web 2.0 folks.

My site gets a D too (1)

grahamsz (150076) | more than 7 years ago | (#19983435)

Interesting that they rate down the comments pages on bannination.com [bannination.com] because they have stylesheets outside the document head, yet when I look at the code the stylesheets are where they are supposed to be... weird.

Re:My site gets a D too (2, Insightful)

jamie (78724) | more than 7 years ago | (#19983567)

Yeah it says the same for Slashdot's css files, which are indeed in the head. Guess that's a YSlow bug.

Re:My site gets a D too (3, Informative)

grahamsz (150076) | more than 7 years ago | (#19983853)

If you put the links to the CSS at the very top of the head section then that grade will jump from an F to an A.

I doubt moving them above the title makes any noticeable difference in the real world, though.

Expires header (1)

grahamsz (150076) | more than 7 years ago | (#19983927)

I'm sending something showing that the file is already expired (since it's completely dynamic) and apparently that still gets an F.

Not too impressed

Re:/. gets a D (1)

UbuntuDupe (970646) | more than 7 years ago | (#19983793)

Yeah, those ads that I turned off due to being a subscriber yet still see...

Right, right, "for accounting purposes". Shut up, you "anti-advertising" frauds.

Re:/. gets a D (1)

billcopc (196330) | more than 7 years ago | (#19990267)

These are all common-sense tips, but having them all automated and tallied is a great little helper. I'll most definitely be checking all my current sites with YSlow to see how my design practices hold up.

Especially for "indie" sites with small audiences, responsiveness can be a big selling point because you don't have that brand "power" to draw people in, but a snappy site will be noticed.

Can it tell me why it takes so long... (0, Offtopic)

The_Fire_Horse (552422) | more than 7 years ago | (#19982431)

... to get a first post?

Sure but (3, Funny)

loconet (415875) | more than 7 years ago | (#19982433)

I bet it doesn't actually tell you your site is being /.ed

God Smack Your Ass !! (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#19982435)



God Smack Your Ass !!

Another tool (2, Informative)

Klaidas (981300) | more than 7 years ago | (#19982437)

Web developer [mozilla.org] (a must-have) has a speed analyzing tool by default (well, more of a link to a website that does the job); I prefer to use that one. Here's an example [websiteoptimization.com] of slashdot's report.

Re:Another tool (2, Insightful)

gblues (90260) | more than 7 years ago | (#19982599)

I see your point, but keep in mind that the website server likely has a far better uplink to the Internet than you do. A plug-in like this gives you real-world performance data if you're using it on, say, a residential DSL line.

Re:Another tool (2, Informative)

Klaidas (981300) | more than 7 years ago | (#19982633)

It provides download times for all kinds of connections, from 14.4K to 1.44Mbps. Also, separate download times for individual objects.

Re:Another tool (1)

mr_mischief (456295) | more than 6 years ago | (#19989203)

If your limit is local, then it's not really reflecting the speed of your site. It's reflecting the speed of your local connection. Only when the limiting factor is the site, or when a reliably stable transfer speed has been established, can the speed of the site relative to another site be reliably tested.

Re:Another tool (4, Funny)

danbert8 (1024253) | more than 7 years ago | (#19982611)

I think you slashdotted a website efficiency report of Slashdot. Shouldn't that cause a black hole or something?

Re:Another tool (0)

Anonymous Coward | more than 7 years ago | (#19982953)

Web developer is far more useful: simply by showing you the size of all the elements, it lets you make an intelligent choice about where to optimize.

For example, YSlow seems to recommend you use inline images instead of CSS background images. Using CSS background images is a design choice that I've always made with a full understanding of the impact on page load times. Furthermore, there are techniques whereby the HTTP overhead of multiple image requests can be reduced [alistapart.com] via CSS.

That doesn't scratch the surface of what's wrong with the Yahoo tool. If I get a single email saying that YSlow gives my site a poor grade, implying that an automated tool has a better understanding of optimization or caching requirements than me - I'm going to let loose with the insults. YSlow is of benefit strictly for the clueless.

Re:Another tool (0)

Anonymous Coward | more than 6 years ago | (#19987783)

Well, yes. But there ARE a lot of clueless web people out there, so they could do a lot worse than just going with YSlow's recommendations. They really don't have a clue what they're doing.

(Realistically, though, it can highlight things or bugs you're not aware of. For example, you might have that external CSS file and think it's going to be cached, but due to a bug or something the file has a bad or missing expiry date, meaning the browser requests it each time anyway - the worst of both worlds.)

Re:Another tool (1, Interesting)

Anonymous Coward | more than 7 years ago | (#19982985)

I tried the piggiest page on my own site (and thank you for the link BTW) just out of curiosity. Note that all images are almost completely necessary (it is, after all, about visual art). And I wrote it way back in 1998. IIRC there is a reprint somewhere on K5, sans graphics.

URL: http://mcgrew.info/Art/ [mcgrew.info]
Title: Steve's School of Fine Art
Date: Report run on Wed Jul 25 09:10:42 CDT 2007

Total HTML: 1
Total HTML Images: 13
Total CSS Images: 0
Total Images: 13
Total Scripts: 1
Total CSS imports: 0
Total Frames: 0
Total Iframes: 0

Connection Rate Download Time
14.4K 384.00 seconds [wow that's six minutes! But as height and width attributes of the graphics are specified, the text loads first]
28.8K 193.50 seconds
33.6K 166.28 seconds
56K 100.97 seconds
ISDN 128K 33.00 seconds
T1 1.44Mbps 5.60 seconds

  • TOTAL_HTML - Congratulations, the total number of HTML files on this page (including the main HTML file) is 1 which most browsers can multithread. Minimizing HTTP requests is key for web site optimization.
  • TOTAL_OBJECTS - Warning! The total number of objects on this page is 15 - consider reducing this to a more reasonable number. Combine, refine, and optimize your external objects. Replace graphic rollovers with CSS rollovers to speed display and minimize HTTP requests.
  • TOTAL_IMAGES - Warning! The total number of images on this page is 13, consider reducing this to a more reasonable number. Combine, refine, and optimize your graphics. Replace graphic rollovers with CSS rollovers to speed display and minimize HTTP requests.
  • TOTAL_SIZE - Warning! The total size of this page is 491579 bytes, which will load in 100.97 seconds on a 56Kbps modem. Consider reducing total page size to less than 30K to achieve sub eight second response times on 56K connections. Pages over 100K exceed most attention thresholds at 56Kbps, even with feedback. Consider contacting us about our optimization services.
  • TOTAL_SCRIPT - Congratulations, the total number of external script files on this page is 1. External scripts are less reliably cached than CSS files so consider combining scripts into one, or even embedding them into high-traffic pages. [google ad, added later]
  • HTML_SIZE - Caution. The total size of this HTML file is 27045 bytes, which is above 20K but below 100K. With a 10K ad and a logo this means that your page will load in over 8.6 seconds. Consider optimizing your HTML and eliminating unnecessary features. To give your users feedback, consider layering your page or using positioning to display useful content within the first two seconds.
  • IMAGES_SIZE - Warning! The total size of your images is 460375 bytes, which is over 30K. Consider optimizing your images for size, combining them, and replacing graphic rollovers with CSS. [no redundant images or image rollovers here!]
  • SCRIPT_SIZE - Caution. The total size of your external scripts is 4159 bytes, which is above 4080 bytes and less than 8K. Consider optimizing your scripts and eliminating features to reduce this to a more reasonable size. [blame Google!]
  • MULTIM_SIZE - Congratulations, the total size of all your external multimedia files is 0 bytes, which is less than 4K.


I guess I flunk!

-mcgrew

Re:Another tool (1)

Celandro (595953) | more than 7 years ago | (#19985037)

The big problem I have with this is that it doesn't work for HTTPS requests, and I wouldn't want it to. Relying on an external website to test your secure site's performance is not a great idea.

wonderful. except that's not why it's slow (2, Insightful)

brunascle (994197) | more than 7 years ago | (#19982451)

That's all well and good, but it's slow because of the server-side scripts, not anything client-side. And no browser plugin will ever know that.

Re:wonderful. except that's not why it's slow (1)

awb131 (159522) | more than 7 years ago | (#19983181)

That's all well and good, but it's slow because of the server-side scripts, not anything client-side. And no browser plugin will ever know that.

Why not? Couldn't a browser-side plugin simply measure the wall-clock seconds it takes for the HTTP request to complete? It could figure out what's being dynamically generated and what's being served statically by comparing all the requests for the same host and comparing the transfer rates.

Re:wonderful. except that's not why it's slow (1)

imaginaryelf (862886) | more than 6 years ago | (#19986659)

How does it easily differentiate between a slow server-side script and a slow network?

Re:wonderful. except that's not why it's slow (1)

edwdig (47888) | more than 6 years ago | (#19986929)

How does it easily differentiate between a slow server-side script and a slow network?

Unless you're dynamically generating ALL of the content, some things will load faster than others. Odds are most of your images are static, so when your 10 KB HTML page takes longer to transfer than your 30 KB images, you can blame the server-side scripting. If you design a site in ColdFusion, it won't send any page data until the script finishes running. In scenarios like that, the delay before receiving data is an indication that something is wrong in the server-side scripting, provided you didn't have similar delays on the images.
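A crude way to make that split from the client side, sketched in Python against a placeholder URL: urlopen returns once the status line and headers have arrived, so the first interval is roughly connection setup plus server-side work, and the second is the transfer itself.

    # Rough sketch: split "server think time" from "transfer time" for one URL.
    # The URL is a placeholder; DNS and TCP setup get lumped into the first interval.
    import time
    import urllib.request

    url = "http://www.example.com/some/page"   # hypothetical page to measure

    start = time.monotonic()
    resp = urllib.request.urlopen(url)         # returns once the headers are in
    first_byte = time.monotonic()
    body = resp.read()                         # now pull down the body
    done = time.monotonic()

    print("headers after %.3fs (mostly server-side work + connection setup)"
          % (first_byte - start))
    print("body (%d bytes) took another %.3fs to transfer"
          % (len(body), done - first_byte))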

Re:wonderful. except that's not why it's slow (1)

mabinogi (74033) | more than 7 years ago | (#19990185)

You haven't even looked, have you?
Dynamic server side performance is very rarely the main cause of speed problems - http latency from too many objects and poor placement of scripts and CSS are usually the problem.

Even if it takes two whole seconds for the server to generate the page, that's still a small fraction of the fifteen seconds it takes to completely download and render some more complicated sites.

first post (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#19982467)

I, for one, welcome our new slow overlords.

Re:first post (1, Funny)

Rod Beauvex (832040) | more than 7 years ago | (#19982619)

Don't you mean Slowverlords? :D

nice plugin (1)

fearanddread (836731) | more than 7 years ago | (#19982479)

Saw this demoed at web2.0. This is a very useful plugin, especially so for developers who may not be familiar with a lot of the reasons sites can load or feel slow.

More to the point (0)

Anonymous Coward | more than 7 years ago | (#19982545)

Why are people developing web apps commercially when they can't even be bothered reading the HTTP RFC's?

Firebug not Firefox (1)

140Mandak262Jamuna (970587) | more than 7 years ago | (#19982499)

The damned article makes a point to say it is an extension to Firebug, not Firefox. What's the difference?

Re:Firebug not Firefox (4, Informative)

JuanCarlosII (1086993) | more than 7 years ago | (#19982517)

YSlow requires Firebug to already be installed in order to run. It is an extension of the capabilities of Firebug and so is an extension of an extension, a meta-extension if you will.

Re:Firebug not Firefox (1)

shystershep (643874) | more than 7 years ago | (#19982543)

Firebug is a plugin for Firefox; Yslow is an extension to Firebug.

Re:Firebug not Firefox (1)

poot_rootbeer (188613) | more than 7 years ago | (#19982645)

The damned article makes a point to say it is an extension to Firebug, not Firefox. What's the difference?

I cannot install YSlow as a browser extension unless I also have the Firebug extension enabled.

And since Firebug for some reason causes my browser to climb to 100% CPU and become unresponsive if I leave it enabled too long, I guess I won't be giving YSlow a try.

Web site optimization for dummies (1, Insightful)

Anonymous Coward | more than 7 years ago | (#19982501)

Nice one Yahoo. Now people can optimize their website without bothering to read up on HTTP and thinking about what they're doing.

Since 9/10 web developers can't even be bothered using a validator, I predict great success for this tool.

Why is this a troll? (5, Insightful)

kat_skan (5219) | more than 7 years ago | (#19985077)

The Anonymous Coward here is spot on. This thing gives awful, awful advice. Some of these in particular I really hated as a dialup user.

CSS Sprites are the preferred method for reducing the number of image requests. Combine all the images in your page into a single image and use the CSS background-image and background-position properties to display the desired image segment.

This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny. Frankly I would prefer to have all the site's little icons progressively appear as they become available rather than have to wait while a single image thirty times the size of any one of them loads. Or, perhaps, fails to load, so that I have to download the whole thing again instead of keeping the parts I already have.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages.

This is hands down the stupidest idea I have ever heard. Ignoring for the moment that it won't even work for the 70% of your visitors using IE, sending the same image multiple times as base64-encoded text will completely swamp any overhead that would have been introduced by the HTTP headers.
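For reference, the inlining in question is just base64 text pasted into the markup; a quick sketch (icon.gif is a stand-in for any small image) of where the extra bytes come from:

    # Sketch of the data: URI inlining under discussion; "icon.gif" is a placeholder.
    import base64

    with open("icon.gif", "rb") as f:
        raw = f.read()

    data_uri = "data:image/gif;base64," + base64.b64encode(raw).decode("ascii")

    # base64 alone costs roughly a third (4 output chars per 3 input bytes), and the
    # image gets re-sent inside every uncached page that embeds it.
    print(len(raw), len(data_uri))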

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all stylesheets into a single stylesheet.

Less egregious than suggesting CSS Sprites, but it still suffers from the same problems. These are not large files, and if they are large files, the headers are not larger.

As described in Tenni Theurer's blog Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.

What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?

Add an Expires Header

...

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.

Expires: Thu, 15 Apr 2010 20:00:00 GMT

...

Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes.

And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.
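Spelled out, what the guideline asks for is roughly the following (a sketch; the one-year lifetime and the style.css name are arbitrary), and the second half is exactly the filename bookkeeping being objected to:

    # What the far-future Expires guideline amounts to, sketched in Python.
    # The one-year lifetime and the "style.css" name are arbitrary examples.
    import hashlib, time
    from email.utils import formatdate

    ONE_YEAR = 365 * 24 * 3600
    expires_header = "Expires: " + formatdate(time.time() + ONE_YEAR, usegmt=True)

    # ...and the filename games that a far-future Expires then forces on you:
    with open("style.css", "rb") as f:
        version = hashlib.md5(f.read()).hexdigest()[:8]
    versioned_name = "style.%s.css" % version    # every change means a new URL

    print(expires_header)
    print(versioned_name)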

Put CSS at the Top

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages load faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Um. Duh? link elements are not valid in the body. style elements are not valid in the body. Who even does this?

Configure ETags

...

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests.

And of course, instead of just downloading the file again and checking to see if changing ETags are actually a problem or just something you should be aware of, let's just unilaterally fail this test if anything uses ETags.
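For what it's worth, the cluster problem the rule is worried about is fixable without abandoning ETags: derive the tag from the content itself (a sketch, not anything YSlow prescribes), and every server in the farm hands out the same value.

    # Sketch: a content-based ETag is identical on every server in a cluster,
    # unlike tags derived from per-machine attributes such as inode numbers.
    import hashlib

    def content_etag(path):
        with open(path, "rb") as f:
            return '"%s"' % hashlib.md5(f.read()).hexdigest()

    # e.g. send: "ETag: " + content_etag("logo.png")   ("logo.png" is a placeholder)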

So yeah, people who don't know what they're doing will run this (not so much because they are the only ones who need profiling tools, but more because this gives them a place to start), and they will make the situation worse trying to pass these frankly asinine tests.

Re:Why is this a troll? (1)

Evets (629327) | more than 6 years ago | (#19986435)

CSS Sprites - agreed. These aren't that useful, but in terms of a simple page and a long/slow connection they can improve performance a bit. I see them in the wild very rarely. "A List Apart" has an implementation article somewhere that's worth a gander if you can find it.

Inline Images - agreed. Dead on. Quite stupid.

Combined Files - I've flip-flopped a great deal about this one myself. While a single file can greatly reduce data transfer overhead by eliminating headers and ensuring packets are at their fullest, it makes for difficult file and version management. It also means that a simple change to what used to be a single small file requires that your entire script be re-downloaded and cached, making for a slower site experience. Also, your end users end up with a lot of fairly large cached files no longer in use if you are actively developing, and that's just not nice. At the very least, this should be discussed rather than advised one way or another.

Expires Header / ETags - contrarily, I think these are by far the most effective site performance measures you can take. They do require forward planning in order to use them effectively, but migrating small static files to a properly tuned HTTP server that pushes out these headers properly really makes a difference. Spreading them over multiple static servers is slightly more effective, but that depends on how heavy the pages are to begin with.

CSS at the Top - Firefox and IE have different rendering experiences, but looking at IE - the page displays as soon as it has "layout" - which is essentially all the information it needs to show it. You can place enough css information inline to yield the fastest rendering speed and use a few tricks to hide the requirement of external CSS files. The best thing to do (IMO) is to play around a little bit using a proxy server that you can pause at each file (like burpsuite) and watch how many file downloads happen prior to page display and examine what the page looks like in each browser after each file is downloaded.

One thing I don't see mentioned is a recommendation against third-party services. AdSense, Analytics, etc. may seem fast most of the time to you, but realize that those connections aren't always as smooth for your end users. My Road Runner connection has had very slow connectivity to the Google servers in particular a good 12 times this year - which means that most sites with AdSense won't display anything until the connection times out. If they are declared towards the end of the page, at least the stuff declared prior to these things can display. Again, something like burpsuite is very helpful in determining perceived performance.

Re:Why is this a troll? (1)

imroy (755) | more than 6 years ago | (#19986535)

This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

And because they are tiny and numerous, the overhead from the HTTP headers is huge. Headers can easily be a few hundred bytes. Looking at the default 'icons' that come with Apache, the majority are little GIF's under 400 bytes. So if you go and download them with individual HTTP requests, you're throwing away 30-50% of your bandwidth just in HTTP overhead. Not to mention the delay as the request is sent and handled by the server, or TCP connection overhead, although hopefully your web/proxy server supports keep-alives and pipelining.
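The back-of-the-envelope arithmetic behind that claim, with the byte counts being the rough figures from this thread rather than measurements:

    # Rough arithmetic behind the header-overhead claim; all byte counts are the
    # ballpark figures mentioned in this thread, not measurements.
    icon_bytes   = 400     # a small Apache-style GIF icon
    request_hdr  = 250     # a fairly minimal request header
    response_hdr = 150     # a fairly minimal response header
    n_icons      = 20

    separate = n_icons * (icon_bytes + request_hdr + response_hdr)
    overhead = n_icons * (request_hdr + response_hdr)
    sprite   = n_icons * icon_bytes + request_hdr + response_hdr   # pay the headers once
                                                                   # (ignoring image-format overhead)
    print("individual files: %d bytes (%.0f%% of it header overhead)"
          % (separate, 100.0 * overhead / separate))
    print("one sprite of the same pixels: about %d bytes" % sprite)

The exact fraction swings a lot with the header sizes you assume (cookies and toolbars push it up fast), but the shape of the trade-off stays the same.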

Your point about having a single 'sprite' image fail and losing lots of page graphics stands. But if you make sure the image is nice and cacheable, it will hang around longer and there will be fewer opportunities to fail.

Using data: URL's for inline graphics does sound stupid. Because of the Base64 encoding you wouldn't want to use it on anything too big. And it couldn't be for anything you use on a lot of pages, because then it would make more sense to put it in a file and allow it to be cached. Just odd.

Using an Expires: header with distant dates also sounds dodgy to me. You'd really only want to do that with static content. And like they noticed, be sure to increment some version/revision number in the filename/URL.

The Etag advice was a little discouraging. The gist was basically: Apache and IIS can produce inconsistent Etags on server farms, invalidating the whole purpose behind Etags. I imagine this is the case because they're operating directly from the filesystem and don't have much information to use. I'm working on a wiki engine and it uses Etags but generates them itself from the revision number of the page/resource requested. Etags are a simple and good mechanism to reduce bandwidth and help caching of content, but they must be generated well.

Re:Why is this a troll? (1)

kat_skan (5219) | more than 6 years ago | (#19987935)

This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

And because they are tiny and numerous, the overhead from the HTTP headers is huge. Headers can easily be a few hundred bytes. Looking at the default 'icons' that come with Apache, the majority are little GIF's under 400 bytes. So if you go and download them with individual HTTP requests, you're throwing away 30-50% of your bandwidth just in HTTP overhead. Not to mention the delay as the request is sent and handled by the server, or TCP connection overhead, although hopefully your web/proxy server supports keep-alives and pipelining.

Yes, it certainly is. Further, the overhead of the image headers is also huge. The icons on Yahoo's page would be 4x the size if they were individual files. There are a lot of completely unused resources in their sprite files, but it's actually smaller on the whole.

But my experience on dialup was that larger images tended to saturate my connection. The icons might load faster, but it would be at the expense of the actual content. Since it doesn't take long to download a few dozen individual 1k files, and since each one you successfully download is cached and subsequently not downloaded at all, I again have to question the value of this optimization.

Your point about having a single 'sprite' image fail and losing lots of page graphics stands. But if you make sure the image is nice and cacheable, it will hang around longer and there will be fewer opportunities to fail.

Well, it's not even the case that you can just make the image easy to cache and be home free. Eventually you're going to want to change some part of the image. Consider the list of links down the left side of the page. The icons for that list are ideal for this technique; about 900B each when split into 22x22x8bpp GIFs rather than 213B as part of that composite image. But the link in that list (labeled "OMG") has a little "New" flag floating beside it. When they added that link, everybody had to download that 15k file again instead of just 900B of the new icon.

Using an Expires: header with distant dates also sounds dodgy to me. You'd really only want to do that with static content. And like they noticed, be sure to increment some version/revision number in the filename/URL.

Honestly, the more I think about this strategy, the less sense it makes. If you have to change the name of the resource to invalidate everyone's eternally-cached copies, that means you have to change everything that uses it as well. Maybe not a big deal if your pages were dynamic and not cachable to begin with, but if they weren't you have to blow away their cached copies as well.

Re:Why is this a troll? (1)

imroy (755) | more than 6 years ago | (#19988371)

Perhaps the problem of lots of little images vs a single 'sprite' is more psychological. Perhaps it just appears fast seeing lots of individual images load.

Well, it's not even the case that you can just make the image easy to cache and be home free. Eventually you're going to want to change some part of the image.

True. You'd really only want to use 'sprites' on site graphics that don't change very often.

Honestly, the more I think about this strategy, the less sense it makes. If you have to change the name of the resource to invalidate everyone's eternally-cached copies, that means you have to change everything that uses it as well.

Yeah, come to think of it, it really sounds more like a dodgy work-around to proper caching. Surely simply using Last-Modified and ETags, handling If-Modified-Since and If-None-Match, and giving out 304 responses is a much more reliable and flexible scheme. Assuming, of course, that browsers and caching proxies also do the proper thing. That's always the weak point, and might be what long-ranged Expires values + versioned filenames are trying to work around.
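A minimal sketch of that conditional-GET scheme, purely illustrative (the resource dict is a made-up stand-in for whatever a real framework hands you):

    # Sketch of conditional-GET handling: send validators out, answer a matching
    # conditional request with an empty 304. "resource" is assumed to be a dict like
    # {"etag": '"abc123"', "mtime": 1185350000, "body": b"..."} -- not any real API.
    from email.utils import formatdate, parsedate_to_datetime

    def respond(resource, request_headers):
        etag, mtime = resource["etag"], resource["mtime"]
        inm = request_headers.get("If-None-Match")
        ims = request_headers.get("If-Modified-Since")

        unchanged = False
        if inm is not None:
            unchanged = inm == etag
        elif ims is not None:
            unchanged = parsedate_to_datetime(ims).timestamp() >= int(mtime)

        common = {"ETag": etag, "Last-Modified": formatdate(mtime, usegmt=True)}
        if unchanged:
            return 304, common, b""                 # nothing to resend
        return 200, common, resource["body"]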

Re:Why is this a troll? (1)

kat_skan (5219) | more than 6 years ago | (#19988475)

Perhaps the problem of lots of little images vs a single 'sprite' is more psychological. Perhaps it just appears fast seeing lots of individual images load.

I would agree with this. As I said, loading a big image tended to make the content itself take longer. If I'm reading while the images load, I'll not notice or honestly even care if the page as a whole is 100% larger. Conversely, if you've done something to cut the load time in half, but I have to wait for the entire thing before I can actually use any of it, you've in a pragmatic way actually made the page slower.

Re:Why is this a troll? (0)

Anonymous Coward | more than 6 years ago | (#19987073)

>Tiny images do not take long to download, even on dialup, because they are tiny.

[You request the header, you get the header,] you request the file and then you get the file. In between you get some extra delay. With 56k the latency isn't exactly awesome. Sure, there are multiple threads, but there are only so many of them, and if the bandwidth is taken up, the requests or responses have to wait.

So, CSS sprites can help here a bit... eventually. E.g., if you have a dozen flag images, it's quite nice to put them into a single 3kb PNG.

>[Inlining images] is hands down the stupidest idea I have ever heard.

This can improve performance drastically, because there is less communication involved and because you can actually reach full transmission speed. With TCP/IP it always starts slowly and the speed gets ramped up step by step. With small files the full speed often isn't reached, because the download is complete way earlier.

That means that downloading a single file (even if it is ~33% bigger) can be a lot faster.

The big downside is that inlined images and sprite sheets are damn annoying to handle. It's only worth the trouble in extreme corner cases.

Re:Why is this a troll? (1)

kat_skan (5219) | more than 6 years ago | (#19987973)

[Inlining images] is hands down the stupidest idea I have ever heard.

This can improve performance drastically, because there is less communication involved and because you can actually reach full transmission speed. With TCP/IP it always starts slowly and the speed gets ramped up step by step. With small files the full speed often isn't reached, because the download is complete way earlier.

That means that downloading a single file (even if it is ~33% bigger) can be a lot faster.

Googling isn't turning anything up (not surprising, since it's pretty impossible to google anything when "http" is one of the keywords), so I'd be very interested if you could provide some numbers that support this. I can imagine small images falling within the slow-start period (heck, some of them would fit wholesale in a single datagram), but normally you wouldn't establish a brand new connection to download the image, you'd just issue a second GET over the connection you already have.

So I would expect slow start to have the same impact either way, but this is honestly an OSI layer or two above where I normally work, and I'd love to see some hard figures.

Both numbers can be true... (1)

nick_davison (217681) | more than 6 years ago | (#19987515)

As described in Tenni Theurer's blog Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.

What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?

Daily Visitors != Page Views

Making up random numbers and fudging to a perfect caching system for convenience:

10 people hit your site on a given day.

3 have never been there before, have an empty cache, say, "Damn, this shit's slow," and leave.
2 have never been there before, have an empty cache but endure, surfing 5 pages each.
The other five are regular users and have files cached. They surf the same 5 pages.

Total: 3x1 + 2x(1+4) + 5x5 = 38 total pages.

(5 out of 10) 50% of daily visitors had an empty cache.
(5 out of 38) 13% of page requests hit with an empty cache.
(33 out of 38) 87% of page requests hit with a primed cache.

So, both quotes are correct: 50% of daily unique visitors came in with an empty cache, 87% of total page requests were made with a primed cache.

Obviously those numbers are pulled out of my anatomical /dev/null and make some major assumptions - but they do help illustrate how Unique Visitors is not the same as Page Views.
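The same toy scenario, parameterized in a few lines of Python for anyone who wants to fiddle with the assumptions:

    # The toy numbers above, parameterized so the assumptions can be tweaked.
    bouncers, new_stayers, regulars = 3, 2, 5
    pages_per_stayer = 5

    total_visitors    = bouncers + new_stayers + regulars            # 10
    empty_cache_views = bouncers * 1 + new_stayers * 1               # first page each
    primed_views      = new_stayers * (pages_per_stayer - 1) + regulars * pages_per_stayer
    total_views       = empty_cache_views + primed_views             # 38

    print("empty-cache visitors:   %d%%" % (100 * (bouncers + new_stayers) / total_visitors))
    print("empty-cache page views: %d%%" % round(100 * empty_cache_views / total_views))
    print("primed-cache page views: %d%%" % round(100 * primed_views / total_views))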

Both numbers are important. By looking at the gulf between them, you can start to build up an impression of what new users vs. what returning users do with your site.

Similarly, both types of users are essential:

Sure, you want to ensure new users love the site and become repeat users. So you want to optimize for them. But you don't want to do this at the cost of returning users or they'll come, love the experience, decide to return, discover it sucks, then leave again.

In the same way, you want to ensure your existing users have a site they love and don't defect. So you want to optimize for them. But you don't want to do this at the cost of the site sucking so much for new users none of them convert in to repeat users in the first place.

It's a classic case of all things in moderation: If you're going to one or other extreme, you're probably hamstringing yourself. If you're picking somewhere in the middle, informing yourself with the statistics, getting the best understanding of where that sweet spot lies, you're probably going to be far more successful.

The people who fail are the ones who say, "Statistics show that 85% of page views are hitting with a primed cache! We should always plan for that." and go off after what's actually only 50% of total users - they'll lose all new traffic and their site will slowly die. So will the people who say, "50% of daily visitors have an empty cache! We should always plan for that." and go after what're only 15% of the total site's experience. The wiser people say, "Hmm, 50% of unique users hit with an empty cache, 85% of page views have a primed cache. That implies we're getting a lot of new users coming in that we should accommodate but that most of our total traffic is to established users. We should find a healthy balance point for both."

Re:Both numbers can be true... (1)

kat_skan (5219) | more than 6 years ago | (#19988349)

So, both quotes are correct: 50% of daily unique visitors came in with an empty cache, 87% of total page requests were made with a primed cache.

Sorry, I didn't mean to suggest that their numbers didn't add up, just that small optimizations that service half your visitors don't make sense when they are something that only has any impact on the first request. The disadvantages of aggregating files together in the manner they are suggesting just outweigh that small benefit.

It's a classic case of all things in moderation: If you're going to one or other extreme, you're probably hamstringing yourself. If you're picking somewhere in the middle, informing yourself with the statistics, getting the best understanding of where that sweet spot lies, you're probably going to be far more successful.

Yes, thank you. I think this sums up my real objection to the tool better than anything. If you did nothing at all, chances are you're already on that middle ground, since your HTTP server would be by default configured for the generally-useful case. If you blindly follow the advice of this tool, you've gone to such extremes that you're actually making things worse.

Between the letter grades, the hard line taken by the tool, and the complete absence of any guidelines on the website to help you decide when its advice should not be followed, I almost think the thing was designed to have that effect. Not in a mustache-twirling villain kind of way, of course, but more in an "I am an expert and this is what I say you should do and this is why I am right and did I mention I have a book?" kind of way.

OT: A right device for everything (1)

mi (197448) | more than 7 years ago | (#19998739)

Obviously those numbers are pulled out of my anatomical /dev/null and make some major assumptions

You can't read (much) from /dev/null, and your numbers don't look like they come from /dev/zero either — those would be rather repetitive.

I think, you meant /dev/random...

Re:Why is this a troll? (0)

Anonymous Coward | more than 6 years ago | (#19988253)

Speaking as the grandparent AC, CSS sprites are actually effective when used for icons and rollover images. A typical minimal HTTP response is ~150 bytes, even for a 304. Plus, when you have several PNG or GIF images, they each have their own file headers. I use pngcrush or optipng to reduce file size and CSS image slicing to reduce the request overhead.

The rest I more or less agree with (esp the etags thing) but there really are no hard and fast rules, just trade-offs. Looking at the problem from the client side completely misses the biggest bottlenecks. The real objective is to have your servers with minimum open connections waiting to serve the next request ASAP. Once a developer is thinking like that, the rest is obvious.

As such, this ass-backwards Yahoo tool is no substitute for working knowledge and may even prevent developers from applying the knowledge they do have.

Re:Why is this a troll? (0)

Anonymous Coward | more than 7 years ago | (#19992155)

The point of CSS sprites and combined files is to reduce the number of requests, which take up a lot of the user's time... even if they are on broadband. Downloading one large image is faster than downloading a whole bunch of small images that are equal in total size to the large image. Combining this technique with some other ones results in a perceptible decrease in load time. Here [die.net] are some numbers if you don't believe me.

Re:Why is this a troll? (1)

kat_skan (5219) | more than 7 years ago | (#19992663)

Well, let me put it to you this way. Let's say your little icons are 400B apiece. And say your headers are another 400B each way, bringing the effective size of the file up to just shy of 1.2KB. It's an ideal application for a CSS sprite, because you'll no question get huge wins in file size. On lousy 28.8 dialup I can still download three of them every second. Alternately, if the page is still loading, the bandwidth can be divided between loading the images and the page without much perceptible impact on either.

Downloading a single 15k file when you hit a site, on the other hand, is painful on dialup. It takes five seconds in the best case, and if you've got anything downloading at the same time (note: Firefox defaults to 8 simultaneous connections to any given server and 24 overall), it's only going to be worse. When the file breaks about 50k, you start running into servers that just drop link midway through because they can't be assed to wait for you to finish downloading the image. So you get to try again, and probably get cut off again. And the gain that is supposed to justify this is a scant few tenths of a second per image. And even that will only actually be realized on the first visit.

I hate to see how popular this technique is getting, because it's flat out a dumb optimization. If people doing this were actually interacting with their page over a dialup connection instead of just looking at a pretty graph, they wouldn't bother.

Re:Why is this a troll? (1)

JazFresh (146585) | more than 7 years ago | (#19993949)

> [CSS Sprites are] only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

The point is to reduce the number of HTTP connections, and thus avoid pointless latency. A TCP connection takes time to set up because there's a back-and-forth, and if the client is far from the server this can introduce a significant delay in loading static resources. Not to mention that the browser may have to reflow the page as the new images come in, which looks ugly. (Albeit that can be mitigated if image width/height are specified in the HTML).

> [Combined CSS/JS files are] less egregious than suggesting CSS Sprites, but it still suffers from the same problems. These are not large files, and if they are large files, the headers are not larger.

Again, the point is to reduce the number of HTTP connections, not to reduce the total header size. We did this on our site and got a pretty significant speedup.

> What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?

For a public site, this can be important. A user coming to your site for the first time may only be there on a whim: if it's taking a while to load, then they may either abandon the load or get a bad first impression that your site runs slowly.

>> Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes.
> And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed.

True, but a good engineer can solve this problem. We solved it by including each resource's last-updated-version (from source control) in the URL, via some clever template and filesystem magick. This means the developer doesn't have to remember to change the file name, which would be a version control nightmare anyway.
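A rough stand-in for that kind of helper (here stamping URLs with a content hash rather than the source-control revision described above, and with a made-up static root):

    # Hypothetical stand-in for the trick described above: stamp each static URL
    # with something that changes whenever the file changes. Here that's a content
    # hash; the approach described above used the last-updated revision instead.
    import hashlib, os

    STATIC_ROOT = "/var/www/static"        # made-up document root

    def versioned_url(rel_path):
        with open(os.path.join(STATIC_ROOT, rel_path), "rb") as f:
            stamp = hashlib.md5(f.read()).hexdigest()[:8]
        return "/static/%s?v=%s" % (rel_path, stamp)

    # Templates call versioned_url("css/site.css") and the file itself can then be
    # served with a far-future Expires header.

Putting the stamp in the path itself rather than the query string works just as well, and sidesteps caches that are reluctant to store URLs containing query strings.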

Re:Why is this a troll? (1)

kat_skan (5219) | more than 7 years ago | (#19997069)

[CSS Sprites are] only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

The point is to reduce the number of HTTP connections, and thus avoid pointless latency. A TCP connection takes time to set up because there's a back-and-forth, and if the client is far from the server this can introduce a significant delay in loading static resources.

It's significant in relative terms, but not so much in actual wall time. We're talking about sub-1k files here. They don't take long to transmit, even if you do it the stupidest way possible (to wit: the way HTTP does it :) ).

A 15k file will always, always, always take longer to retrieve from the server than a 1k file would have. That's 15 times longer that you have to wait until the bandwidth being used to transmit that file becomes available if you need it for something more important, such as the portion of the actual content of the page that hasn't finished loading.

You can make your page load faster overall and still end up making it slower for the purpose the page was originally loaded in the first place. You don't see it so much on broadband, since you have bandwidth to burn on two large files, but on dialup where everything larger than a few KB takes full seconds to load under the best of circumstances, it's rather apparent.

Not to mention that the browser may have to reflow the page as the new images come in, which looks ugly. (Albeit that can be mitigated if image width/height are specified in the HTML).

If you're using CSS sprites you have to specify the width/height, so it's only fair to assume you are willing to do the same with normal images.

What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?

For a public site, this can be important. A user coming to your site for the first time may only be there on a whim: if it's taking a while to load, then they may either abandon the load or get a bad first impression that your site runs slowly.

If you could improve the experience for your visitors with empty caches with no impact on those who return regularly, that'd be a no-brainer. I'm not convinced, though, that that is the overall effect these guidelines, naively-implemented, would have. And they would inevitably be naively-implemented, since the documentation for each test is written as a justification for the test, rather than a discussion of the situations where the technique recommended is and is not appropriate.

Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes.

And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed.

True, but a good engineer can solve this problem. We solved it by including each resource's last-updated-version (from source control) in the URL, via some clever template and filesystem magick. This means the developer doesn't have to remember to change the file name, which would be a version control nightmare anyway.

I'm trying to imagine how this might be implemented. I'm assuming a source control system such as Subversion, where the entire tree has the same revision number, since this would be pretty deep magic on a system like CVS where every file has its own independent version number.

The most direct way I can think of is to just include the revision number as an otherwise unused parameter in the URL of every resource. That would require that your source control system expands keywords at checkout time, rather than at checkin. Subversion specifically doesn't do this [tigris.org] , so if that's what you're using, there would need to be a build process that does some macro substitution to update the revision number everywhere.

That wouldn't be very onerous, but it seems like it would be a lot easier to just set the Expires header for all your resources to something reasonable, such as midnight the following day, and include an ETag. Then every day or so, browsers would verify that the cached version is still good, and only download it if it's actually changed.

Re:Why is this a troll? (1)

TrueKonrads (580974) | more than 7 years ago | (#19994185)

As somebody who has to explain to clients that an odd performance metric from some miracle site is not the Alpha and Omega of judgement, here I go...

And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.

For one, having the Expires header reduces load latency - your JS and CSS files are unlikely to change within the scope of a day or an hour. In theory, the browser does not have to re-validate files that have an Expires: header set. So, when the user clicks from one link to another on your site, he makes fewer HTTP requests. This improves latency, especially as there is the two-connections-per-host rule. As for site upgrades, append the version number to the file name (automated build scripts are there to help) - it is not difficult.
So, developers and admins: please set the Expires header, it is a Good Thing[tm]

And of course, instead of just downloading the file again and checking to see if changing ETags are actually a problem or just something you should be aware of, let's just unilaterally fail this test if anything uses ETags. So yeah, people who don't know what they're doing will run this (not so much because they are the only ones who need profiling tools, but more because this gives them a place to start), and they will make the situation worse trying to pass these frankly asinine tests.

ETags are good and useful. Dynamically generated pages can report the same ETag if the information hasn't changed, so there's no need for the client to reload the entire page. It is just as valid a marker as Last-Modified. So, failing because something uses ETags is plain stupid.

Re:Why is this a troll? (1)

kat_skan (5219) | more than 7 years ago | (#19996989)

And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.

For one, having the Expires header reduces load latency. Your JS and CSS files are unlikely to change within the scope of a day or an hour, and in theory the browser does not have to re-validate files that have an Expires: header set. So, when the user clicks from one link to another on your site, he makes fewer HTTP requests. This improves latency, especially given the two-connections-per-host rule. As for site upgrades, append the version number to the file name (automated build scripts are there to help) - it is not difficult. So, developers and admins: please set the Expires header, it is a Good Thing[tm]

Right, but they are not merely advocating for the Expires header, but for setting a completely absurd Expires header in the far future, and later hacking around the problems that causes by playing games with the resource name. They wouldn't have to do that if they didn't arbitrarily hate ETags.

Why sites are slow (2, Interesting)

Anonymous Coward | more than 7 years ago | (#19982525)

Sites are only as fast as the slowest path through the site.

If your site has 10 different affiliate links/sponsors, all hosted on different providers, your site will be slow.

Similarly, if your site has 100 different java/javascript crapplets/widgets, your site will be even slower.

Here is a simple guide for site creators:

1. Don't overload on ads, I'm not going to view them anyway
2. Put some actual content I'm interested in on your site
3. Don't overload me with java/javascript crap, I don't care what my mouse pointer looks like, just let me click
4. Not everything needs a php/mysql front/back end.

Feel free to use this as a guide, and I might just visit those sites.

Re:Why sites are slow (1)

Klaidas (981300) | more than 7 years ago | (#19982609)

Most of those should be understood by default; they're simply common sense, but nowadays not many developers follow them. I just hate it when, for example, slashdot points me to a website with an article, but before I even see the title, I must scroll down by two screens' worth of space. Sometimes that might be a good excuse not to RTFA (I kid, I kid!)
When building a photo gallery (sig), I thought it would be pretty much a photo in the center, and then two buttons to view the previous/next one. Yet, when I finished, it still got filled with additional stuff (login/registration/menu/footer/etc). Well, at least the photo's in the center :)

Re:Why sites are slow (1)

140Mandak262Jamuna (970587) | more than 7 years ago | (#19982729)

People are moving away from the simple mother's maiden name and last four digits of SSN to biometric authentication. And you publish your cornea for the whole world to see. Your ID will be stolen at a moment's notice, buster.

You have to. (1)

iknownuttin (1099999) | more than 7 years ago | (#19982857)

3. Don't overload me with java/javascript crap, I don't care what my mouse pointer looks like, just let me click
4. Not everything needs a php/mysql front/back end.

You have to build up your resume somehow in order to keep your job or to get a better one. What better way than to develop shit that the project really doesn't need but will sure look great on a resume!

And it's not just techies. Back in the mid nineties, it seemed that every CIO was moving his system from mainframe to distributed architecture. And then in the late 90s, it became moving the company onto the web. One project I was on had three different CIOs by the time it was done because they all went on to better things - thanks to the project building their resumes.

That's the way it is. IT is so "latest and greatest technology" centered that you have no choice. Otherwise, you're out of date and work dries up REAL fast. You always have to keep your skills up to date! Huh my fellow old timers?

Re:Why sites are slow (1)

Sparr0 (451780) | more than 7 years ago | (#19983063)

Uhm, how/why would 10 affiliate links/sponsors slow down your site?

Re:Why sites are slow (0)

Anonymous Coward | more than 7 years ago | (#19983693)

Because each of those links consumes bandwidth: they generally contact different servers and request images based on some sort of code or whatever.

I don't care for affiliate programs, etc. I don't care how much 'per click' I get paid. Most commercial sites should not care. Find a new revenue model, or die.

Re:Why sites are slow (1)

Chris Mattern (191822) | more than 7 years ago | (#19984509)

Uhm, how/why would 10 affiliate links/sponsors slow down your site?


He means having banners or other content that is actually retrieved from the affiliate/sponsor's site, thereby ensuring your page will load at the response rate of the *slowest* of those ten sites.

Chris Mattern

Re:Why sites are slow (1)

Sparr0 (451780) | more than 6 years ago | (#19986023)

Hate to break it to you, but a properly designed web page will not wait for one image (or ten) to load before showing you the content.

Re:Why sites are slow (1)

Foolicious (895952) | more than 7 years ago | (#19984137)

Here is a simple guide for site creators:

1) Throw out the baby with the bathwater and pretend it's still 1996 . . . so that you can increase the number of impossible-to-please-anyways slashdot ACs that visit your site.

Yeah - that sounds like a real good plan.

Re:Why sites are slow (1)

pooh666 (624584) | more than 7 years ago | (#20001393)

I would second this on ads. I see a lot of very big sites that are fine, except for waiting for the banners...

slashdot effect? (-1, Redundant)

sootman (158191) | more than 7 years ago | (#19982591)

Is "currently being slashdotted" one of the diagnoses?

F: You are co-located at 365 Main. (4, Funny)

jea6 (117959) | more than 7 years ago | (#19982709)

F: You are co-located at 365 Main.

hmmm... (4, Insightful)

Tom (822) | more than 7 years ago | (#19982741)

Interesting approach, with lots of flaws.

For example "use CDN" (aka Akamai, etc.) - yeah, right. For Yahoo.com that's an idea. For my private website, that's bullshit. If they really use this internally to rate sites, their rating sucks by definition.

So in summary there are a couple of good points there, and a couple that are not really appropriate. Expires: headers are a nice idea for static webpages. But YSlow still gives me an F for not using one on a PHP page that really does change every time you load it.

Re:hmmm... (1)

Ant P. (974313) | more than 7 years ago | (#19982823)

For most websites it's BS anyway, Coral seems to take 5 minutes to load anything.

Re:hmmm... (1)

Jugalator (259273) | more than 7 years ago | (#19982955)

Well, from my experiences, Akamai is good, Coral is bad.

Re:hmmm... (1)

DavidTC (10147) | more than 7 years ago | (#19983599)

Yeah, many of these are stupid.

Not only do they recommend CDNs, which is absurd for any page that gets fewer than a million hits a day, they also complain about ETags, despite all the stuff I want cached actually having ETags. They whine that 'different servers can produce different etags' or something, like my site is randomly distributed over a dozen servers where images and CSS randomly get sent from different ones. Um, nope, just one server, as you apparently figured out when complaining about not using CDNs.

Of course, who knows how the hell they'd know whether that was true and whether I'd actually set up the ETags correctly to be in sync.

And they whine about images and stuff that have Expires headers that aren't 'in the far future'. WTF? They're not in the far future for a reason. They're in the near future of a day or two, which is more than enough to handle a single visit. If someone comes back, the browser can make a damn conditional GET and see if it's changed, using less than 200 bytes. That seems a really dumb thing to complain about.

Some of these tests are good; it's nice to have a gzip and cacheability report right there, and some other checks are at least moderately useful, like 'Move Scripts to bottom'. (Although if I had a page with so much javascript that it mattered, I'd write a tiny loader function and have it pull in everything after pageload.)
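
Such a loader might be as small as this sketch (TypeScript-ish, browser-side); the script URLs are placeholders, and the only point is that nothing here blocks the initial render:

// Pull in non-critical scripts only after the page itself has loaded.
function loadScript(src: string): void {
  const s = document.createElement("script");
  s.src = src;
  document.body.appendChild(s);
}

window.addEventListener("load", () => {
  // Placeholder URLs -- whatever heavy stuff the page doesn't need up front.
  ["/js/widgets.js", "/js/analytics.js"].forEach(loadScript);
});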

But some of these are just goofy, and they definitely need a 'No, the server is not overloaded by a huge amount of traffic, don't invent problems that would only affect, for example, yahoo.com' checkbox. I think the Web Developer Tools/View Speed Report is more useful on the whole, as that actually hits real causes of slowdown, like the page being too damn big!

Re:hmmm... (1)

nologin (256407) | more than 7 years ago | (#19984185)

Well, from the YSlow web page itself...

YSlow analyzes web pages and tells you why they're slow based on the rules for high performance web sites.

These criteria can be subjective (as to what a high-performance web site is). I would expect Yahoo's tool to grade sites as if they handled the same order of magnitude of hits that Yahoo itself gets; I don't think even slashdot.org would qualify in that category.

Their tips do make sense if you have a site at the "millions of hits" scale, but they are overkill for anything below that mark.

Re:hmmm... (0)

Anonymous Coward | more than 7 years ago | (#19984421)

Obviously a CDN is great for Yahoo and Google and Slashdot but inappropriate for Joe Random's Home Page. If you had RTFM, you would have learned that YSlow enables you to change the weighting or shut off any of these rules.

Also, the far-future expires header is very much for dynamic sites. You put the header on all your images, CSS, and scripts, NOT the PHP page itself. But hey -- if you want to force your visitors to keep trying to fetch all your static crap every time, if you'd rather not just give them a quick 304, you go right ahead then.
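
Concretely, the split being described looks something like this TypeScript sketch (Node built-ins only; the lifetime, paths, and file-type test are illustrative, not a recommendation for any particular stack):

import { createServer } from "http";

const ONE_YEAR = 365 * 24 * 60 * 60; // seconds

createServer((req, res) => {
  const url = req.url ?? "/";
  if (/\.(css|js|png|gif|jpg)$/.test(url)) {
    // Static stuff: far-future caching; a version in the URL busts the cache on deploy.
    res.setHeader("Cache-Control", "public, max-age=" + ONE_YEAR);
    res.setHeader("Expires", new Date(Date.now() + ONE_YEAR * 1000).toUTCString());
    res.end("/* static content would be served here */");
  } else {
    // The dynamic page itself: make the browser revalidate every time.
    res.setHeader("Cache-Control", "no-cache");
    res.end("<html><body>freshly generated page</body></html>");
  }
}).listen(8080);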

Web optimization made clear (1)

athloi (1075845) | more than 7 years ago | (#19982775)

Finally, someone tells us what web developers have known for years [yahoo.com]: optimizing a site is not a matter of splitting your content into as many images as possible over an enterprise app, but of good clean design and code.

For years, as a web designer, every time I got ready to deploy I encountered some nitwit who would say, "You're going to break up that giant image, aren't you? We can put it on nine servers!" -- creating organizational havoc, a completely unmanageable asset mess of a project, and driving everyone nuts. The Souders-Yahoo approach is different. He suggests the obvious, which is to have fewer page elements, inline them in the HTML where possible, and trim that ragged mess of JavaScript and CSS.

Also, as a technical writer, I'm impressed whenever someone gets paid to write down the obvious.

Re:Web optimization made clear (1)

LoadWB (592248) | more than 7 years ago | (#19982887)

It works for Dr. Phil :)

And yes... (1)

thatskinnyguy (1129515) | more than 7 years ago | (#19982991)

it does run on Linux. :-)

Maybe Yahoo should use it themselves... (1)

NewbieProgrammerMan (558327) | more than 7 years ago | (#19983035)

Lets you figure out why your site is slow, eh? Cool! Now if only the web developers at Yahoo could use this wonderful tool to learn how to make their script-laden web pages (yes, Yahoo Mail Beta, I'm looking at you) load on my laptop in under 30 seconds. :)

Slow news day? (0)

Anonymous Coward | more than 7 years ago | (#19983107)

If ever there was a time for this tag, this is it!

website testing (0)

Anonymous Coward | more than 7 years ago | (#19983219)

I've said this for a long time: website developers need to test their sites over a dialup account. In the US, at least 30% of the web-surfing public is still on dialup (mostly because they have no choice in the matter). If you'll optimize and test for different browsers, even those with tiny market share, why not for speed as well? Just because the devs might have a fast net connection, a 20-plus-inch monitor, the latest video card(s), and a fast machine with gigs of RAM doesn't mean your potential customers have all that.

For me, being on dialup, it's maddening sometimes: if I want to view a site that requires a lot of images and scripting turned on, it *does* sometimes take minutes for a single page to load, and that damn Flash is about the worst there is now. At least provide a low-res version of your site, with an obvious, easy-to-see link on the very first page so folks who need it can switch over.

An example of where they "get it" on this idea is nasa.gov, a very nice site that has three versions: Flash, no Flash, or real low-res. The latter is much better for slower connections, slower machines, or for making the site more accessible to those with visual problems, etc. Depending on your needs, you can pick which version you want and still access content reasonably, and still navigate. To me, that is the sign of real professional web developers.

Re:website testing (2, Insightful)

QuickFox (311231) | more than 6 years ago | (#19985291)

and that damn Flash is about the worst there is now.
The Firefox plugin Flashblock [mozilla.org] is quite wonderful. Flash items are replaced with a clickable surface. You get the option to click on the very few Flash items that you do want to view.

To me, that is the sign of real professional web developers.
More like a professional organization. If it were up to us developers, pages would be much better than they are.

Friendlier Reporting (3, Funny)

HitekHobo (1132869) | more than 7 years ago | (#19983503)

I think I'd prefer it to use a bit more realistic reporting. How about:

1) Your web developer is a complete incompetent.
2) Buy more hardware, tightwad.
3) There is no need to add every script plugin you come across.
4) Animated GIFs are annoying as well as slow to load.
5) Yes, it does take time to download and render an entire book in HTML.

Re:Friendlier Reporting (1)

fishdan (569872) | more than 6 years ago | (#19986345)

You forgot:

6) Flash content is often filtered out at the corporate router level.
7) Flash is great for compression of audio/video but terrible for navigation/text.

Just the start of their new plugin scanners (3, Funny)

192939495969798999 (58312) | more than 7 years ago | (#19984991)

YSucks - reveals why your site sucks.
YMe - translates your site into emo-speak.

Re:Just the start of their new plugin scanners (1)

StupiderThanYou (896020) | more than 7 years ago | (#19990275)

YNot - does nothing.

Source code of the YSlow tool (1)

this great guy (922511) | more than 6 years ago | (#19985167)

#!/usr/bin/perl -w
use strict;
print "You website is slow because: your (average) webmaster/sysadmin/architect cannot " .
"tell the difference between www.thedailywtf.com and good code\n";

load order affects perceived slowness (2, Insightful)

kiick (102190) | more than 6 years ago | (#19986179)

In my experience "slow" is a very subjective measure of a web site. It really depends on how quickly the content is displayed, not how quickly the entire page is loaded and rendered.

Let's say you visit, oh, dilbert.com (just to pick on a geeky site) to get your daily dose of Dilbert. If the first thing that is rendered on your screen is the actual comic, you don't really care that it takes another 10-20 seconds to display the buttons, menus, sidebars, topbars, bottombars, animations, ads and ads for ads. It can do that while you chuckle over the comic.

On the other hand, if you have to sit there and drum your fingers while all the other crap loads first before you get to look at today's Dilbert, then you are going to be muttering "why is this site so freaking slow?" And if www.weselladstoadserversbythebillions.com got its DNS server taken out by a freak lightning strike, you could be sitting there a while.

Would it be possible to have a plug-in or extension, so that I could right click on the actual content of a site and say "next time I visit here, load this bit first?" Yes, I could just block everything else on the site, but then they'll change it a week later, and some of the non-content stuff might actually be useful on occasion. I don't want to have to be in an arms race with a million web-monkeys on a thousand different sites just to browse my RDA of surfing.
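
In the meantime, a Greasemonkey-style user script could approximate this by surfacing the bit you care about as soon as it arrives (it can't literally reorder the downloads). A rough TypeScript-ish sketch, with an invented selector you'd adjust per site:

// Poll for the element you actually came for and move it to the top of the
// page the moment it exists, so it is readable while the rest keeps loading.
const WANTED = "#comic"; // hypothetical selector for the content you want

const timer = window.setInterval(() => {
  const el = document.querySelector(WANTED);
  if (el) {
    document.body.insertBefore(el, document.body.firstChild);
    window.clearInterval(timer);
  }
}, 100);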

Re:load order affects perceived slowness (1)

univgeek (442857) | more than 7 years ago | (#20007841)

Brilliant idea for a Firefox extension! Although the interface would be key to its usefulness.

Yslow? Because you didn't pay... (1)

zerofoo (262795) | more than 6 years ago | (#19987731)

...AT&T protection money for your packets.

-ted

Nice utility (1)

QuietLagoon (813062) | more than 6 years ago | (#19988601)

Now, if Yahoo would only use it on their own sites to find out why they are always so darn slow.

YSpy? (1)

BillGatesLoveChild (1046184) | more than 6 years ago | (#19989883)

Would you really trust anything that Yahoo puts out? Yahoo has previously ratted on journalists and bloggers to the Chinese authorities. Worse: they were unapologetic about it, and kept doing it. One Yahoo 'satisfied customer' got ten years in jail for criticizing the government.

So when Yahoo trundles along offering me neat tracking software, umm, no thanks. There's no telling where you might end up reading about it. Now sure, in the U.S. you don't get locked up for criticizing the government, but things do get leaked or given to the wrong people. Anyone who has ever written a comment that was less than P.R.-worthy should consider that. Yahoo has shown itself to be less than trustworthy.

http://www.csmonitor.com/2005/0909/p01s03-woap.html [csmonitor.com]
http://www.rsf.org/article.php3?id_article=14884 [rsf.org]
http://www.nytimes.com/2005/09/12/business/worldbusiness/12search.html?ex=1185508800&en=a0a01819d3ecc0ca&ei=5070 [nytimes.com]

... Re: YSpy? (1)

joe_n_bloe (244407) | more than 7 years ago | (#19990793)

Dude, the guys at Exceptional Performance aren't some kind of secret cabal.

Re:YSpy? (0)

Anonymous Coward | more than 7 years ago | (#19990903)

The tool is open source, so if you install it and poke into Application Data / Mozilla you can check all the secret stuff it's doing.

Re:YSpy? (1)

BillGatesLoveChild (1046184) | more than 7 years ago | (#19991959)

A personal decision, but I'd rather stay away from everything Yahoo touches for the above reasons. They've sold out customers in the past and been unrepentant for it. There might be something you miss, or they might slip in something later. Do you really trust them?

If the Nazi Party brought out Nazi-brand Milk(TM), even if it's perfectly good milk, nahh... Same with Yahoo and privacy. The brand is tainted.