Google To Promote Web Speed On New Dev Site

Soulskill posted more than 5 years ago | from the patience-no-longer-a-virtue dept.

CWmike writes "Google has created a Web site for developers that is focused exclusively on making Web applications, sites and browsers faster. The site will allow developers to submit ideas, suggestions and questions via a discussion forum and by using Google's Moderator tool. Google hopes developers will join it in improving core online technologies such as HTML and TCP/IP. For Google, a prime example of how Web performance can be enhanced is the development of HTML 5, which provides a major improvement in how Web applications process Javascript, Google believes. 'We're hoping the community will spend some time on the basic protocols of the Internet,' Google product manager Richard Rabbat said. 'There's quite a bit of optimization that can be done [in that area].'"

106 comments

Why Do They Ignore Their Own Advice? (5, Interesting)

eldavojohn (898314) | more than 5 years ago | (#28452083)

Most of this is helpful but from the HTML piece:

HTML - as opposed to XHTML, even when delivered with the MIME type text/html - allows authors to omit certain tags. According to the HTML 4 DTD [w3.org] , you can omit the following tags (tags of so-called "void" - empty - elements are marked as strikethrough):

  • </area>
  • </base>
  • <body>
  • </body>
  • (Void Element) </br>
  • </col>
  • </colgroup>
  • </dd>
  • </dt>
  • <head>
  • </head>
  • (Void Element) </hr>
  • <html>
  • </html>
  • (Void Element) </img>
  • (Void Element) </input>
  • </li>
  • (Void Element) </link>
  • (Void Element) </meta>
  • </option>
  • </p>
  • </param>
  • <tbody>
  • </tbody>
  • </td>
  • </tfoot>
  • </th>
  • </thead>
  • </tr>

For example, if you have a list of items marked up as <li>List item</li>, you could instead just write <li>List item. Or instead of a paragraph that you'd usually close with via </p>, you could just use <p>My paragraph. This even works with html, head, and body, which are not required in HTML. (Make sure you feel comfortable with this before making it your standard coding practice.)

Omitting optional tags keeps your HTML formally valid, while decreasing your file size. In a typical document, this can mean 5-20% savings.

Now, my first reaction was simply "that cannot be valid!" But, of course, it is. What I found interesting is that when I looked at the source for that tutorial they themselves are using </li> and </p>. Interesting, huh? You would hope that Google would follow the very advice they are trying to give you.

Some of these suggestions may come at the cost of readability and maintainability. There's something about web pages being nice tidy properly formatted XML documents with proper closing tags that I like.
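
For anyone curious what that looks like in practice, here's a minimal sketch of a complete document that should still validate as HTML 4.01 with the optional tags left out (the content is made up):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<title>Example page</title>
<p>First paragraph, no closing tag needed.
<p>Second paragraph.
<ul>
  <li>List item one
  <li>List item two
</ul>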

Re:Why Do They Ignore Their Own Advice? (4, Interesting)

gbjbaanb (229885) | more than 5 years ago | (#28452177)

The trouble with web pages is that they are source and 'released binary' all in one file, so if you put comments in (as you always should), and use meaningful tag and variable names, then your download gets quite a bit bigger.

What you really need is a system to 'compile' the source pages to something less readable, but significantly smaller - removing comments, replacing the unneeded end tags, shortening the variable names. If that was automated - so your source files were deployed to the server via this translator, then you'd never even know the difference, except your users on low-bandwidth (ie mobile) devices would love you more.

We used a primitive one many years ago, but I don't know if there's any improvements to the state of web-page optimisers today.

Re:Why Do They Ignore Their Own Advice? (2, Insightful)

Anonymous Coward | more than 5 years ago | (#28452429)

Come on now. The price of downloading html and javascript source is peanuts compared to images and flash animations. The solution is better web design, not another layer of complexity in the process. There is no shortage of low-hanging fruit to be picked here. Metric tons, you could say.

Re:Why Do They Ignore Their Own Advice? (5, Informative)

BZ (40346) | more than 5 years ago | (#28452563)

> The price of downloading html and javascript source is peanuts compared to images and
> flash animations

That may or may not be true... Last I checked, a number of popular portal sites (things like cnn.com) included scripts totaling several hundred kilobytes as part of the page. The problem is that unlike images, those scripts prevent the browser from doing certain things while the script is downloading (because you never know when that 200kb script you're waiting on will decide to do a document.write and completely change what you're supposed to do with all the HTML that follows it). So the cost of downloading scripts is _very_ palpable...
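
One common mitigation (a sketch, not something the parent suggested): put big scripts at the end of the body, or mark them deferred when you know they never call document.write, so the markup above them can render first. Support for defer varied by browser at the time, so treat it as a hint rather than a guarantee.

<body>
  ...page content renders before the script is fetched and executed...
  <script type="text/javascript" src="big-library.js"></script> <!-- hypothetical file -->
</body>

<!-- or, for a script that never document.writes: -->
<script type="text/javascript" src="widgets.js" defer="defer"></script>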

Re:Why Do They Ignore Their Own Advice? (1)

phillips321 (955784) | more than 5 years ago | (#28453167)

I agree. I still write all my code in raw HTML, and I used to distribute all of my websites via a home connection with only 90K/s upload. Optimised pages and images make the difference.

Re:Why Do They Ignore Their Own Advice? (0)

Anonymous Coward | more than 5 years ago | (#28453879)

Several hundred megabytes worth of javascript source? On a front page?
Case in point here. The solution is better web design.

Here's a little test anyone can do. Try downloading a web page with javascript disabled, and then again with javascript enabled. If the total time to final rendering is significantly longer, then there is room for improvement on web design.

This is one reason why I browse the web with the noscript firefox extension activated at all times, until I come across a site which (irresponsibly) refuses to function without javascript. This makes the whole browsing experience so much more pleasant, especially on my netbook. Of course, flashblock is just as important.

The vast majority of javascript found on the web is completely unnecessary, even ridiculous. It's like refusing to change a hard drive without a cordless electric screwdriver, while the regular old screwdriver sits within arm's reach.

Re:Why Do They Ignore Their Own Advice? (1)

zoips (576749) | more than 5 years ago | (#28454143)

Javascript is cacheable, so unless you are disabling your cache, the cost is upfront and then not paid again until the cache expiration.

Re:Why Do They Ignore Their Own Advice? (1)

BZ (40346) | more than 5 years ago | (#28454401)

Sure, but a lot of sites set the cache expiration time pretty low so they can roll out updates often. In the best case, that just means a single conditional GET and 304 response, which isn't too bad.

But I think one of the main ideas being discussed here is optimizing the initial pageload of the site, and the cache doesn't help with that. If it's an SSL site and doesn't explicitly allow persistent caching of its data, it doesn't even help across browser restarts.

Re:Why Do They Ignore Their Own Advice? (1)

R.Mo_Robert (737913) | more than 5 years ago | (#28456929)

The problem is that unlike images, those scripts prevent the browser from doing certain things while the script is downloading (because you never know when that 200kb script you're waiting on will decide to do a document.write and completely change what you're supposed to do with all the HTML that follows it). So the cost of downloading scripts is _very_ palpable...

All the more reason to avoid document.write and use JavaScript with the DOM to update the content of your pages instead.
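
A small sketch of the difference (the element id is hypothetical):

// document.write blocks parsing: the browser can't safely continue until it runs
document.write('<p>Late-breaking content</p>');

// DOM approach: the page parses normally and is updated afterwards
var target = document.getElementById('news'); // assumes a <div id="news"> in the page
var para = document.createElement('p');
para.appendChild(document.createTextNode('Late-breaking content'));
target.appendChild(para);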

Re:Why Do They Ignore Their Own Advice? (1)

phantomcircuit (938963) | more than 5 years ago | (#28457963)

Sure but remotely included scripts are cached, so the page loads slower only on the first load.

Is that really such a big deal?

Re:Why Do They Ignore Their Own Advice? (5, Informative)

Serious Callers Only (1022605) | more than 5 years ago | (#28452433)

What you really need is a system to 'compile' the source pages to something less readable, but significantly smaller - removing comments, replacing the unneeded end tags, shortening the variable names. If that was automated...

Something like gzip compression [apache.org] perhaps?

Re:Why Do They Ignore Their Own Advice? (1)

Flammon (4726) | more than 5 years ago | (#28452603)

Mod parent up! Where are my mod points when I need them.

Re:Why Do They Ignore Their Own Advice? (1)

Xiterion (809456) | more than 5 years ago | (#28453283)

Compression is one way to help, but unless the compression algorithm is particularly smart, cutting the size of the input should shrink the output even more. The question that then remains is whether the size savings achieved by both cropping and zipping the files before sending them are worthwhile.

Re:Why Do They Ignore Their Own Advice? (2, Insightful)

Tokerat (150341) | more than 5 years ago | (#28455625)

If you save 320 bytes per file, serving 200 different files 750,000 times per day each (imagine some HTML docs that load a bunch of images, JavaScript, and CSS), that's 1.3TB over the course of 30 days. It adds up fast.

320 was chosen out of the air, as the total length of removed JavaScript comments (320 bytes is the content of 2 full SMS messages), trimmed image pixels, or extraneous tabs in an HTML document. Of course some files will see more page hits than others, some days will see less traffic on the site, and some files/file types are likely to be reduced by different amounts. The question still remains: wouldn't you like to reduce your bandwidth bill and have your users be happier with your site at the same time? With less traffic, maybe you don't need to bother; 500 hits/day paints a very different picture (915MB/month). But upper-mid-sized sites which rely on leased hosting should really be keeping an eye on this, and it certainly would be good netiquette for everyone to ensure optimized traffic.
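
Working those figures out: 320 bytes x 200 files x 750,000 requests/day is 48 GB/day, or about 1.3TB over 30 days (counting 1TB as 2^40 bytes). At 500 requests/day the same sum comes to 32 MB/day, or roughly 915MB a month (counting 1MB as 2^20 bytes), which matches the numbers above.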

Re:Why Do They Ignore Their Own Advice? (3, Insightful)

quanticle (843097) | more than 5 years ago | (#28453683)

The problem with gzip compression (in this case) is that it's not lossy. All of the "unnecessary" things that you have (e.g. the unneeded closing tags on some elements) will still be there when you decompress the transmitted data. I think the grandparent wants a compression algorithm that's "intelligently lossy"; in other words, smart enough to strip off all the unneeded data (comments, extra tags, etc.) and then gzip the result for additional savings.
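
A rough sketch of that idea as a PHP output filter, stripping plain HTML comments before the gzip handler compresses the result (the regex is naive and purely illustrative; it ignores conditional comments, <pre> blocks and inline scripts):

<?php
// outer buffer gzips the final output; inner buffer strips HTML comments first
function strip_html_comments($html) {
    return preg_replace('/<!--.*?-->/s', '', $html);
}
ob_start('ob_gzhandler');
ob_start('strip_html_comments');

echo "<html><body><!-- note to future maintainers --><p>Hello</p></body></html>";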

Re:Why Do They Ignore Their Own Advice? (0)

Anonymous Coward | more than 5 years ago | (#28454057)

The problem with gzip compression (in this case) is that it's not lossy. All of the "unnecessary" things that you have (e.g. the unneeded closing tags on some elements) will still be there when you decompress the transmitted data. I think the grandparent wants a compression algorithm that's "intelligently lossy"; in other words, smart enough to strip off all the unneeded data (comments, extra tags, etc.) and then gzip the result for additional savings.

Sounds like you're talking about server-side HTMLTidy. Jesus, how are you supposed to troubleshoot if your page doesn't publish/render the same way as you develop it? I guess "LAZY" is the answer. If that was a good idea I think the W3C would've mandated it with HTML 3.0.

Re:Why Do They Ignore Their Own Advice? (1)

Tokerat (150341) | more than 5 years ago | (#28455149)

Sounds like you're talking about server-side HTMLTidy. Jesus, how are you supposed to troubleshoot if your page doesn't publish/render the same way as you develop it? I guess "LAZY" is the answer. If that was a good idea I think the W3C would've mandated it with HTML 3.0.

Turn it off in your dev environment until you're ready to debug issues that come up with it (i.e. after you feel everything is ready otherwise). Sure it's an extra cycle of development, but if HTMLTidy (or whatever you use) isn't doing something really weird, everything should work exactly the same as it does without it being turned on.

A 15k savings per page load on a site that gets 15 million hits per day = 429.15GB less traffic per month. How much do you pay per GB of traffic? Would this be worth it? What if you could reduce the load size further? There are definitely major, high-traffic sites out there that could reduce their page load footprint by more than 15k/load, especially if they started using the browser cache properly...

Re:Why Do They Ignore Their Own Advice? (1)

Tokerat (150341) | more than 5 years ago | (#28454995)

Don't forget, those "unneeded" closing tags are needed in HTML 5. The days of newline tag closings are numbered.

Re:Why Do They Ignore Their Own Advice? (1)

R.Mo_Robert (737913) | more than 5 years ago | (#28456871)

New lines have never been a substitute for a closing tag in HTML. Context, such as starting another <p> before formally closing the previous one, has. (Paragraphs, by the specification, cannot contain other block-level elements, including other paragraphs. The specifications allow for the omission of certain elements when other parts of the specification preclude ambiguity.)

Of course, most authors would put a new line in their code at this point for readability, but that's another story.

Re:Why Do They Ignore Their Own Advice? (1)

Serious Callers Only (1022605) | more than 5 years ago | (#28455969)

Closing tags like li are going to compress down nicely with gzip if there are enough to take up lots of space.

I suspect that any kind of HTMLTidy approach on web pages is not going to be very successful at saving space, compared to something like gzip, or even in addition to it. For example leaving out end tags on lists won't save much space at all if those are already stored with a small token after compression, being so common. It's kind of like compressing a file twice - you're not going to see massive gains from doing this and doing gzip too, and it's a hassle and obfuscates the source for your users, which is a shame in a format that is designed to be human readable.

The only thing that could take up loads more space and is not compressible is comments, so if you do leave lots of comments in your HTML, then it might be good to provide a minified version of it. Typically for HTML that's just not a problem (if your markup is so complex that it needs comments, *that's* a problem), and it is more hassle than it's worth (please do some tests and let us know if that's not the case; that's been my experience).

Perhaps in Javascript you'd want extensive comments, but there are various minify tools around for javascript that do this already - here's one [yahoo.com] . Typically libraries keep around a copy with comments, and also provide a minified version for production to cut download times.
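
For reference, the usual invocation of that kind of minifier looks something like this (jar version and file names are made up; it handles both JS and CSS):

java -jar yuicompressor-2.4.2.jar site.js -o site-min.js
java -jar yuicompressor-2.4.2.jar site.css -o site-min.css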

So anyway, gzip nicely solves the "source and 'released binary' " problem that the grandparent brought up, by producing a binary representation of the source files automatically, without you having to think about it or post-process your HTML.

Re:Why Do They Ignore Their Own Advice? (1)

Tokerat (150341) | more than 5 years ago | (#28454975)

While you're absolutely right (there is no excuse not to support gzip compression on your web server these days), a file loaded with comments and unnecessary whitespace is still going to compress down to a larger size than one with all the comments and out-of-tag whitespace removed. There is simply less data to compress in the first place. (Note: things such as long CSS ids don't matter much, because they'll be pattern-matched and take up the same space as a shorter name anyway.)

Re:Why Do They Ignore Their Own Advice? (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28452437)

It appears [blogoscoped.com] that Google is already doing some of that, which is understandable (given how many times a day the main google.com page gets loaded, cutting a few bytes out probably saves more bandwidth than many companies use).

Re:Why Do They Ignore Their Own Advice? (3, Informative)

Anonymous Coward | more than 5 years ago | (#28452449)

This is what's called mod_deflate on Apache 2.

I'm using it on a couple small sites I maintain. The text portions of those sites get compressed to less than 50% of their original size. Obviously it does not compress images, PDFs, etc., but there is no need to compress those, as they are already properly prepared before they are put online.
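
For the curious, a minimal sketch of that setup, based on the stock mod_deflate examples (adjust the MIME types and extensions to taste):

# httpd.conf or .htaccess, with mod_deflate loaded
AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/javascript
# skip content that's already compressed
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|pdf|swf)$ no-gzip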

Re:Why Do They Ignore Their Own Advice? (1)

davecb (6526) | more than 5 years ago | (#28452477)

A lot of the elision allowed was for human beings writing html code by hand. Interpreting it in a DFA without the extra closing tags takes no more cycles at run-time, but more work writing the interpreter.

Logically, the only thing you're gaining by leaving out tags is the time to read them, so this part of the optimization isn't going to give you a lot of performance.

--dave

Re:Why Do They Ignore Their Own Advice? (1)

zhiwenchong (155773) | more than 5 years ago | (#28452491)

Well, all that would be unnecessary if server-side gzip were turned on. I consider that a type of web page optimization, and you don't really have to do anything special with the HTML.

I believe there is a case to be made for compression even for very dynamic websites. It works very well for mobile devices like Blackberries.

Re:Why Do They Ignore Their Own Advice? (1)

SIR_Taco (467460) | more than 5 years ago | (#28452545)

I have a script that pulls the comments out of my html, css, and js files before uploading them to the server, for exactly this reason.

For simple (read: small) pages it's not a huge problem (adds 1k or so), but it can become a problem for larger pages. The repositories for the files contain all the comments needed to develop and maintain the code, but the pages that are actually viewed by the end-user don't. As much as the inquisitive end-user may like to have commented html/css/js to look at, it's much more practical to squeeze that little bit extra out of the files to make them load just that fraction of a second faster.

As far as shortening variable/function names... yes, I could see it dropping the size of the file, but it wouldn't change the size that much relative to pulling the comments out. Again, it can be done with a regex script that would run through your files and, say, replace drawImage() with X() and loadContent() with Y(); I'm sure you could shave a few k off the file size (and further obfuscate your code to prying eyes). It's not something that I think will make that big of a difference.

I certainly don't agree with omitting closing tags. If anything, the fact that HTML has been so lax (or at least the HTML renderers/browsers have been) is one of the reasons why HTML has become so sloppy. It should be treated like a scripting and/or programming language: strict.

Re:Why Do They Ignore Their Own Advice? (1)

Paul Carver (4555) | more than 5 years ago | (#28452641)

You hardly need comments if you write clean HTML. Most of the complicated stuff that makes the web slow is the super convoluted javascript and flash garbage that is mostly intended to hamper users' access to content. The sort of people/companies that produce these sites aren't really concerned about their visitors' convenience. They're interested in controlling and monitoring their visitors. I'm having trouble believing these people care much about how slow and miserable their sites are.

If you're one of these people I don't much care about your inconvenience in managing comments that help you manage your own convoluted nightmare.

Re:Why Do They Ignore Their Own Advice? (1)

mr sharpoblunto (1079851) | more than 5 years ago | (#28458313)

What you really need is a system to 'compile' the source pages to something less readable, but significantly smaller - removing comments, replacing the unneeded end tags, shortening the variable names. If that was automated - so your source files were deployed to the server via this translator, then you'd never even know the difference, except your users on low-bandwidth (ie mobile) devices would love you more.

We used a primitive one many years ago, but I don't know if there's any improvements to the state of web-page optimisers today.

The Aptimize website accelerator (www.aptimize.com [aptimize.com] ) does exactly this, except it's implemented as an output filter on your webserver, so content is dynamically optimized as it is sent to the user, obviating the need for a separate optimizing deployment process. It does things like combining css and javascript references on a page, inlining css background images, combining images into css mosaics to reduce request counts, minifying css and javascript files, and adding proper cache headers to all page resources to reduce redundant requests on warm loads of the page. Typically this can reduce page load times by 50% or more, especially over high latency connections.

Re:Why Do They Ignore Their Own Advice? (1)

asylumx (881307) | more than 5 years ago | (#28452843)

Most sites have gzip set up on their outbound transfers. Seems like gzip would eliminate a lot of these duplicate tags -- unless they are suggesting that gzip itself is slowing the entire process down?

Re:Why Do They Ignore Their Own Advice? (1)

Hurricane78 (562437) | more than 5 years ago | (#28454685)

Well, it still is a verbosity joke.

I internally use a format derived from EBML, Matroska's generic binary markup format.
I simply added a mapping header that maps tag names and parameter names to the tag ids.
That way I can easily convert and edit the files with any text editor, and transform from XML and back without any hassle at all.
It's just like how ASCII is a mapping of numbers to characters, just one level higher.

It's nearly too simple and obvious. So I think it should become a new standard. :)

Re:Why Do They Ignore Their Own Advice? (1)

aamcf (651492) | more than 5 years ago | (#28457995)

Now, my first reaction was simply "that cannot be valid!" But, of course, it is.

You can do Interesting Things with HTML and tag and attribute minimization. This is a valid web page:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"><title//<p//

I wrote about this [aamcf.co.uk] a few years ago.

Re:Why Do They Ignore Their Own Advice? (1)

hesaigo999ca (786966) | more than 5 years ago | (#28465205)

Yes, but not closing your tags is not XHTML compliant, and Google has an image to keep up!
They show off what they know, but they want to remain politically correct.

What's that sound? (2, Interesting)

conner_bw (120497) | more than 5 years ago | (#28452093)

/**
* What's the sound of 1 million simultaneous PHP commits to SourceForge?
* @see: http://code.google.com/speed/articles/optimizing-php.html
*/

echo 'substituting' , 'concatenation' , 'with' , 'commas';

Re:What's that sound? (1)

TheThiefMaster (992038) | more than 5 years ago | (#28452363)

Gah!

Say it with me: "Use prepared SQL queries not concatenation!"

Their video is dynamically building the SQL statement, which is full of injection possibilities.

Re:What's that sound? (0)

Anonymous Coward | more than 5 years ago | (#28452445)

The echo case is when you're doing something like

echo "This code is ${howslow} slow";

Do

echo 'This code is ', $howslow, ' slow';

instead, using ' instead of " to reduce the processing needed to parse the strings, and commas to echo out all of the pieces without the extra processing of concatenating them into a single string object that you don't want anyway.

Re:What's that sound? (2, Informative)

conner_bw (120497) | more than 5 years ago | (#28452547)

Their video is dynamically building the SQL statement, which is full of injection possibilities.

The first part of the statement is true, but the second (in regards to the video) is false. The variables used in the video are local and manually typed. They don't come from anywhere except the programmer's own mind. There is no injection possibility.

Prepared statements are slower than regular queries. In MySQL, a prepared statement makes two round-trips to the server, which can slow down simple queries that are only executed a single time.

So using prepared SQL everywhere is kind of a blanket statement, especially when it comes to speed. With that said, prepared statements can lead to a speed increase if you need to run the same query many times over and over. It also adds a level of security if you aren't pre-sanitizing your variables. But if the developer is in control of the query, and not repeating it, prepared statements are kind of wasteful.
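
To make the trade-off concrete, a sketch with PHP/PDO (table, column and variable names are invented): a one-off, developer-controlled query can go straight through, while a statement that runs in a loop is worth preparing once.

<?php
$db = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

// one-off query, no user input: a plain query avoids the extra round-trip
$count = $db->query('SELECT COUNT(*) FROM articles')->fetchColumn();

// repeated query with outside data: prepare once, execute many times
$stmt = $db->prepare('INSERT INTO tags (article_id, tag) VALUES (?, ?)');
foreach ($tags as $tag) { // $tags and $articleId assumed to exist
    $stmt->execute(array($articleId, $tag));
}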

Re:What's that sound? (1)

TheThiefMaster (992038) | more than 5 years ago | (#28452717)

An ideal solution would be some way to store the prepared version of a query on the server.

Re:What's that sound? (1)

Shados (741919) | more than 5 years ago | (#28453133)

I don't know about MySQL, but prepared statements in most major RDBMSs will allow the database to cache the query plan as well as being more easily optimized. So they actually are much -faster- if you need to execute the query over and over (especially if you can reuse the same query object). Many database APIs will also let you use statement objects that have the same capabilities as prepared statements in terms of query plan caching and safety, but do not do the first roundtrip to optimize the query on the server, so if you are just going to execute the query once, you can still use a parameterized query without the double roundtrip.

ORMs (object relational mappers) will use these methods internally and are fairly pervasive in high-performance applications.

And it's not wasteful even if you control the SQL, again, because of the query plan caching abilities that it provides.

Re:What's that sound? (1)

conner_bw (120497) | more than 5 years ago | (#28453889)

Well, wasteful might have been a strong word, sure.

For the record I use prepared statements more than less (via PDO).

I was just trying to point out that there are no injection issues with the code in the example, and that using prepared statements everywhere is a blanket statement; there are trade-offs that need to be considered.

Re:What's that sound? (1)

barzok (26681) | more than 5 years ago | (#28453419)

Prepared statements are slower than regular queries. In MySQL, a prepared statement makes two round-trips to the server, which can slow down simple queries that are only executed a single time.

So using prepared SQL everywhere is kind of a blanket statement, especially when it comes to speed. With that said, prepared statements can lead to a speed increase if you need to run the same query many times over and over. It also adds a level of security if you aren't pre-sanitizing your variables. But if the developer is in control of the query, and not repeating it, prepared statements are kind of wasteful.

Most web sites/apps are running the same queries many times over. And eliminating the need for the developer to pre-sanitize input both simplifies code and helps protect against bugs & missed cases.

Or you could go all the way and use stored procedures and/or views for your most common queries.

Re:What's that sound? (1)

conner_bw (120497) | more than 5 years ago | (#28453655)

To clarify my original post, I see the value in prepared statements and I use them pretty much all the time.

I was just replying to that specific post, and explaining why this specific example (a relatively simple script) might not have used them; but still wasn't susceptible to SQL injections.

In fact, on second look, the code snippet is using the mysql extension (instead of mysqli or PDO) which doesn't even support prepared statements. So the discussion is kind of futile.

When it comes to speed, there are trade-offs inherent in the tools being used. Prepared statements in PHP/MySQL are one of them.

Re:What's that sound? (2, Interesting)

Dragonslicer (991472) | more than 5 years ago | (#28453677)

That article says "It's better to use concatenation than double-quoted strings." Every legitimate benchmark I've seen has shown that the difference is zero to negligible. In tests that I've run myself, concatenation actually scales worse; a dozen concatenation operations are slower than one double-quoted string.

As for using commas with echo, why aren't you using a template engine?

Re:What's that sound? (2, Insightful)

quanticle (843097) | more than 5 years ago | (#28453839)

From TFA:

Sometimes PHP novices attempt to make their code "cleaner" by copying predefined variables to variables with shorter names. What this actually results in is doubled memory consumption, and therefore, slow scripts.

It seems to me that this is a flaw in the PHP interpreter, not the PHP programmer. The way I see it, the interpreter should be lazily copying data in this case. In other words, the "copy" should be a pointer to the original variable until the script calls for the copy to be changed. At that point the variable should be copied and changed. I believe this is how Python handles assignments, and I'm surprised that PHP does not do it the same way.
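
It's easy to check for yourself; a quick test sketch (exact numbers will vary by PHP build):

<?php
$original = str_repeat('x', 512 * 1024); // ~512KB string
echo memory_get_usage(), "\n";

$copy = $original; // copy-on-write: no real duplication yet
echo memory_get_usage(), "\n"; // should be roughly unchanged

$copy .= '!'; // writing forces the actual copy
echo memory_get_usage(), "\n"; // jumps by roughly 512KB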

Re:What's that sound? (1, Informative)

Anonymous Coward | more than 5 years ago | (#28455837)

It does.

From Chapter 2.3.4, Garbage Collection, in O'Reilly's Programming PHP:

PHP uses reference counting and copy-on-write to manage memory. Copy-on-write ensures that memory isn't wasted when you copy values between variables, and reference counting ensures that memory is returned to the operating system when it is no longer needed.

To understand memory management in PHP, you must first understand the idea of a symbol table . There are two parts to a variable--its name (e.g., $name), and its value (e.g., "Fred"). A symbol table is an array that maps variable names to the positions of their values in memory.

When you copy a value from one variable to another, PHP doesn't get more memory for a copy of the value. Instead, it updates the symbol table to say "both of these variables are names for the same chunk of memory." So the following code doesn't actually create a new array:

$worker = array("Fred", 35, "Wilma");
$other = $worker; // array isn't copied

[... snip ...]

http://docstore.mik.ua/orelly/webprog/php/ch02_03.htm [docstore.mik.ua]

C'mon slashdot, get working (2, Funny)

Anonymous Coward | more than 5 years ago | (#28452145)

We've got to slashdot their site for ultimate irony! :)

Re:C'mon slashdot, get working (1)

Goaway (82658) | more than 5 years ago | (#28452663)

You're about half a decade too late, sorry. Slashdot hasn't been able to slashdot anything beyond the puniest tin-can-and-string servers for a long time.

Re:C'mon slashdot, get working (0)

Anonymous Coward | more than 5 years ago | (#28454453)

What would be the irony? That a site with pages slow as molasses can take out a site about optimized HTML? Well, it can't. Get working indeed, on Slashcode.

Re:C'mon slashdot, get working (1)

chabotc (22496) | more than 5 years ago | (#28454841)

You want to slashdot a google.com site?

I think your sense of scale might be a bit off here, but good luck with that anyhow :)

Why the change? (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#28452157)

When did the White House press corps switch from priding themselves on their freedom and ability to hammer the president with tough, often inconvenient, and equally often inane questions, to racing each other to see who can verbally fellate the president the best? Oh, that's right. When the messiah was chosen.

http://www.washingtonpost.com/wp-dyn/content/article/2009/06/23/AR2009062303262.html [washingtonpost.com]

Re:Why the change? (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28452263)

Giving a president the title of Messiah (or false Messiah) can only happen in the self-centered lunacy known as America.

Pro-tip: There are 195 countries in the world of which the USA is 1. He's a president, just like every other, get over yourself.

Start by eliminating the zero bits (2, Funny)

Anonymous Coward | more than 5 years ago | (#28452211)

The skinnier ones compress much easier.

Re:Start by eliminating the zero bits (1)

suggsjc (726146) | more than 5 years ago | (#28453473)

I would think that you have it just backwards. I would think that skinny ones would have less to compress. Also, wouldn't the ones be skinny and the zeros be fat?

Re:Start by eliminating the zero bits (3, Funny)

Tokerat (150341) | more than 5 years ago | (#28456563)

Yes, the 1s are skinny and the 0s are fat. You see, there is more space to compress between a line of evenly spaced 1s than between a line of evenly spaced 0s. If you compress with too much force, the 0s get "squished" and turn into 1s, and this can screw up the formatting and cause segfaults and kernel panics, even in the newer Linux builds. There isn't much that can be done about this, even with today's protected memory designs, so we're limited to removing the space in between. It might help you to think of this technique as the giant laser thingie in "Honey, I Shrunk The Kids!" which reduced the space between atoms of objects in order to shrink them.

ROR compression (a variation of the .rar format) uses this particular method, replacing the 0s with a counter of how many 0s in a row were replaced, and then compressing the 1s together. This is called "Packed Binary Coding".

Similar methods were developed by American researchers (Dynamic Unicode Hypertensioning), but instead of simply compressing the 1s, they are converted into a pipe character ("|") so that the tick mark adorning the shaft of the 1 doesn't prevent further compression (or cause errors resulting from "tilt" when the ones are pressed together too forcefully).

These are second-year Comp Sci concepts. What is /. coming to when we're not even keeping up with the basics? It's a sad day for geeks everywhere.

Re:Start by eliminating the zero bits (1)

Zaiff Urgulbunger (591514) | more than 5 years ago | (#28459899)

You can fit two 1's in the space of a zero. Plus, you can line them up vertically for even better compression!

All this is pretty obvious though... but experts in the field such as myself know that for the best compression, you need to use a sans-serif font. All the serifs on the ends otherwise take up extra space, so you can fit less in a packet.

The other curious thing about this is that by using 1's instead of 0's, you get better compression by using more bits. But if you find you actually need to use 0's for a legitimate purpose (I use them as eyes in my more elaborate ascii-art, for example), then the best thing to do is XOR your data before transmission to flip the bits. This brings the additional benefit of securely encrypting all of your data too.

Revolutionary idea (5, Funny)

fph il quozientatore (971015) | more than 5 years ago | (#28452223)

I have this great and innovative idea. Take your browser-based e-mail client and word processor, rewrite them in native machine code and run them alongside the browser, as a separate app, instead of inside it. For even more speedup, the data could be stored on the hard drive instead of downloaded from a remote web-site. Never seen before!

Re:Revolutionary idea (0)

Anonymous Coward | more than 5 years ago | (#28453205)

Race you to the patent office?

Re:Revolutionary idea (1)

jo42 (227475) | more than 5 years ago | (#28453785)

You mean what Microsoft has been doing since 1990-something?

Re:Revolutionary idea (2, Insightful)

JohnnyBGod (1088549) | more than 5 years ago | (#28454669)

WHOOSH

Re:Revolutionary idea (0)

Anonymous Coward | more than 5 years ago | (#28453951)

And how are you supposed to sell advertising space if your application isn't always connected to the Internet?

Re:Revolutionary idea (1)

harmonise (1484057) | more than 5 years ago | (#28455929)

No one said anything about it not being connected to the internet.

It's a plague. (1)

BlueKitties (1541613) | more than 5 years ago | (#28452265)

I remember the golden days, when I had limited online time allowed because AOL was metering my parents' dial-up connection; webpages actually loaded on dial-up. Yes, you heard me, dial-up could load any web page on the Internet. After broadband came along, certain web pages slowly started taking longer and longer to load; today, dial-up just doesn't cut it. I suppose we have the same problem with processing resources.

Re:It's a plague. (2, Insightful)

gmuslera (3436) | more than 5 years ago | (#28452483)

I remember when the recommendation was that your webpage in total (counting all the resources it includes: code, graphics, etc.) shouldn't weigh more than 50k. What is the average total page size today? 500k? 1MB? And that means loading a lot of resources spread between the main page, style sheets, javascripts, and graphics both small and big (and it only gets worse with flash apps/movies).

Technology is advancing (I think I read somewhere that JS processing is 100x faster in modern browsers) and there are a lot of developer tools that give advice on how to improve the responsiveness of your site (yes, most of them linked from that google site), so maybe the good part of the web could improve its speed in the near future.

Google--look to your own failings first (0)

Anonymous Coward | more than 5 years ago | (#28452277)

While the ideas are good, I'd be more impressed if they fixed their own timewasters.

For example, the search function for google groups (That's the history of the internet, since 1983, from long before the WWW). It's been broken for almost 2 years.

My immediate thought ... (0)

Anonymous Coward | more than 5 years ago | (#28452301)

making Web applications, sites and browsers faster.

Anybody up for sending the Slashdot developers there?

Those Google engineers sure are a sexy bunch! (1, Funny)

Anonymous Coward | more than 5 years ago | (#28452315)

Why oh why was this in video format?

good idea (2, Funny)

burris (122191) | more than 5 years ago | (#28452331)

As any open source developer knows, what's needed is more ideas, suggestions, and questions. Later, once the discussion group has come to consensus, we'll write some code.

WebSpeed? (1)

Itninja (937614) | more than 5 years ago | (#28452377)

You mean like Progress? Since OE10 Webspeed errors have dropped off considerably... http://web.progress.com/openedge/webspeed-worshop.html [progress.com]

Re:WebSpeed? (1)

HAKdragon (193605) | more than 5 years ago | (#28456701)

Considering we use Progress, I was thinking the same thing...

Just write a native client-side app (1, Interesting)

ickleberry (864871) | more than 5 years ago | (#28452501)

HTML/HTTP were never designed as a method for running remote applications and shouldn't be used as such. We spent all these years upgrading to the latest Core 2 Trio so we could make the internet connection the new bottleneck.

Yes, I realise that for n00bs it's all about the convenience of web apps, but client-side apps need not be inconvenient. Look at the iPhone app store: n00bs love it and it's full of client-server applications. If there was something like it for Windows and OS X we'd never need to work with a horrible "web application" ever again. Linux doesn't need any; package managers could do with a bit more eye-candy and buttons with round edges for n00bs, but for the rest it's fine.

I'm all for optimising web pages but one should focus on minimalism, only use AJAX in cases where it actually saves bandwidth rather than using it for useless playthings. Use a CSS compressor, gzip compression, strip out useless eye-candy and effects, use as little javascript as you can get away with.

Modern web design thrives on feature-creep and making one's own site look better (and more bloated) than the competitor's. The web devs have a skewed perception of how long it takes to load because most of them are using decent machines and accessing the server through 192.168.1.x

Re:Just write a native client-side app (0)

Anonymous Coward | more than 5 years ago | (#28453389)

Viva la lynx. I notice you are wearing an onion on your belt, as was the fashion at the time that HTTP was static text linking to other static text.

Re:Just write a native client-side app (1)

99BottlesOfBeerInMyF (813746) | more than 5 years ago | (#28453439)

HTML/HTTP were never designed as a method for running remote applications and shouldn't be used as such.

Developers use the best tool for the job and (sadly) Web apps are more functional and useful to people than native clients in many instances.

Yes I realise that for n00bs its all about the convenience of web apps but client-side apps need not be inconvenient. Look at the iPhone app store, n00bs love it and its full of client-server applications.

This is part of an interesting shift in computer technology. Mainly it is the shift to portable computing owned by the user. This contrasts with work-provided computers controlled by the employer, and with public terminals. When you want to check your personal e-mail at work, using your work-provided desktop, a Web application is really convenient. When you want to check your personal e-mail at work and you own an iPhone, the game changes. When you want to check your personal e-mail at home, using the same thing you do at work is convenient.

Further, Web applications are cross platform. They work on all the different versions of Windows and OS X and Linux and anything else and you don't have to pay for and separately install the program on each device. You don't have to learn separate interfaces on each device. You don't have to worry about synching data. The truth is, Microsoft has a lot of power and they've spent the last decade trying to prevent easy cross-platform computing and serving as a road block to anything that might make the Web a more important chunk of computing than their OS. With the OS market so broken the Web is an attempt by the free market to route around the damage.

As I see it the fight between Web applications and native applications depends upon how the market/ecosystem evolves. As alternative devices and OS's like the iPhone, blackberry, Linux netbooks, OS X computers, etc. become more popular we'll see a shift back towards native applications. However, at the same time if Web technologies move forward in actual implementation and IE loses market share (which will accompany a shift towards the aforementioned devices) the Web will become a better medium for delivering useful applications and it will become an easier target for developers. We could see an alternative cross platform development strategy become dominant, such as Java or other VMs, but it is doubtful since MS will do everything they can to block such a technology and they still have a lot of power. More likely we'll see hybrid applications/services like the ones Google promotes. Send e-mail or chat via standard services through their Web interface when convenient or use a native client when you have access to a device you control.

Re:Just write a native client-side app (1)

vertinox (846076) | more than 5 years ago | (#28453493)

HTML/HTTP were never designed as a method for running remote applications and shouldn't be used as such. We spent all these years upgrading to the latest Core 2 Trio so we could make the internet connection the new bottleneck.

Well yeah. It was designed to serve content, but to downplay server-side content is to discount the whole reason PHP, CGI, and ASP were made.

There is a dramatic need for web hosts and web developers to control the platform on which your application will run. Your only alternative is to create an app which may or may not run on your user's hardware and OS platform.

Sure, a lot of people have fast CPUs, but you have no guarantee that they all do, and on top of that there are driver issues and almost endless OS problems that come with creating and maintaining source for multiple platforms.

Logistically it would be easier for the developer to run all the code server side and send only pertinent information to the user which usually reduces the problem with the bandwidth.

From a support standpoint, thin clients are easier to support since if you need to do troubleshooting you don't have to mess with the client computer that much. (Go Terminal Server/Citrix!)

They should start with their ads (1)

JorgeFierro (1304567) | more than 5 years ago | (#28452555)

There was an article here on /. some time ago that claimed that Google ads/analytics were slowing down the web. In my personal experience, this has normally been true: most of the time when a major webpage is taking a while to load, I see 'Waiting for [insert something google]...'.

Javascript Sucks (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28452633)

Javascript and therefore AJAX sucks! Why would anyone with half a brain want to use a platform/browser specific language? Huge if/then trees to check which browser then you have to rewrite your code every time a new browser version comes out. Get a clue! Server side rules!

Stop using off-site crap (1)

rho (6063) | more than 5 years ago | (#28452803)

Like Google Analytics, or Google Ads. When Google went pear-shaped some time back it made a significant portion of the Web unusable. If your own server is down, no big deal. If other sites depend on your server, then it's a problem.

While I'm slagging off Google, why don't they stop Doing Cool New Stuff and improve their fucking search engine instead?

Yahoo has a good page, too (2, Informative)

JBL2 (994604) | more than 5 years ago | (#28452901)

Yahoo! has a handy page (http://developer.yahoo.com/performance/ [yahoo.com] ) with lots of good info. It includes YSlow (a Firefox add-on), a set of "Best Practices," and some good research. Also references a couple of O'Reilly books (which, to be fair, I haven't read).

More specifically, CSS sprites (see http://www.alistapart.com/articles/sprites/ [alistapart.com] ) and consolidating Javascript can pay off (by reducing HTTP requests), along with a few other things that may surprise or inform.

Re:Yahoo has a good page, too (2, Interesting)

POWRSURG (755318) | more than 5 years ago | (#28453277)

I am honestly torn on the idea of CSS sprites. While yes, they do decrease the number of HTTP requests, they increase the complexity of maintaining the site. Recently, Vladimir Vukićević pointed out how a CSS sprite could use up to 75MB of RAM to display [vlad1.com] . One could argue that a 1299x15,000 PNG is quite a pain, but in my experience sprites end up being pretty damned wide (or long) if you have images that will need to be repeated or are using a faux-columns technique.

Sometimes it's a better idea to make a few extra initial requests, then configure your server to send out those images with a far-future expires header (which you should do for the sprite anyway). At that point you're just talking about the initial page request, and subsequent visits get the smaller size. With one site I am working on, the page weighs 265 KB on the initial view and 4.75 KB for the next month.
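
For readers weighing the same choice, the sprite side of it is just CSS background-position (the file name and offsets here are made up):

/* one HTTP request covers all the small icons */
.icon { background: url(sprites.png) no-repeat; width: 16px; height: 16px; }
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -16px 0; }
.icon-mail   { background-position: -32px 0; }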

I don't see this mentioned anywhere, but Google has already switched to the HTML5 doctype. It is much shorter than the other flavors.

external resources in HTML pages (3, Insightful)

reed (19777) | more than 5 years ago | (#28453147)

The number one slowdown I see on pages is linking to all kinds of external resources: images, flash movies, iframes, CSS, bits of javascript. Each of these requires at least another DNS lookup and a new HTTP connection, and often those external servers take a really long time to respond (because they're busy doing the same for all those other websites using them). Why is this going on in each user's browser? It should all be done behind the scenes on the web server. Why would you put the basic user experience of your users or customers in the hands of random partners who are also doing the same for competing sites? It takes some load off your server, but I think the real reason that people just link in external resources as images, objects, etc. is that it's easier than implementing it in the back end. If you really want to offload work, then design a mechanism that addresses that need specifically.

We've ended up with a broken idea of what a web server is. Because it was the easiest way to get started, we now seem to be stuck with the basic idea that a web server is something that maps request URLs directly to files on the server's hard disk that are either returned as is or executed as scripts. This needs to change (and it is a little bit, as those "CGI scripts" have now evolved into scripts which are using real web app frameworks.)

Re:external resources in HTML pages (1)

Tokerat (150341) | more than 5 years ago | (#28456167)

  1. Set up account with 3rd-Party advertiser and create web-site backend that loads the ads into my own servers for display to the user.
  2. Write script and throw it in the cron to increment the ads displayed counter once per second, while generating convincing referrer-logs.
  3. Profit! Move back to Nigeria.

Re:external resources in HTML pages (0)

Anonymous Coward | more than 5 years ago | (#28458053)

One advantage of using a common external script is that if the user has already visited another site using it, then it's already in their browser's cache.

Opera Unite - resourcefetcher.js (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28453615)

I was having a look over Opera Unite services when looking to write one of my own, and I noticed this handy little function.

It fetches all the external page objects after the initial page has loaded.
Sadly, the example (homepage) failed in the sense that the basic CSS was not always the first thing to be loaded, which resulted in buckled pages on tests for slow upload speeds. (and some things weren't obviously clickable objects before images were loaded in)

So, in this way, an initial page could be loaded that is, at a minimum, functional, and then all the fancy-schmancy stuff could be loaded in if they have JavaScript enabled.
I would love to see more people take advantage of that, since a good deal of the time websites are sitting there loading loads of crap that ends up going unnoticed anyway.
Always always always load the most important stuff first. But sadly, "most important" has gone from page content to stupid sponsor stuff, crappy flash ads, useless headers that take up an eighth of the screen, shiny flowing menus, etc. (some of these being the reason I want Flash to die a painful death, since it is one of the major causes of slowdown, DEATH TO PLUGINS!)

off-topic
Also, I am loving the way Unite services are created so far.
JavaScript, HTML, CSS and XML, none of that PHP, or Python or anything else, just all native browser technology.

Some very slow sites: Slashdot and Facebook (2, Interesting)

Animats (122034) | more than 5 years ago | (#28453763)

More and more, sites are generating the message "A script on this page is running too slowly" from Firefox. Not because the site is hung; just because it's insanely slow. Slashdot is one of the worst offenders. The problem seems to be in ad code; Slashdot has some convoluted Javascript for loading Google text ads. Anyway, hitting "cancel" when Slashdot generates that message doesn't hurt anything that matters.

Facebook is even worse. Facebook's "send message" message composition box is so slow that CPU usage goes to 100% when typing in a message. Open a CPU monitor window and try it. I've been trying to figure out what's going on, but the Javascript loads more Javascript which loads more Javascript, and I don't want to spend the debugger time to figure it out.

Re:Some very slow sites: Slashdot and Facebook (1)

Tokerat (150341) | more than 5 years ago | (#28456203)

Facebook needs to step back and optimize, optimize, optimize. They're well ahead of MySpace, and with the reputation MySpace is getting, Facebook would do well to keep things clean and fast; there isn't really a danger of competitor innovation destroying them (in the short term).

Re:Some very slow sites: Slashdot and Facebook (3, Interesting)

WebmasterNeal (1163683) | more than 5 years ago | (#28462407)

Look at these lovely stats from my Facebook profile:

Documents (3 files) 7 KB (592 KB uncompressed)
Images (111 files) 215 KB
Objects (1 file) 701 bytes
Scripts (27 files) 321 KB (1102 KB uncompressed)
Style Sheets (12 files) 69 KB (303 KB uncompressed)
Total 613 KB (2213 KB uncompressed)

So is this new Google initiative... (1)

pongo000 (97357) | more than 5 years ago | (#28454031)

...available to Google developers? Because some of the slowest applications on the planet are Google apps: The gmail and adwords applications come immediately to mind.

I think it's somewhat disingenuous to imply that slow web interfaces are someone else's problem when in fact Google is probably one of the worst perpetrators when it comes to slow interfaces.

Don't discard any information! (0)

Anonymous Coward | more than 5 years ago | (#28454375)

I'm very wary of anything lossy, or compiled, or anything that "strips out useless parts" of the HTML. HTML, javascript, etc. are great because they are open and largely human-readable; anyone who downloads the code can also analyze it. This makes it safer, more controllable, and more understandable, at least for the end user. If anything is to be done about the size of the download it should be some sort of lossless compression algorithm optimized for HTML/javascript/etc. If the process is not fully reversible on the user's end, I think it will ultimately be harmful to the internet.

Just think how hard it would be to block ads if each page was a compiled program instead of human-readable code.

If only JavaScript history was different. (0)

Anonymous Coward | more than 5 years ago | (#28454591)

If only it wasn't being blocked by so many people because of abuse by other people (abuse enabled by how terribly most browsers with JavaScript support were designed), the world wide web would have been so much better than the crap we see today.

But no, we have websites designed around spamming new windows and alert boxes on all of them.
THANKS, WEB BROWSER VENDORS, WE COULDN'T HAVE DONE IT WITHOUT YA'.
All of them are to blame, every damn one of them.
And I blame Mozilla more for the fact that THEY never did a damn thing to change the rules.
Fuck W3C, why the fuck does anyone even listen to them anymore? They ruined the web time and time again, and STILL ARE. They have shown many times that they are incapable of deciding what is good for the web.

Google, while they are doing things, are only doing it because they want browsers to catch up with things like the terribly optimized iGoogle and other services.
Seriously, fix the damn iGoogle page, it doesn't need to do half the shit it does. Start using the "dynamic" shit you are trying to push, would ya'? (also, the fact that someone above mentioned them going against their own guidelines with respect to certain elements.)

Just think, we could be compressing pages in JavaScript, delivering them, decompressing them and bham, saved a ton of bandwidth.
We could have had JavaScript written pages actually showing up in View Source pages, instead of the horribly coded examples we currently have that REfetch pages. (most cases)

Of course, the worst offender is always going to be Microsoft. They are the ones that led the others down the road of buckled support for years, eventually killing some of them off because losing control of the web could* have eaten away at the desktop market.
*It now is, and has been for a few years now with more and more software being sold online instead of shops, or being offered for free, or being entirely hosted through a webpage.

Yslow vs. Speed (2, Informative)

kbahey (102895) | more than 5 years ago | (#28454763)

For those who are into web site performance, like me, the standard tool for everyone was Yslow [yahoo.com] , which is a Firefox extension that measured front end (browser) page loading speed, assigned a score to your site/page and then gave a set of recommendations on improving the user experience.

Now Google has a similar Firefox extension, Page Speed [google.com].

However, when I tried it with 5+ windows and 100+ tabs open, Firefox kept eating memory until the laptop swapped and swapped; I had to kill Firefox, go into its configuration files by hand, and disable Page Speed. I run YSlow on the same configuration with no ill effects.

PHP advice legitimacy (1)

Benbrizzi (1295505) | more than 5 years ago | (#28455095)

I'm no PHP guru, but reading some of their advice on PHP made me flinch.

Don't copy variables for no reason.

Sometimes PHP novices attempt to make their code "cleaner" by copying predefined variables to variables with shorter names. What this actually results in is doubled memory consumption, and therefore, slow scripts. In the following example, imagine if a malicious user had inserted 512KB worth of characters into a textarea field. This would result in 1MB of memory being used!

BAD:
$description = $_POST['description'];
echo $description;

GOOD:
echo $_POST['description'];

Now I would never question the almighty Google, but Rasmus Lerdorf taught me that PHP uses copy-on-write. Quoting from his O'Reilly Programming PHP book:

When you copy a value from one variable to another, PHP doesn't get more memory for a copy of the value. Instead, it updates the symbol table to say "both of these variables are names for the same chunk of memory."

So who's right? I tend to believe Mr. Lerdorf, since he pretty much invented PHP, but like I said before I'm not an expert and my book is pretty old (PHP 4.1.0), so maybe that has changed since (although I doubt it)...
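A quick way to check is to watch memory_get_usage() yourself. A minimal sketch (the simulated 512KB POST value is mine; exact numbers vary by PHP version):

<?php
// Simulate a request carrying 512KB in the 'description' field.
$_POST['description'] = str_repeat('x', 512 * 1024);

$before = memory_get_usage();
$description = $_POST['description'];       // copy-on-write: both names point
                                            // at the same value, no real copy yet
$after_copy = memory_get_usage();

$description .= '!';                        // writing forces the actual copy
$after_write = memory_get_usage();

printf("assignment: +%d bytes\n", $after_copy - $before);       // roughly zero
printf("first write: +%d bytes\n", $after_write - $after_copy); // roughly 512KB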

Re:PHP advice legitimacy (1)

JobyOne (1578377) | more than 5 years ago | (#28460661)

I believe the correct answer is, as usual, "it depends."

In that particular case, it just might. If the code later modified $description, PHP would have to make a real copy of the value at that point, and then it would be using twice the memory.

PHP does free values automatically through reference counting, but it doesn't look ahead: the value in $_POST['description'] stays referenced by the superglobal for the whole request, so it won't be dropped early unless you unset() it (FYI: I am in no way advocating the "fugheddaboudit" approach to memory usage).

Personally, I would approach that particular problem with validation. Also, if your input has been properly validated and you know it isn't big enough to cause memory problems, it's often just plain convenient to copy variables around for a number of reasons, like building an array to pass into a function or something.


Double-buffering (0)

Anonymous Coward | more than 5 years ago | (#28456407)

HyperCard used to have this handy feature: prior to making any big changes, you could call 'lock screen', then mess with the display to your heart's content, then 'unlock screen' (optionally with pretty transition effects). Maybe what we need is something similar. Let's face it, most large web sites are fairly unusable while they're loading, as they get randomly reformatted when various resources arrive and start messing with the page.

Simple double-buffering primitives would allow smooth loading, and probably speed things up a lot as the browser could suppress unnecessary redraws.

Headers being sent. (0)

Anonymous Coward | more than 5 years ago | (#28456537)

When you do a POST or GET, you send the request as-is, uncompressed. The server normally replies with the data compressed. We should have a method for sending compressed request headers; that alone would save tons of bandwidth, even more so with AJAX-style requests. Think about it: you do something tiny to change a flag or a few characters of text, and massive headers get sent along with it. More often than not, your upstream bandwidth is going to be way lower than your downstream. Also, when serving static content it is best to put it on another domain name or virtual host of sorts: if you serve everything from the same domain, you will be sending massive cookie headers even when fetching the images for the page.
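One cheap way to dodge the cookie overhead on static assets is to serve them from a host the application never sets cookies on. A rough PHP sketch (static.example.com is a made-up placeholder, not a real requirement):

<?php
// Point images/CSS/JS at a cookieless host so the browser doesn't attach the
// main site's (potentially huge) cookies to every static request.
define('STATIC_HOST', 'http://static.example.com');

function static_url($path) {
    return STATIC_HOST . '/' . ltrim($path, '/');
}

// Usage in a template:
echo '<img src="' . static_url('images/logo.png') . '" alt="logo">';
echo '<link rel="stylesheet" href="' . static_url('css/site.css') . '">';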

Re:Headers being sent. (1)

DiLLeMaN (324946) | more than 5 years ago | (#28460461)

Unless I'm wrong -- and I could be -- compression is usually less effective on small payloads, in some cases even making the payload bigger. POSTs might be big, but GETs usually aren't. Compressing that won't help you a lot.
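Easy enough to check. A rough PHP sketch (byte counts will vary with the zlib version, but the direction won't):

<?php
// gzip adds a fixed header/trailer plus stream overhead, so tiny payloads
// can come out larger than they went in; big repetitive ones shrink a lot.
$small = 'q=flag&value=1';                       // a typical tiny request body
$large = str_repeat('<li>List item</li>', 500);  // ~9KB of repetitive HTML

printf("small: %d -> %d bytes\n", strlen($small), strlen(gzencode($small)));
printf("large: %d -> %d bytes\n", strlen($large), strlen(gzencode($large)));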

new (old) file formats still needed (1)

Tumbleweed (3706) | more than 5 years ago | (#28458513)

Whatever happened to JPEG2000? (Patent problems?)

SVG should've been here long ago. IE will continue to slow SVG adoption in the real world.

If we could get JPEG2000 (or something like it) and SVG in 95+% of browsers, I think we'd be golden. That, plus getting rid of IE6 with its broken box model (among many other problems), would go a long way toward modernizing the Web. Take HTML5 and add the top 10 or so features of CSS3, and it's party time for web devs once again. MS needs to shit or get off the pot with IE.

Be nice to dig in. (1)

Gagek (1230792) | more than 5 years ago | (#28459767)

I'm excited about this. I've been working on a site, Impostor Magazine, using Flex and various other tools... it'll be nice to have a place to play and test.

Why do XML closing tags contain the tag name? (2, Interesting)

Zaiff Urgulbunger (591514) | more than 5 years ago | (#28460167)

One thing I've never really understood is why closing tags in XML have to repeat the tag name. Surely the angle brackets with a slash inside would be enough, since (assuming the markup is valid) it is obvious to the parser which tag is being closed, e.g.:
<html>
  <head>
    <title>Example</>
  </>
  <body>
    <h1>Example</>
    <p><em>This <strong>is</> an</> example.</>
  </>
</>

I know this makes it harder for a human to see which tags open and close what, but if XML parsers (including those in browsers) were able to accept markup with either short close tags or the normal named close tags, then we could 1. benefit where the markup is machine-generated and 2. easily pre-process manually created markup... it's easy enough to convert back and forth (see the sketch below).

But maybe there's a good reason for not doing this that I'm missing... but it's always bothered me! :D
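For what it's worth, the back-conversion really is mechanical. A rough PHP sketch of a hypothetical pre-processor (it ignores comments, CDATA and named close tags, and assumes every close tag is the anonymous </> form):

<?php
// Expand anonymous close tags ("</>") into named ones by tracking open
// elements on a stack. Self-closing tags like <br/> are left alone.
function expand_short_close_tags($xml) {
    $stack = array();
    return preg_replace_callback(
        '{</>|<([A-Za-z][\w:-]*)[^>]*?(/?)>}',
        function ($m) use (&$stack) {
            if ($m[0] === '</>') {                 // anonymous close tag
                return '</' . array_pop($stack) . '>';
            }
            if ($m[2] !== '/') {                   // opening tag, not self-closing
                $stack[] = $m[1];
            }
            return $m[0];                          // leave the tag as written
        },
        $xml
    );
}

echo expand_short_close_tags('<p><em>This <strong>is</> an</> example.</>');
// prints: <p><em>This <strong>is</strong> an</em> example.</p>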

Always in favor of optimization (1)

JobyOne (1578377) | more than 5 years ago | (#28460551)

I'm glad to see that at least web development is still concerned with optimization. The glut of RAM and processing speed has made desktop developers lazy and sloppy, and it has become the norm for software to be bloated and inefficient.

<sarcasm>Why bother finding a more efficient way to do [whatever] when you're talking microseconds at the user's end?</sarcasm>

I'm actually sort of surprised a glut of bandwidth and server power hasn't led to a similar "kitchen sink" approach to web technology.

Then again, I suppose it has. Just look at any given Web 2.0 Ajax monster... and on the web we're often talking WHOLE SECONDS lost to poor optimization and badly thought-out apps.