
AJAX Applications vs Server Load?

Cliff posted more than 8 years ago | from the functionality-and-used-clock-cycles-directly-proportional dept.

Programming 95

Squink asks: "I've got the fun job of having to recode a medium-sized (500-1000 users) community site from the ground up. For this project, gratuitous use of XMLHttpRequest appears to be in order. However, with all of the hyperbole surrounding AJAX, I've not been able to find any useful information regarding server load [Apache + MySQL] when using some of the more useful AJAX applications, such as autocomplete. Is this really a non-issue, or are people neglecting to discuss this for fear of popping the Web 2.0 bubble?"

95 comments

Cache, cache and CACHE (5, Informative)

Anonymous Coward | more than 8 years ago | (#14189705)

After doing quite a bit of AJAX-type work for my employer, that's the best advice I can give you. The most common things will be queried the most often, so caching is the key. If you're using PHP and MySQL, use something like eAccelerator for PHP (less important) and MySQL's query cache (most important!), properly tuned. And remember, not every AJAX request has to query a database.

Re:Cache, cache and CACHE (3, Informative)

captainclever (568610) | more than 8 years ago | (#14189767)

And Memcached [danga.com] !

Astounding Conclusions (1)

Saeed al-Sahaf (665390) | more than 8 years ago | (#14189833)

The most common things will be queried the most often

I'm sorry, what?

Re:Astounding Conclusions (0)

Anonymous Coward | more than 8 years ago | (#14189866)

It's how caching works.

Re:Astounding Conclusions (1, Funny)

Anonymous Coward | more than 8 years ago | (#14189937)

it's how "common" works.

Re:Astounding Conclusions (1)

moro_666 (414422) | more than 8 years ago | (#14192926)


Isn't it just so common that the most common things are the most commonly used ones...


Anyway, I suggest you forget about PHP in the first place. AJAX will do much better if you have persistence on the server side. So if you're old-school, go Java; if you're new and ambitious, try Python or the .NET thingy (I'm just emotionally not able to use the latter, but I would recommend the first two).

Simplest example? Two users type in "E" two seconds apart. Java and Python applications can cache the result in their own variable for that short amount of time, whereas PHP will have to fetch it from somewhere (in some cases you can use some horrible shmem solution, but its usage is neither fast nor comfortable). Java and Python can share data across sessions and across threads, something that's just impossible in PHP.

a) Try to get a language that has support for persistent objects and that is fast enough on the server side.
b) Choose a database that is quick at selects and optimize its configuration.
c) Write code.
c.1) Optimize your code.
d) See what went wrong, write code again.
e) Optimize again && profit!

bad advice: try BENCHMARK BENCHMARK BENCHMARK (2, Informative)

Anonymous Coward | more than 8 years ago | (#14190089)

Uhm, you should never give blanket advice like that. This is the simplest brute-force way to optimize an app:

STEP 1: develop a set of benchmarks.

STEP 2: adjust something

STEP 3: see if it improves your benchmarks. If not, roll it back. REPEAT STEP 2

How can you possibly improve your app if you can't even tell when you've improved it? PHP accelerators may or may not help (actually I would recommend AVOIDING PHP because of the difficulty in dealing with persistent compiled PHP code). MySQL query cache may or may not help (in some of my apps, the query cache *lowered* performance).

You can improve on this basic formula by the way. For instance, you can use benchmarks to *identify* which parts of the app are the slowest so you can focus your energy just on the slowest stuff. But the basic premise is the same: benchmarks are the most important thing you can do. Devote 1/3 of your time and budget, at least, to developing the benchmarks. Throw in some automated testing too, while you're there.

Re:bad advice: try BENCHMARK BENCHMARK BENCHMARK (2, Informative)

James_Aguilar (890772) | more than 8 years ago | (#14191858)

Better by far than benchmarks would be code profiling, which can tell you exactly where your code is spending most of its time, rather than just suggesting the general area as benchmarks do.

not bad advice (1)

willCode4Beer.com (783783) | more than 8 years ago | (#14192972)

The poster stated his claim based on experience.
This experience holds valid for most web applications (AJAX or not), as anyone who has worked on any large web application can attest. Creative use of caching has shown time and again to be the most effective way to reduce server load. (For some reason, spitting out a byte array is faster than calling a database and building a document with the results.)

I'm curious how "you can use benchmarks to *identify* which parts of the app are the slowest". This could be done by *profiling*, not benchmarking. As for benchmarks, they are good at measuring the effect of a change; they don't tell you where to make it.
I think spending 1/3 of your dev time on benchmarks is more than a bit of overkill. These should take very little time and should be based on expected use cases. Developing load tests for very large applications rarely takes more than a week or two.

As for optimization, the general rule is:
run a standard use case or load (JMeter?)
profile
optimize the code with the heaviest use (identified by the profiler)
repeat

Common sense is useful too (1)

smagruder (207953) | more than 8 years ago | (#14194952)

For example, if you're pulling site news or software changelog entries from a table, and those entries don't get updated very often (say, less than twice a day), then it certainly makes sense to cache that data. Even if the news/changelog that appears to the user is slightly out of date, the cache would update itself within a short period and any new info would then appear. Nobody loses anything just because new news doesn't immediately show up on a site, but your server saves a lot.

Re:not bad advice (1)

vidarh (309115) | more than 8 years ago | (#14209900)

Profiling is just a form of benchmarking specifically intended to identify parts of code. When the code path taken is well understood and various parts can be exercised by changing parameters passed in, normal benchmarking techniques can easily be used to gather the same data as "proper" profiling.

As always, it depends (4, Interesting)

/ASCII (86998) | more than 8 years ago | (#14189764)

I've been toying around a bit with AJAX, and it really depends on what you are doing. Autocomplete should ideally be implemented using an indexed table of common words, or something like that, since if it does anything complex it will be dog slow because of the large number of transactions. Also, client-side caching is good to make sure the amount of network traffic doesn't get out of hand. You can do some cool things with very little JavaScript, like my English to Elvish interactive translator [no-ip.org].
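A hedged sketch of the client-side caching idea: keep a JavaScript object keyed by the query string so the same prefix is never fetched twice. The /autocomplete URL and its one-suggestion-per-line response are assumptions for illustration, not anything from the translator above:

    // Sketch: remember autocomplete responses in the browser, keyed by the query string.
    var completionCache = {};

    function getCompletions(query, callback) {
        if (completionCache[query]) {               // this prefix was already fetched
            callback(completionCache[query]);
            return;
        }
        var xhr = new XMLHttpRequest();             // older IE needs new ActiveXObject("Microsoft.XMLHTTP")
        xhr.open("GET", "/autocomplete?q=" + encodeURIComponent(query), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var suggestions = xhr.responseText.split("\n");  // assumed: one suggestion per line
                completionCache[query] = suggestions;            // cache for the next keystroke
                callback(suggestions);
            }
        };
        xhr.send(null);
    }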

Other AJAX concepts actually make things faster. I've been implementing a forum that never reloads. When you write an entry and press the submit button, an XMLHttpRequest is sent containing the new post and the id of the last received post. The reply contains all new posts, which are then appended to the innerHTML of the content div tag. Less CPU time is spent regenerating virtually identical pages over and over, and less data is sent over the network.
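A rough sketch of that never-reloads pattern, assuming a hypothetical /forum/post endpoint that accepts the new post plus the id of the last post the client has seen and returns the missing posts as ready-made HTML:

    // Sketch of a "never reloads" forum: post, then append whatever is new.
    var lastPostId = 0;  // updated as posts arrive

    function submitPost(text) {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/forum/post", true);
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                // Server returns all posts newer than lastPostId, already rendered as HTML,
                // with the newest id in a response header (an assumption for this sketch).
                document.getElementById("content").innerHTML += xhr.responseText;
                lastPostId = parseInt(xhr.getResponseHeader("X-Last-Post-Id"), 10) || lastPostId;
            }
        };
        xhr.send("body=" + encodeURIComponent(text) + "&after=" + lastPostId);
    }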

Re:As always, it depends (2, Funny)

e7 (117450) | more than 8 years ago | (#14193872)

I've been implementing a forum that never reloads.

I have one of those. I also have a server admin that never responds. Not good.

Re:As always, it depends (1)

gru3hunt3r (782984) | more than 8 years ago | (#14194325)

Yes, I'm doing the same thing right now [pre-computing results].
I built a dictionary of commonly searched-for terms; those are the only ones which appear in the autocomplete. Cache the list, and change the AJAX call so it references a static list (basically a two-layer alphanumeric hashing structure), so it pulls the file statically.
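One plausible reading of that two-layer structure (the exact file layout here is my assumption, not the poster's): bucket the cached terms by their first two characters and serve each bucket as a flat file, so autocomplete never touches a script or a database:

    // Sketch: precomputed autocomplete served from static files,
    // e.g. /ac/s/l.txt would hold every cached term starting with "sl".
    function fetchStaticCompletions(query, callback) {
        query = query.toLowerCase();
        if (query.length < 2) return;                         // not enough typed to pick a bucket
        var path = "/ac/" + query.charAt(0) + "/" + query.charAt(1) + ".txt";
        var xhr = new XMLHttpRequest();
        xhr.open("GET", path, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4) return;
            if (xhr.status !== 200) { callback([]); return; } // no bucket for this prefix
            var terms = xhr.responseText.split("\n");
            var matches = [];
            for (var i = 0; i < terms.length; i++) {
                if (terms[i].indexOf(query) === 0) matches.push(terms[i]);  // narrow to the full prefix client-side
            }
            callback(matches);
        };
        xhr.send(null);
    }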

The only problem is that all the lame-o AJAX frameworks cropping up don't offer that type of flexibility without being considered bloatware. I did finally manage to implement it using Perl + HTML::Prototype.

Bottom line: we serve out around 15 requests per second across 25 servers, so anything which can be precomputed must be; it saves a lot of clock cycles and provides faster results.

AJAX is also highly sensitive to latency, so the time involved in hitting the database and doing any type of select will make the application feel slower. Every millisecond counts!

Appropriate AJAX? (1)

WebCowboy (196209) | more than 8 years ago | (#14205488)

I've been implementing a forum that never reloads.

Quite interesting, but I'd have my reservations with an all-AJAX forum. IMHO forums shouldn't break the REST behaviour of the traditional web model in very many places. I want the navigation buttons to work, and I want to be able to bookmark the URL and feel confident that when I visit that URL it will contain the posting that was there before. Yes, there are ways around this in AJAX but to fix what it inherently breaks takes more effort than I think should be required (I think those are the most important things to address--if you insist on AJAX make sure it takes care of the browser history/navigation and bookmark support).

As for minimising traffic, I've found I get the biggest "return on investment" (time and money-wise) by properly using CSS. Being an established standard, CSS suffers from fewer cross-browser issues, it degrades more gracefully and doesn't break important web behaviours. The CPU time to generate a well-formed HTML document that uses CSS for all its layout is not worth worrying about, and network traffic is reduced significantly because the CSS is cached, so the number of TCP/IP packets used to send the bare HTML document is not that much different from all the XmlHTTPRequest chatter.

That being said, your project looks very intriguing for other reasons. If your forum is more of a chatroom than a slash-style website then it could be very cool indeed--like MSN or Yahoo chatrooms with threads all updating in realtime. Perhaps you have features like that in mind. However it ends up, best of luck to you!

Depends (5, Insightful)

Bogtha (906264) | more than 8 years ago | (#14189800)

There isn't any useful information out there because it all depends on what you are doing.

Take a typical web application for filling in forms. One part of the form requires you to pick from a list of things, but the list depends on something you entered elsewhere in the form. In this instance, you might put the choice on a subsequent page. That's one extra page load, and needless complication for the end user. Or, you can save the form state, let them pick from the list, and bring them back to the form. That's two extra page views and saving state. Or, you can use AJAX, and populate the list dynamically once the necessary information has been filled in. That's no extra page views, but a (usually smaller) JSON or XML load.
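A hedged sketch of that third option, the AJAX-populated list: when the controlling field changes, fetch only the dependent options and rebuild the select in place. The /options URL and the one-option-per-line response are assumptions for illustration:

    // Sketch: repopulate a dependent <select> without a page load.
    function loadDependentOptions(controllingValue) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/options?parent=" + encodeURIComponent(controllingValue), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4 || xhr.status !== 200) return;
            var options = xhr.responseText.split("\n");     // assumed: one option per line
            var select = document.getElementById("dependent");
            select.options.length = 0;                      // clear the old list
            for (var i = 0; i < options.length; i++) {
                select.options[select.options.length] = new Option(options[i], options[i]);
            }
        };
        xhr.send(null);
    }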

In this instance, using AJAX will usually reduce server load. On the other hand, something like Google Suggest will probably increase server load. Without knowing your application and its common use patterns, it's impossible to say. Even the exact same feature in two different applications can vary: autocomplete can reduce server load when it reduces the overall number of searches, but that's dependent upon the types of searches people are doing, how often they make mistakes, how usual it is for people to search for the same thing, and so on.

Ajax is the death of the internet (-1)

Anonymous Coward | more than 8 years ago | (#14189804)

When AJAX has finally killed the internet, XForms will rise from the ashes of this pathetic excuse for an internet technology and smite the heathens that worshipped at AJAX's altar.

The revolution will be swift. Slightly bloated, but swift.

Delay the autocomplete (4, Informative)

Albanach (527650) | more than 8 years ago | (#14189848)

Another suggestion is only to auto-complete after 0.5 seconds with no typing. That way, rather than auto-completing "s sl sla slas slash slashd slashdo", the user who knew exactly what they wanted doesn't load down your server with spurious requests.
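That delay is only a few lines of JavaScript: reset a timer on every keystroke and fire the lookup only when the timer survives 500 ms of silence. requestCompletions is a stand-in for whatever actually does the XMLHttpRequest:

    // Sketch: only query the server after 500 ms of keyboard silence.
    var completeTimer = null;

    function onSearchKeyUp(field) {
        if (completeTimer) clearTimeout(completeTimer);   // user is still typing, start over
        completeTimer = setTimeout(function () {
            requestCompletions(field.value);              // assumed to exist elsewhere and do the actual XHR
        }, 500);
    }

    // Wired up as e.g. <input id="search" onkeyup="onSearchKeyUp(this)">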

Re:Delay the autocomplete (1)

acroyear (5882) | more than 8 years ago | (#14195772)

A similar approach is built into AjaxTags already: a minimum threshold of keystrokes before it attempts to update the dropdown items.

Really... (2, Interesting)

wetfeetl33t (935949) | more than 8 years ago | (#14190079)

For this project, gratuitous use of XMLHttpRequest appears to be in order.

All this hyperbole surrounding AJAX is just that: hyperbole. I dunno exactly what your requirements are, but the first thing you can do to ensure that AJAX XML requests don't bog down your system is to decide whether you really need all that fancy AJAX stuff. It's neat stuff, but the majority of web apps can still be done the conventional way, without wasting time on AJAX code.

Re:Really... (1)

Eustace Tilley (23991) | more than 8 years ago | (#14190977)

All this hyperbole surrounding AJAX is just that - hyperbole.


It is futile to claim that there is no important difference between Google Maps and the mapping web applications that preceded it, because the difference is screamingly obvious.

Re:Really... (2, Insightful)

houseofzeus (836938) | more than 8 years ago | (#14191525)

For every website/application you can name that was made fantastic by the use of AJAX, it is possible to list at least ten that didn't need it and only have it to try to cash in on the latest fad.

The GP's point was a valid one: it is important that people sit down and work out whether AJAX will actually benefit their application.

Despite all the crap being spouted about AJAX it is NOT some magic wand that works for every given situation.

Re:Really... (0, Troll)

Eustace Tilley (23991) | more than 8 years ago | (#14191659)

For every website/application you can name that was made fantastic by the use of AJAX it is possible to list at least ten that didn't need it and only have it to try cash in on the latest fad.
Prove you are not engaging in your own hyperbole. Name ten web applications that are using AJAX that don't need to.
The GP's point was a valid one, it is important that people sit down and work out whether AJAX will actually benefit their application.
I love the smell of stagnation in the morning!
Despite all the crap being spouted about AJAX it is NOT some magic wand that works for every given situation.
Please provide a link to AJAX being recommended as a panacea.

Re:Really... (1)

houseofzeus (836938) | more than 8 years ago | (#14191853)

'Prove you are not engaging in your own hyperbole. Name ten web applications that are using AJAX that don't need to.'

You made my task easier by using the word 'need'. Since we had the following kinds of services before the tools of AJAX became widely used (specifically the first A), we can assume that the following don't *need* AJAX:

  • Gmail
  • Google Maps
  • Digg

I'll leave it at that, because your response (if there is one) will no doubt be 'oh, but I meant name ten applications that use AJAX where it doesn't improve the site'. While that is a better question, it's not what you asked. Listing ten sites that use AJAX in a manner that makes the site worse than it would be otherwise is admittedly more difficult (but definitely not hard).

The main reason this is harder is that, much like the disgusting overuse of Flash (and other fads) that went before it, a site that uses AJAX badly turns the user off almost immediately. They certainly don't make my bookmarks list.

'Please provide a link to AJAX being recommended as a panacea.'

Are we even reading the same thread?

Re:Really... (1)

houseofzeus (836938) | more than 8 years ago | (#14191861)

Also, as you can see, I'm a pro with the preview button and the list tags ;).

Re:Really... (1)

Eustace Tilley (23991) | more than 8 years ago | (#14192529)

You made my task easier by using the word 'need'. Since we had the following kinds of services before the tools of AJAX became widely used (specifically the first A), we can assume that the following don't *need* AJAX:

        * Gmail
        * Google Maps
        * Digg


So it is reasonable to conclude from this list that the rule "sit down and decide whether your application needs AJAX" would have forbidden AJAX to Gmail, Google Maps, and Digg.

What a lossy rule.

Re:Really... (1)

houseofzeus (836938) | more than 8 years ago | (#14192961)

Not really. Of the three examples, Google Maps is the only one that uses AJAX in a manner that provides major benefits over a traditional implementation.

My point remains that developers should be asking themselves why they need to use AJAX and whether it will provide any real benefit. Often the answer is not only no, but also that an AJAX implementation might actually detract from the application (be it in performance or user experience).

As a developer, arguing that this question isn't important because, out of the millions of sites on the web, three happen to use it in a reasonable manner is letting both yourself and the customer down.

Re:Really... (1)

masklinn (823351) | more than 8 years ago | (#14194291)

Or they should just create the application in the "legacy" way, then check whether there is any area of the current application that could use AJAX (or another advanced JavaScript technique) to improve usability, comfort or response time (user-wise), and layer it on top of the existing and working application.

This is the principle behind the Progressive Enhancement philosophy, and it allows your application to work fine in any and every context, be it your local nerd's text browser, your mom's Internet Explorer, your own Safapera 0.61.12.5 alpha or your gf's mobile phone (with its probabilistic implementation of JavaScript and its 120x200px display).

Some people will have smoother interfaces, one or two more shinies, but everyone will have something that works, and that works (mainly) the same way.

Re:Really... (4, Insightful)

kingkade (584184) | more than 8 years ago | (#14197991)

I think we're all saying the same thing here: try to see if AJAX (ugh, I feel dirty) makes sense in your webapp. It's hard to tell sometimes (see below).

Of the three examples, Google Maps is the only one that uses AJAX in a manner that provides major benefits over a traditional implementation

Now hold on. Gmail is another perfect example of how AJAX can help. Say I have an inbox with 50 emails and I want to trash, archive, or otherwise do something to one message/thread that would remove it from my view; the rest of the inbox (not to mention all the other peripheral UI elements) shouldn't change. In the old way we'd re-request all this tremendous information (say ~95% of the UI) that didn't even change! And this is even more obvious when you remember that each seemingly tiny, simple piece of the UI (say a message line) may use a bunch of HTML (not to mention scripts, CSS, etc.) behind the scenes to make it look/feel a certain way. In the AJAX version we'd just add some scripting to remove that DOM element from the page and send a simple, say 0.5 KB, HTTP message like "[...]deleteMsg.do?msgid=x[...]" to the server. You still have to suffer the TCP round-trip latency (but less so), but the difference can be dramatic, no?
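A hedged sketch of that flow (the deleteMsg.do URL is the parent's example; the element id scheme is my assumption): drop the row from the DOM immediately and tell the server with a tiny request, instead of re-rendering the whole inbox:

    // Sketch: remove one message row client-side and notify the server with a small request.
    function deleteMessage(msgId) {
        var row = document.getElementById("msg-" + msgId);
        if (row) row.parentNode.removeChild(row);          // inbox UI updates instantly

        var xhr = new XMLHttpRequest();
        xhr.open("GET", "deleteMsg.do?msgid=" + encodeURIComponent(msgId), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status !== 200) {
                alert("Delete failed; please reload.");    // crude recovery, good enough for the sketch
            }
        };
        xhr.send(null);
    }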

Re:Really... (1)

houseofzeus (836938) | more than 8 years ago | (#14198081)

It seems to vary. Often Gmail will still find the need to display its custom red loading bar in the corner of the screen, and you have to pause and wait for it.

Sure, this wait time is nowhere near as long as, say, Hotmail's traditionally was, but on most connections it is quite possible that Gmail might actually be as slow as or slower than a 'clean' old traditional implementation.

Importantly, the keyword in my statement was 'major'. It's definitely arguable that Gmail provides benefits over the traditional webmail clients. Google Maps, however, is the example that shines, mainly because in the past navigating such services when they used 'normal' CGI scripts was a pain at the best of times.

Re:Really... (2, Insightful)

petard (117521) | more than 8 years ago | (#14199652)

Not really. Of the three examples, Google Maps is the only one that uses AJAX in a manner that provides major benefits over a traditional implementation.


Have you used gmail? Have you used any other web mail applications? I've used gmail, hotmail (the old, pre-ajax one), yahoo mail (the old, pre-ajax one), hushmail, squirrel mail, open webmail and outlook web access all fairly heavily at one time or another. gmail and OWA really stand out from that crowd. gmail uses AJAX, and OWA is conceptually very similar.

I haven't used digg enough to comment.

Your point (that developers need to ask themselves whether technology X provides any real benefit to their application) is sound, but I haven't noticed applications where, as you claim, AJAX detracts from the experience compared to other techniques. Could you point to a few examples where the AJAX implementation of an app sucks compared to the non-AJAX implementation?

So you're saying you're against "design"? (1)

smagruder (207953) | more than 8 years ago | (#14195161)

If you're suggesting that programmers just use Ajax "because they feel like it" without examining the 1) value and 2) repercussions, then this programmer will have to absolutely disagree with that approach. On the other hand, web programmers should certainly explore possibilities with Ajax and not reject it out of hand.

Glass House (1)

Eustace Tilley (23991) | more than 8 years ago | (#14191684)

... cash in on the latest fad

Speaking of fads, I note that as of 02:06 EST, www.houseofzeus.com fails to validate as XHTML 1.0 Transitional.

Re:Glass House (1)

houseofzeus (836938) | more than 8 years ago | (#14191802)

Exactly. I didn't cash in on the 'zomg i can put this validation clicky thingo on my page and be so coolzzz!111' fad. What's your point? :-P

Re:Glass House (1)

Eustace Tilley (23991) | more than 8 years ago | (#14192498)

What's the point of serving invalid XHTML? Why should a site falsely claim to be XHTML? What is the motivation for using bandwidth for an erroneous DOCTYPE statement instead of simply omitting it, if it is not simply a faddish impulse?

Re:Glass House (1)

houseofzeus (836938) | more than 8 years ago | (#14193011)

Have a look at the bottom of the page; neither the blog software nor the template in use was created by me. I haven't changed the doctype from its original state and I don't intend to.

If bad doctypes and abuse of standards really gets to you that much I suggest you stop using computers now.

Ummm, cool it. (1)

smagruder (207953) | more than 8 years ago | (#14195199)

Even those of us who strive generally to achieve standards compliance will often put site features ahead of compliance, leaving compliance as a leftover task. Compliance is not a simple thing. Anyone who has been a web programmer for any length of time knows the difficulty of balancing features versus standards compliance, not to mention the ease of making little mistakes that fail the standards testing tools.

Re:Ummm, cool it. (1)

Eustace Tilley (23991) | more than 8 years ago | (#14198169)

Even those of us who strive generally to achieve standards compliance will often put site features ahead of compliance
What site feature is enabled by failing to escape ampersands?

Re:Ummm, cool it. (1)

houseofzeus (836938) | more than 8 years ago | (#14198594)

What site feature is gained by escaping them? Run the validator over Digg or Slashdot and you will see the same errors (because of Slashdot's referrer block on w3c.org you will need to use your browser's save-as-HTML option).

While we're at it, you mightn't have noticed, but the ampersands weren't what blocked the page from validating. Under XHTML 1.0 Transitional that only generates a warning.

The error is because my chosen blogging software doesn't shove an alt tag on when it converts [img][/img] to proper HTML. Personally, if I have to put up with a page that doesn't quite validate for the convenience of not having to worry about plain HTML when posting, I'm happy to do so.

Re:Ummm, cool it. (1)

Bogtha (906264) | more than 8 years ago | (#14201039)

What site feature is gained by escaping them?

Reliability. Sure, you can get away with ?foo=x&bar=y nine times out of ten, but then you use something like ?foo=x&bar=y&trade=z and links start breaking in some software but not others.

Escaping isn't there just for the hell of it. It has a purpose. When you ignore that purpose and hope all software that will encounter your code will compensate for your mistake, you are bound to have things break some of the time.

While we're at it, you mightn't have noticed, but the ampersands weren't what blocks the page validating. Under XHTML 1.0 Transitional that only generates a warning.

The rules for escaping ampersands are uniform across all variants of XHTML. Not sure why you think the rules vary between Transitional and Strict; the differences between those two document types are structural in nature, not syntactical.

The error is because my chosen blogging software doesn't shove an alt tag on

There's no such thing as an "alt tag". You mean alt attribute. Tags and attributes are completely different things.

Re:Ummm, cool it. (1)

houseofzeus (836938) | more than 8 years ago | (#14205909)

Reliability. Sure, you can get away with ?foo=x&bar=y nine times out of ten, but then you use something like ?foo=x&bar=y&trade=z and links start breaking in some software but not others.

They weren't in a URL. They were in body text, hence no breaking links.

'The rules for escaping ampersands are uniform across all variants of XHTML. Not sure why you think the rules vary between Transitional and Strict; the differences between those two document types are structural in nature, not syntactical.'

Something to do with that pesky W3C validator deciding they were only worthy of a warning.

Re:Ummm, cool it. (1)

houseofzeus (836938) | more than 8 years ago | (#14206002)

The rules for escaping ampersands are uniform across all variants of XHTML. Not sure why you think the rules vary between Transitional and Strict; the differences between those two document types are structural in nature, not syntactical.

For interest's sake, I went and shoved an unescaped ampersand back into the body text of the page and ran it through the validator again to check I wasn't seeing things.

http://www.houseofzeus.com/filez/fails.jpg [houseofzeus.com]

As it stands I never said anything to suggest the XHTML variants handle it differently. I just stated that MY page is marked as transitional and that it only generates a warning when ampersands aren't escaped in body text.

How is any of that NOT true?

Re:Ummm, cool it. (1)

Bogtha (906264) | more than 8 years ago | (#14206271)

As it stands I never said anything to suggest the XHTML variants handle it differently.

When you said that "Under XHTML 1.0 Transitional that only generates a warning." you seemed to be implying that it being just a warning had something to do with it being XHTML Transitional. I guess I read too much into that sentence.

I just stated that MY page is marked as transitional and that it only generates a warning when ampersands aren't escaped in body text.

How is any of that NOT true?

While it's true that the validator only generates a warning, it misses the point somewhat. The validator doesn't define what is valid and what isn't, it merely tries to determine whether a particular document is valid or not. The fact that the validator treats it as a warning is pretty irrelevant. It's an error, [w3.org] and the validator should have reported it to you as an error.

You can consider this to be a bug in the validator. There's a thread [w3.org] about it on the www-validator W3C mailing list, and a bug report [w3.org] in their database that has been fixed in development versions. Remember, validators are just tools, they don't have the final say in deciding whether something is an error or not.

Re:Ummm, cool it. (1)

houseofzeus (836938) | more than 8 years ago | (#14206397)

Fair enough, but the fact of the matter is the validator has been doing that for a bloody long time now. If the W3C can't work out how to validate it properly I doubt I have anything to worry about re. rendering in various user-agents for quite some time :-P.

Re:Ummm, cool it. (1)

shobadobs (264600) | more than 8 years ago | (#14199436)

What site feature is enabled by failing to escape ampersands?

What personality feature is enabled by failing to escape uppity snidery?

I was going to help you... (1)

afabbro (33948) | more than 8 years ago | (#14190326)

...but then you had to go and say "Web 2.0" and I dissolved [theregister.co.uk] into fits [theregister.co.uk] of laughter [wordpress.com] .

Re:I was going to help you... (0)

Anonymous Coward | more than 8 years ago | (#14191322)

Well, I use the word Web 2.0 in jest. I personally hate it, but if it helps us all get paid more, then I think I can tolerate it for a while...

The really weird thing is, my roomie is A. Fabbro.

~Squink

Every AJAX call is a request (2, Interesting)

natmsincome.com (528791) | more than 8 years ago | (#14190445)

Just remember that. It's not half a request; it's a full request. The easiest way to think about it is to imagine that, instead of using AJAX, you reload the page.

Now, that isn't quite true, as you only reload part of the page.

The common example is Google Suggest. Instead of a list of searches, let's try a list of products. Suppose you use AJAX against a database of 1000 products and you have, say, 5 users using AJAX hitting the database. If you just did a select each time, it would be really bad: at least 5 database hits per second. In the old environment it would have been 1 hit every second (assuming it took 5 seconds to fill in the form). So in this case you've increased your database load by more than 5 times (and even more so if you used LIKE instead of = in the SQL).

To get around this you have a number of options. Here are some of the ideas I've seen:
1. Add columns to the product table with the first 1, 2, 3 and 4 characters, and index those columns. This means you can use = instead of LIKE, which is faster.
2. Hard-code the products in an array.
3. Use hash files. E.g. create 10 hash files for different lengths, then check the length of the input, load the correct hash file and look up the key.

The basic concept is to do some kind of work up front to decrease the overhead of each call, because you'll end up having a lot more requests. (A client-side sketch of option 2 follows below.)
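For a list of around 1000 short product names, option 2 can even mean zero requests per keystroke: write the list into the page once and filter it in the browser. A sketch, with the embedded array standing in for whatever the server would generate:

    // Sketch of option 2: the product list is written into the page once; filtering is pure client-side.
    var products = ["anvil", "hammer", "hammock", "screwdriver" /* ... generated server-side ... */];

    function matchProducts(prefix) {
        prefix = prefix.toLowerCase();
        var hits = [];
        for (var i = 0; i < products.length && hits.length < 10; i++) {
            if (products[i].indexOf(prefix) === 0) hits.push(products[i]);  // prefix match, cap at 10
        }
        return hits;   // render into the dropdown however you like; no server round trip at all
    }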

Re:Every AJAX call is a request (0)

Anonymous Coward | more than 8 years ago | (#14191549)

If you use PostgreSQL, you can set the text keys to be built into a DFA tree, which is actually faster than your 1-4 char index columns (for large numbers of keys).

Re:Every AJAX call is a request (1)

samjam (256347) | more than 8 years ago | (#14192202)

Even in MySQL, LIKE without a wildcard prefix, e.g.
    where name like 'ab%'

still properly uses the index and is efficient; and if it didn't, you could do
    where name > 'ab ' and name < 'ab}'
or something.

Sam

Re:Every AJAX call is a request (1)

$1uck (710826) | more than 8 years ago | (#14192900)

Isn't hardcoding arrays, using flat files for product lists, etc., a *huge* step backwards?
I'm thinking when the server starts up maybe initialize some arrays or temp files that get updated when the database gets updated (though this seems to violate DRY and could cause concurrency issues).

Re:Every AJAX call is a request (1)

commanderfoxtrot (115784) | more than 8 years ago | (#14193978)

Ouch.

What you are suggesting is not good for the database at all!

A decent database (even MySQL) can do LIKE matches on indexes.

The trick, as others have said, is to use caching. Lots of caching.

You may even want to put a Squid (or Apache 2.2) in front of your web server to catch these little requests and keep them away from your database.

Decide what you really need (5, Informative)

b4k3d b34nz (900066) | more than 8 years ago | (#14190466)

It's going to be tempting to use a lot of AJAX, especially if it sounds fun. In reality, though, you should be considering user experience, since this is a community site. Don't use an AJAX call where someone might expect a page refresh.

With that said, it's best to try to cache frequently accessed items in memory (regardless of whether you're doing AJAX calls). ASP.NET does a good job of this--I don't know what you're programming in, but definitely find out how to cache so that you don't have to read the database all the time. This reduced our database server load from 55% to 45% upon implementation (it's separate from the web server).

To specifically answer your question: the speed of AJAX is mostly perceived. Yes, you'll reduce calls, but at the sacrifice of having to code things twice: once for users with JS, once for those without. Use it in places where it's senseless to reload an entire page, for example opening a nested menu. Searches that aren't done by keyword are good as well. As has been said above, delay a server request until the user is done typing so that you can reduce calls. Remember, it's still a hit on your server; it just doesn't have to fetch all the rest of the crap on the page.

To reduce bandwidth, use JSON instead of XML, and only pass the headers you need into the AJAX call. To reduce server strain, cache frequently accessed database calls/results. Also, other non-AJAX JavaScript can help reduce calls, such as switching between "tabs" with some display:none action instead of reloading the page.
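The tab trick needs no XMLHttpRequest at all; both panels arrive with the initial page and are toggled locally. A small sketch, with the panel ids invented for illustration:

    // Sketch: switch "tabs" that were all delivered in the first page load.
    function showTab(name) {
        var tabs = ["overview", "details"];          // assumed panel ids
        for (var i = 0; i < tabs.length; i++) {
            document.getElementById("tab-" + tabs[i]).style.display =
                (tabs[i] === name) ? "" : "none";    // hide everything except the chosen panel
        }
    }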

The answer is not gratuitous AJAX; the answer is thinking through how people will most commonly use your site, and making those parts easiest (so users don't have to redo things and thereby waste your server capacity/bandwidth). Take things that shouldn't have to refresh the page and make them work using JavaScript, AJAX or not. Depending on how crappily things are coded now, you should see between a 15 and 35% reduction in server load and database calls.

Re:Decide what you really need (1)

SanityInAnarchy (655584) | more than 8 years ago | (#14191156)

Don't use an AJAX call where someone might expect a page refresh.

Why not?

I expect to fail a course and die a smelly virgin, but if I pass the final and get laid, I won't complain.

With that said, it's best to try to cache frequently accessed items in memory (regardless of whether you're doing AJAX calls). ASP.NET does a good job of this

Sorry, I don't think that's where you should be focussing. Cache frequently accessed items in client memory. In the javascript. Oh, and he's probably not using ASP.NET if he's using MySQL.

Yes, you'll reduce calls, but at the sacrifice of having to code things twice: once for users with JS, once for those without.

Or maybe drop support for those without? Or maybe run the JS on the server side? Or maybe find an AJAX library that's already coded the things twice that you actually want to code twice.

To reduce bandwidth, use JSON instead of XML

While I think JSON is awesome, I think XML is faster overall. If you're using mod_gzip, the bandwidth difference between XML and JSON is negligible, and I think that since the browser parses XML for you, it might be faster than using javascript code to create/interpret JSON.

But then, I'd test that before I actually did the application.

Also, other non-AJAX javascript can help reduce calls, such as switching between "tabs" with some display:none action instead of reloading a page.

Duh. But why display:none? Why not just '' or something similar? But again, this is about caching in the browser, not the server. You'll have to load the page the first time.

I think eventually we'll either go back to Java, invent something new, or invent a good AJAX framework so that people don't whine about "doing things twice".

Re:Decide what you really need (1)

dolmen.fr (583400) | more than 8 years ago | (#14192374)

I think since the browser parses XML for you, it might be faster than using javascript code to create/interpret JSON.

Since you think that the browser parses XML for you, it would be coherent to think that the browser parses and interprets JavaScript for you.

As you will finally process the retrieved data in the JavaScript interpreter, the only comparison that applies is:
  • JSON: let the JavaScript parser extract the data into JavaScript data
  • XML:
    1. tell the XML parser to load the XML document and represent it in memory with the DOM
    2. copy data from the DOM into JavaScript data


So, what do you think is the most efficient?

Re:Decide what you really need (1)

Bogtha (906264) | more than 8 years ago | (#14194141)

Why does everybody advocating JSON skip a few steps when describing how it is processed?

JSON is data supplied in the form of Javascript instructions. Using it goes like this:

  1. Retrieve the resource from the server
  2. Parse the Javascript into a set of instructions that can be executed
  3. Execute those instructions (which loads the data into Javascript objects)
  4. Execute the function that uses that data

You neglected to mention the fact that JSON information doesn't magically find its way into your code, it has to be parsed and executed first. It's not a data format as far as the Javascript interpreter is concerned, it's an additional script to run.

XML has the advantage of not needing to be executed after being parsed. Using it goes like this:

  1. Retrieve the resource from the server
  2. Parse the XML into a DOM tree
  3. Execute the function that accesses that DOM tree

It's not clear at all that one is faster than the other; in fact it really depends on how fast the DOM access is, which varies wildly between browsers and depends on exactly what it is you are doing.
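Side by side, the two paths described above look roughly like this. The payload shapes are invented for illustration, and the JSON branch uses eval(), which is how JSON was typically consumed before browsers shipped native parsers:

    // Sketch: the same payload consumed as JSON (evaluated) and as XML (DOM walk).
    function handleJson(xhr) {
        // e.g. responseText is  {"users": ["alice", "bob"]}
        var data = eval("(" + xhr.responseText + ")");    // parse + execute in one step
        return data.users;
    }

    function handleXml(xhr) {
        // e.g. responseXML is  <users><user>alice</user><user>bob</user></users>
        var nodes = xhr.responseXML.getElementsByTagName("user");
        var users = [];
        for (var i = 0; i < nodes.length; i++) {
            users.push(nodes[i].firstChild.nodeValue);    // copy out of the DOM into plain JS data
        }
        return users;
    }

Only timing both in the browsers you actually target settles which is faster.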

Re:Decide what you really need (1)

masklinn (823351) | more than 8 years ago | (#14194334)

It's not clear at all that one is faster than the other; in fact it really depends on how fast the DOM access is, which varies wildly between browsers and depends on exactly what it is you are doing.

"Fast" and "DOM Access" should never belong to the same phrase. Ever. Not without a negation somewhere anyway.

Re:Decide what you really need (1)

Wardie (920532) | more than 8 years ago | (#14192776)

Don't use an AJAX call where someone might expect a page refresh.

Why not?

I expect to fail a course and die a smelly virgin, but if I pass the final and get laid, I won't complain.


How about the fact that it breaks the back/forward buttons and means you cannot add the page to favourites/bookmarks? Small issues for some, but it could be damn irritating for people who bookmark a page only to have the bookmark resolve back to the original page.

With that said, it's best to try to cache frequently accessed items in memory (regardless of whether you're doing AJAX calls). ASP.NET does a good job of this

Sorry, I don't think that's where you should be focussing. Cache frequently accessed items in client memory. In the javascript. Oh, and he's probably not using ASP.NET if he's using MySQL.


I think you missed his point. Caching would normally be used for application-wide data, to save a round trip to the database if a previous request has fulfilled a similar query. If you store data on the client, you've still got to get the data from somewhere, so it's useful to get it from a server-side cache.

Re:Decide what you really need (1)

lukewarmfusion (726141) | more than 8 years ago | (#14193019)

I think he really means that "that's what people expect" is not the same as "that's the most usable." If it were, we'd abandon a lot of UI improvements - tabbed browsing or multiple desktops, for a couple of examples. Sure, sometimes consistency translates to usability - but innovation is a way of saying that the status quo is not good enough.

A basic search function that can quickly and efficiently (the focus of this conversation) return relevant results without needing to refresh the browser can be a huge improvement over the current method.

In iTunes, you can search for a song, artist, album, etc. without submitting your entry - it searches automatically. Recently, I took that concept and added it to a web-based file server application. You can still submit your search for certain advanced criteria, but now you can also quickly search without waiting for a refresh. It's not amazingly efficient (I've got about 5 total users, very few of them concurrent). But it's an example of an improved interface. With proper indexing, caching, and some application limits, that feature could be workable for a much larger audience.

Right now, I consider this a way for us to experiment with improved user experience. Five years from now, we might wonder how we got by without these conveniences.

Re:Decide what you really need (1)

b4k3d b34nz (900066) | more than 8 years ago | (#14194339)

Sorry, I don't think that's where you should be focussing. Cache frequently accessed items in client memory. In the javascript.

That's quite possibly the stupidest idea I've heard today. How the heck do you expect anyone to get data in the first place? It's not going to magically appear in the client's cache. I sure hope you mean doing it both ways, if anything, although I don't see any benefit to caching anything on the client-side...if you're working with data, chances are good it's going to be entered into the DOM after you use it.

Or maybe drop support for those without?

This is a horrible idea. Repeat after me: this is a horrible idea. The only, only, ONLY exception is if you know exactly who's going to be using your application or web site--for example, in the case of an intranet app that's restricted to a small number of users. Don't give me any crap about what a small percentage of people don't have JavaScript enabled. It's still 10 million folks or more.

Duh. But why display:none? Why not just '' or something similar? But again, this is about caching in the browser, not the server. You'll have to load the page the first time.

Look, smart guy--the poster is obviously concerned about performance with his application. I gave him a suggestion. Don't assume that everybody knows everything you do about Javascript.

While I think JSON is awesome, I think XML is faster overall.

Either way is fine--it's mostly preference if you're using mod_gzip, like you said. However, some browsers parse an XML DOM very slowly. IE, Mozilla (older versions) and pretty much any Linux browser, for example.

I think eventually we'll either go back to Java...

I do have to wonder what you're thinking here...do you mean Java on the client side or the server side? If you mean client side, you should probably just pack up your things and go home. I agree with your other suggestion: decent AJAX libraries. I created a pretty thin one here at work that does what we need.

Maybe it was unclear, but my comments about caching were server-side based only. Really, the only hangups for AJAX speed are how long the server takes to process things, which can get intensive if you have huge queries with nasty joins or something of the like. That's where caching the results of the query help. It's much more efficient to store a commonly-used recordset in memory.

You have a lot of good comments, but it doesn't seem like you're using best practices.

Re:Decide what you really need (1)

elemental23 (322479) | more than 8 years ago | (#14195367)

This is a horrible idea. Repeat after me. This is a horrible idea. The only, only, ONLY exception is if you know exactly who's going to be using your application or web site--for example, in the case of an intranet app that's restricted to a small amount of users. Don't give me any crap about what a small percentage of people don't have Javascript enabled. It's still 10 million folks or more.

Likewise, don't give us any crap about 10 million people having JavaScript disabled unless you can give us some verifiable sources for that number.

I would never require JavaScript for a general-use web site -- this isn't really where AJAX is useful anyway -- but I think it's reasonable to require it for a public-facing web application/service as long as the requirements are spelled out clearly. Sure, you may lose a fraction of your potential audience in those who refuse to enable it, but this may be an acceptable tradeoff for the time spent writing, debugging, and supporting two different interfaces.

Re:Decide what you really need (1)

b4k3d b34nz (900066) | more than 8 years ago | (#14197014)

The following URL has some information from a counter service that seems to have aggregated their data: http://www.thecounter.com/stats/2005/September/javas.php [thecounter.com]. I admit this data is skewed, because the service is probably not tracking users by IP (which is flawed anyway). However, if they're getting that many people with JS disabled and they have a pretty small slice of the pie, then it's obvious there are plenty more. The percentage is probably pretty accurate in either case.

If you're not using AJAX in some sort of application, chances are pretty good you're using it the wrong way anyway. It looks like we're probably going to continue disagreeing, which is fine by me, but I believe in moving past 1997's "best viewed with so-and-so browser" banners or a page telling me to enable something I otherwise wouldn't have to. If that's an acceptable trade-off to you, fine. I prefer to keep it behind the scenes so it works for the user, JavaScript or not.

Re:Decide what you really need (1)

smagruder (207953) | more than 8 years ago | (#14195618)

Or maybe drop support for those without [JavaScript]?

Not advisable. That's like dropping support for Opera and Safari users. A significant subset of your users will have scripting turned off. If you care about getting the widest possible audience for your site, in most cases, you will not want to shortchange functionality for anyone (perhaps except those using very old browser versions), but instead just present things differently for those with JS on and those with it off.

Re:Decide what you really need (1)

jrumney (197329) | more than 8 years ago | (#14198519)

That's like dropping support for Opera and Safari users.

Is that supposed to be an argument for, or against?

In the real world, who cares about Opera and Safari users? The real world where until about 3 months ago, few people even cared about Firefox users.

Re:Decide what you really need (1)

shobadobs (264600) | more than 8 years ago | (#14199477)

While I think JSON is awesome, I think XML is faster overall. If you're using mod_gzip, the bandwidth difference between XML and JSON is negligible, and I think that since the browser parses XML for you, it might be faster than using javascript code to create/interpret JSON.

The only place where speed matters like this is the server. Client-side, the difference is a one-time occurrence, and it's a matter of microseconds. Compared to download time, processing time is negligible. Compared to the speed at which the user interacts with the software, processing time is zero, negative if you factor in measurement error :-)

In general, it makes sense to push as much work onto the client side as possible. It's like getting free distributed computing (possibly at the cost of programmer time).

Coded reasonably, most likely less load (4, Interesting)

uradu (10768) | more than 8 years ago | (#14190505)

The trick in minimizing server traffic is to come up with the right remote data granularity--i.e. don't fetch too much or too little data on each trip. At one extreme you'd fetch essentially your entire database in a single call and keep it around on the client, wasting both its memory and the bandwidth to get data that will mostly go unused. At the other extreme you simulate traditional APIs, which typically get you what you want in very piecemeal fashion, requiring one function call to get this bit of data, which is required by the next function, which in turn returns a struct required by a third function, and so on until you finally have what you really want.

The happy medium is somewhere in between. Come up with functions that return just the right amount of data, including sufficient contextual data to not require another call. For a contacts-type app you would provide functions to read and write an entire user record at a time, as well as a function to obtain a list of users with all the required columns to display them in a single call. You will generally find it more efficient, in bandwidth and client-side processing, to tailor the remote functions towards the UI that needs them, fetching or uploading just the required data for a particular application screen or view. Once you have a decent remote function architecture you will no doubt have considerably less server traffic, since practically only raw data makes the trip anymore.

could be less load (2, Insightful)

josepha48 (13953) | more than 8 years ago | (#14190581)

You could experience less of a load, as you will not be re-serving the entire page each time you "refresh".

An example is an application that displays a list of cities in a state after a user selects the state. (1) If you send ALL the data to the client at once, it's a large file transfer and takes a long time. This produces a heavy load all at once. (2) If you coded it to refresh the whole page after the selection, then it is a smaller initial load, but on the 'refresh' you are sending the whole page plus the new data. (3) If you use AJAX, you only have to send the initial small request (not the heavy load) and then the second request for the part of the page that needs updating.

Between 2 & 3, #3 is better because it reduces the second hit on the server and network, as it does not have to resend parts of the page that have already been sent. Between 1 and 3, you actually will have more hits on the server, but #3 will result in less data being sent across the network.

The biggest problem with #2 is that sometimes refreshing a whole page ( onchange ) confuses users. Yes, this may sound weird, but I have had people tell me this.

The biggest problem with #3 is that if the server request fails you must code for it, and if you don't, the user may not know what happened. Also, how do you handle a retry on something like this?
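A hedged sketch of handling that failure case in #3: show the user something and retry a bounded number of times before giving up. The /cities URL, the returned <option> markup and the element ids are all assumptions:

    // Sketch: fetch the city list for a state, with a visible error state and a bounded retry.
    function loadCities(state, attempt) {
        attempt = attempt || 1;
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/cities?state=" + encodeURIComponent(state), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4) return;
            if (xhr.status === 200) {
                document.getElementById("cities").innerHTML = xhr.responseText;  // assumed: server returns <option> markup
            } else if (attempt < 3) {
                setTimeout(function () { loadCities(state, attempt + 1); }, 1000 * attempt);  // back off and retry
            } else {
                document.getElementById("cities-error").innerHTML = "Could not load cities; please try again.";
            }
        };
        xhr.send(null);
    }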

Since it's a webapp... (4, Interesting)

davecb (6526) | more than 8 years ago | (#14190813)

You can do a surprising amount with nothing but the response times of each kind of transaction.

Make some simple test scripts using something like wget, and capture the response time with PasTmon or ethereal-and-a-script, one test for each transaction type, while at the same time measuring cpu, memory and disk IO/s.

At loads that wget or a human user will generate, 1/response time equals the load at 100% utilization of the application (not 100% cpu!), so if the average RT is 0.10 seconds, 100% utilization will happen at 10 requests per second (TPS).

For each transaction type, compute the CPU, Memory, Disk I/Os and network I/Os for 100% application utilization. That becomes the data for your sizing spreadsheet.

If you stay below 100% load when doing your planning, you'll not get into ranges where your performance will dive into the toilet (:-))

--dave
This is from a longer talk for TLUG next spring

Re:Since it's a webapp... (1)

kill-1 (36256) | more than 8 years ago | (#14193210)

At loads that wget or a human user will generate, 1/response time equals the load at 100% utilization of the application (not 100% cpu!), so if the average RT is 0.10 seconds, 100% utilization will happen at 10 requests per second (TPS).

That's just wrong. You mixed up bandwidth and latency.

Re:Since it's a webapp... (1)

davecb (6526) | more than 8 years ago | (#14230106)

It's all response time (RT). Latency (L) is the part of RT before the first byte shows up back at the client, whereas bandwidth is bytes/(RT-L).

This is from Raj Jain's "The Art of Computer Systems Performance Analysis", Chapter 33 (Operational Laws).

--dave

Webapp Benchmarking (1)

dharvey (936685) | more than 8 years ago | (#14197219)

Hi Dave - tried to find a way to contact you outside of this, but have been unsuccessful to date - apologies all around if I'm using the wrong communication channel...

Your post confirmed a hunch I've had for a while. I've been trying to figure out a way to measure CPU, I/O, RAM and other resources on our web servers to get reasonable application benchmarking data. I've been told it's next to impossible, and I've had the hardest time finding any info on this type of benchmarking on the net (beyond gut feelings, and vmstat and top, which only provide performance snapshots). Yet I'm a strong believer that "you can't manage something you don't measure" (quoted from somewhere...), and you seem to have found concrete ways to do just that. Any chance you could share some of your references or resources on this server/app benchmarking stuff? I'd be very grateful for any insight.

Cheers,
Dax (dharvey@comminit.com)

It's a two-way street (4, Interesting)

photon317 (208409) | more than 8 years ago | (#14190954)


I'm in the latter stages now of my first serious professional project using AJAX-style methods. In my experience so far, it can go either way in terms of server load versus a traditional page-by-page design. It all depends on exactly what you do with it.

For example, autocompletions definitely raise server load as compared to a search field with no autocompletion. Using a proper autocomplete widget with proper timeout support (like the Prototype/Scriptaculous stuff) is a smart thing to do - I've seen home-rolled designs that re-checked the autocomplete on every keystroke, which can bombard a server under the hands of a fast typist and a long search string. But even with a good autocomplete widget, the load will go up compared to not having it. That's the trade-off. You've added new functionality and convenience for the user, and it comes at a cost. Many AJAX enhancement techniques will raise server load in this manner, but generally you get something good in return. If the load gets too bad, you may have to reconsider what's more important to you - some of those new features, or the cost of buying bigger hardware to support them.

For example, autocompletions definitely raise server load compared to a search field with no autocompletion. Using a proper autocomplete widget with proper timeout support (like the Prototype/Scriptaculous stuff) is a smart thing to do; I've seen home-rolled designs that re-checked the autocomplete on every keystroke, which can bombard a server at the hands of a fast typist with a long search string. But even with a good autocomplete widget, the load will go up compared to not having it. That's the trade-off. You've added new functionality and convenience for the user, and it comes at a cost. Many AJAX enhancement techniques will raise server load in this manner, but generally you get something good in return. If the load gets too bad, you may have to reconsider what's more important to you: some of those new features, or the cost of buying bigger hardware to support them.

On the flip side, proper dynamic loading of content can save you considerable processing time and bandwidth in many cases. Rather than loading 1,000 records to the screen in a big batch, or paging through them 20 at a time with full page reloads for every chunk, have AJAX code step through only the records a user is interested in without reloading the containing page: big win. Or perhaps your page contains 8 statistical graphs of realtime activity in PNG format (the PNGs are dynamically generated from database data on the server side). The data behind each graph might potentially update as often as every 15 seconds, but more normally goes several minutes without changing. You can code some AJAX-style scripting into the page to do a quick remote call every 15 seconds to query the database's timestamps to see if any PNGs would have changed since they were last loaded, and then replace only those that need updating, only when they need to be updated. Huge savings versus sucking raw data out of the database, processing it into a PNG graph, and sending that over the network every 15 seconds as the whole page refreshes just in case anything changed.
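A sketch of that last graph-polling idea, with the URLs, the "id timestamp" response format and the image ids all assumed for illustration:

    // Sketch: refresh only the graphs whose underlying data changed since the last poll.
    var lastUpdated = {};   // graph id -> last-seen timestamp

    function pollGraphs() {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/graphs/timestamps", true);       // assumed to return lines like "cpu 1133740800"
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4 || xhr.status !== 200) return;
            var lines = xhr.responseText.split("\n");
            for (var i = 0; i < lines.length; i++) {
                var parts = lines[i].split(" ");
                var id = parts[0], stamp = parts[1];
                if (stamp && stamp !== lastUpdated[id]) {
                    lastUpdated[id] = stamp;
                    // Cache-busting query string forces the browser to fetch the regenerated PNG.
                    document.getElementById("graph-" + id).src = "/graphs/" + id + ".png?t=" + stamp;
                }
            }
        };
        xhr.send(null);
    }

    setInterval(pollGraphs, 15000);   // every 15 seconds, as in the example above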

Re:It's a two-way street (1)

jrumney (197329) | more than 8 years ago | (#14198460)

The PNG example isn't really anything to do with AJAX; it is just a case of proper use of the "If-Modified-Since" header. But you're right in general: a well-designed AJAX app should cause less server load than a traditional page-by-page app with the same functionality. It is only the extra functionality that is not available without AJAX that should be adding any load.

A new tier? (1)

0kComputer (872064) | more than 8 years ago | (#14191482)

I'm not the first person to have this idea, but this brings up the question: do we need to define a new tier e.g. something like a presentation services tier? I think so. I'm not going to go into it because it's pretty self-explanatory; but honestly, I don't think AJAX is going away, and if you are going to make asynchronous calls in this fashion, you need hardware to back it up.

Re:A new tier? (1)

chill (34294) | more than 8 years ago | (#14194322)

I'm not the first person to have this idea, but this brings up the question: do we need to define a new tier e.g. something like a presentation services tier?

You mean like the Presentation Layer of the OSI model? The one below Application (second from the top)?

  -Charles

Separation of Services (2, Insightful)

jgardn (539054) | more than 8 years ago | (#14191822)

You're going to want to separate out your web server if you are going to face any real load. A good mod_perl implementation with a PostgreSQL (or even MySQL) backend can give you the kind of dynamic speed you need. Since the AJAX queries usually translate to a single call each, you can probably get much better performance than the older style where each page had to make several queries.

To serve up the web page itself, I think you should go with a static HTTP server if you can. Even if you can't, I would still use a separate server for the AJAX traffic, because the queries and the load characteristics are different. For starters, people can wait 2 seconds while a new page loads, but they will get antsy if they have to wait more than half a second for a trivial AJAX query to complete.

Cash cash and money (4, Informative)

tod_miller (792541) | more than 8 years ago | (#14191844)

Don't mention Web 2.0; it is utter stupidity.

People talked about RSS web-server load versus advertising revenue on Slashdot about two years ago, so I hardly think people are that stupid.

Also, if every page is (at best, which I doubt in your case) 50 KB, and each AJAX call is 500 bytes, decide whether that AJAX call saved an entire page refresh (on your site a page is probably 120 KB; with ads, customer pages can be 200 KB...).

So the initial download even at worst (or best) would be 50 KB and each call 500 bytes, so you can see the overhead per call is tiny (about 1% of a page load), and if that call SAVED a refresh then you have saved 49.5p, which is good for half a pint on Fridays between 12 and 2 at the Little Willow on Hidge Street.

good day.

Web 2.0 is... (1)

Bazman (4849) | more than 8 years ago | (#14191966)

Web 2.0 is made of ... 600 million unwanted opinions in realtime
        Paul Moore
Web 2.0 is made of ... emergent blook juice
        Ian Nisbet
Web 2.0 is made entirely of pretentious self serving morons.
        Max Irwin
Web 2.0 is made of ... Magic pixie dust (a.k.a. Tim O'Reilly's dandruff)
        Jeramey Crawford

  - and a load of other things, see http://www.theregister.co.uk/2005/11/11/web_two_point_naught_answers/ [theregister.co.uk]

I read those quotes (1)

tod_miller (792541) | more than 8 years ago | (#14192123)

And like 'podcasting', which has a lot of twats fighting over who thought of the grand scheme (while ordinary people were just making MP3s and letting people download them, without the need for twatish words or syndicated XML), people will fight over who gets all the attention for Web 2.0.

I like the one about pretentious self-serving morons and 600 million unwanted opinions.

Web0.002 is like the web, only with a lower signal:noise ratio.

Does anyone find the fucktarded way ingaydget puts every fucking keyword to a link back to its search engine in every story a bit uber-google-gay?

I personally don't like engaydget poisoning my results with their own brand of love-my-ads juice. I ad block that site so harshly, I feel sorry for them.

mod parent up (3, Interesting)

samjam (256347) | more than 8 years ago | (#14192220)

It's those same idiots who spend all their time talking about how the web has failed to deliver on its promises, and in reality are just trying to figure out a way they can get all the money by patenting stuff folks have been doing for years.

Because they are so dumb, it all looks non-obvious to them; 1-click ordering is so dumb nobody bothered doing it, but hey, the customer (dolt) likes it, so as Amazon were the first senseless idiots to actually do it, they get to patent it!

Sam

Re:I read those quotes (1)

elemental23 (322479) | more than 8 years ago | (#14195420)

Does anyone find the fucktarded way ingaydget puts every fucking keyword to a link back to its search engine in every story a bit uber-google-gay?

Nothing beats a little intelligent, well thought out criticism.

Beware Accessibility Issues (2, Insightful)

Nurgled (63197) | more than 8 years ago | (#14192657)

This isn't directly related to your question, but it's something that most people experimenting with "AJAX" seem to be overlooking. It's too easy to fall into the trap of using XMLHttpRequest to do everything just because you can, but by doing that you are restricting yourself to the small set of browsers that actually support this stuff. That doesn't include many of the phone/PDA browsers that are becoming more common.

Also worth noting is that changing the DOM can confuse users of aural browsers or screen readers. In some cases this doesn't cause a major problem; for example, if you have a form page where choosing your country changes the content of the "select region" box that follows, the user will probably be progressing through the form in a logical order anyway, and so the change, just as in a visual browser, won't be noticeable. However, a comment form which submits the comment using XMLHttpRequest and then plonks some stuff into the DOM will probably not translate too well to non-visual rendering, as the user would have to backtrack to "see" the change.

Of course, depending on the application this may not matter. Google Maps doesn't need to worry about non-visual browsers because maps are inherently visual (though it doesn't actually use AJAX anyway!); it would be useful on a PDA, however. I'm not saying "avoid AJAX at all costs!", but please do bear these issues in mind when deciding where best to employ it. Most of the time it really isn't necessary.

Re:Beware Accessibility Issues (0)

Anonymous Coward | more than 8 years ago | (#14194666)

Mod parent up, bitches.

Smart Use of Client Side is key (3, Insightful)

Pascarello (909061) | more than 8 years ago | (#14193125)

There are a lot of good points posted in here. Caching on the client and on the server are two big things for a good application that is using XHR. A good database design is also key if you want to avoid using "LIKE", which slows down the search. In Ajax In Action [manning.com], as discussed on Slashdot here [slashdot.org], the chapter 10 project shows how to limit postbacks from an auto-suggest by using the client side efficiently. The basic idea is to examine the results returned: if the count is under a certain number, JavaScript regular expressions trim down the dataset instead of hitting the server again. There is also a limit on the number of results returned, which speeds up response time.
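
A rough sketch of that idea (not the actual Ajax In Action code); the LOCAL_FILTER_LIMIT threshold and the fetchFromServer/render callbacks are placeholders:

    var LOCAL_FILTER_LIMIT = 30;  // made-up cut-off for "small enough to filter locally"
    var cachedPrefix = null;      // the query the cached results were fetched for
    var cachedResults = [];       // suggestions the server returned for that query

    function suggest(query, fetchFromServer, render) {
        if (cachedPrefix !== null &&
                query.indexOf(cachedPrefix) === 0 &&
                cachedResults.length <= LOCAL_FILTER_LIMIT) {
            // The user only typed more letters and the cached set is small:
            // narrow it down in the browser instead of hitting the server again.
            var escaped = query.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
            var re = new RegExp("^" + escaped, "i");
            var matches = [];
            for (var i = 0; i < cachedResults.length; i++) {
                if (re.test(cachedResults[i])) { matches.push(cachedResults[i]); }
            }
            render(matches);
        } else {
            fetchFromServer(query, function (results) {  // one XHR, results remembered
                cachedPrefix = query;
                cachedResults = results;
                render(results);
            });
        }
    }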

One thing I cannot get through people's minds enough when I do my talks is that Ajax is not going to be a "client-based app" on the web. The main reason is going to be network traffic getting in the way of your request. Imagine a dial up user in India with your server sitting in the United States. The request is going to have to travel to the other side of the world and back with the slow speeds of dial-up. Testing on your localhost is going to look great until you end up on an outdated shared server hosting multiple applications under a full network load. Yes, we are talking about small requests pinging the server, but 1,000 users each typing a 10-letter word could mean death if you designed the system badly!

I love XHR, cough Ajax, but you need to look at what you are dealing with. The design of an XHR app can kill you if you do not think it out fully.

My 2 cents,
Eric Pascarello
Coauthor of: Ajax In Action

Re:Smart Use of Client Side is key (1)

chill (34294) | more than 8 years ago | (#14194423)

Imagine a dial up user in India with your server sitting in the United States. The request is going to have to travel to the other side of the world and back with the slow speeds of dial-up.

Ummmm... no, it is not. UUCP is dead, thank God. The request is going to travel from the dial-up user's computer to his local ISP at the slow speed. After that, it gets routed onto the rest of the Internet, and last I checked, that isn't using dial-up connections. The last link back to the user is at dial-up speeds, but everything in between runs over the normal medium-to-high speed network.

Second point: not everyone has a global audience and has to worry about round-the-world latencies. I've done work for several financial institutions. They're heavily regulated and licensed to do business in limited geographic areas. Because of that, 95%+ of their traffic originates and terminates inside that geographic area.

Even if you do have to worry about a decent percentage of dial-up users, there is nothing to stop you from dynamically turning features on and off based on the detected connection speed of the client. Worst case, have "low bandwidth" and "high bandwidth" links.
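
One way to do that detection, sketched here with made-up values: time the download of a file of known size and pick a profile from the result (the "/probe.bin" URL, its 50 KB size, the 256 kbps cut-off and enableExtraFeatures() are all assumptions):

    function detectBandwidth(callback) {
        var sizeBytes = 50 * 1024;            // known size of the probe file
        var start = new Date().getTime();
        var xhr = new XMLHttpRequest();
        // cache-bust so the network is measured, not the browser cache
        xhr.open("GET", "/probe.bin?nocache=" + start, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var seconds = (new Date().getTime() - start) / 1000;
                var kbps = (sizeBytes * 8 / 1000) / seconds;  // kilobits per second
                callback(kbps > 256 ? "high" : "low");
            }
        };
        xhr.send(null);
    }

    detectBandwidth(function (profile) {
        if (profile === "high") {
            enableExtraFeatures();  // hypothetical: wire up autocompletion, live updates, etc.
        }
    });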

  -Charles

Re:Smart Use of Client Side is key (1)

Pascarello (909061) | more than 8 years ago | (#14194658)

I thought it was common knowledge that the slowness is only on the link to the ISP; I guess I should have stated that.

The whole point is you need to realize that there is a difference between dialup, DSL, Cable, and localhost that a lot of developers tend to forget. I have seen people ask why it was so fast in development and sluggish in production. That is why I brought up the point.

Some people think that slapping XHR onto a page is going to be a beam of light from the skies that ends all of their troubles. It turns out it can be even more trouble if they do not account for all of the minor issues.

Eric Pascarello

Re:Smart Use of Client Side is key (1)

Forbman (794277) | more than 8 years ago | (#14196488)

Worse than dial-up is a 56K frame relay WAN...

Re:Smart Use of Client Side is key (1)

Forbman (794277) | more than 8 years ago | (#14196452)

But... has anyone looked into how this exact same scenario affects ASP.NET, which essentially does the same thing AJAX does (server-side components that emit JavaScript to the client, which executes that JavaScript to communicate bidirectionally with the server, essentially "out of band" of the original HTML stream)?

Re:Smart Use of Client Side is key (1)

budgenator (254554) | more than 8 years ago | (#14202606)

There is a lot to be said for some crazy stuff, like serving from a 33 MHz 486; it kinda gives a feel for a normal server getting really hammered.

AJAX not the answer (3, Insightful)

Anonymous Coward | more than 8 years ago | (#14193492)

I am not quite sure what the question is, but I am fairly confident that AJAX is not the answer. IMO AJAX is a freak of nat..., computer science.
AJAX's reliance on ECMAScript seems like a shaky foundation at best. I imagine debugging ECMAScript can be quite clunky, and even if tool support might solve this problem at some point, there is no guarantee that browsers will interpret ECMAScript the same way; it seems like an embrace-and-extend waiting to happen.
I will not venture too far into the dynamically vs. statically typed language discussion, other than stating that personally I prefer strongly typed languages.
I get the impression AJAX is a quick-and-dirty solution to a problem that requires something more advanced.
It seems like AJAX is an attempt to overcome the shortcomings of thin clients using the technology with the widest market penetration, without considering whether that technology was the appropriate tool for the job.
I am afraid that we will have to live with AJAX for a long time. A tragedy similar to VHS's victory over Betamax, where an inferior technology beat a superior one.
I wonder if something like a next-generation X server browser plugin or a thick-client Java framework might not have been better suited for the job. I can't help but feel like AJAX is somehow trying to force a round peg through a square hole.

Use javascript instead of AJAX (1)

dirtyhippie (259852) | more than 8 years ago | (#14193946)

If you are worried about load, take the time to think about what really requires a round trip to the server, and what can more easily be done by putting some data in the web page and using it directly with JavaScript. My organization recently paid a lot of money to a certain overblown web design consulting firm (the one which botched a certain popular humor news site's website), and they wanted us to use AJAX to autocomplete all of our forms - without making the simple connection that we only have about 100-200 elements in each autosuggest, so we could just put that data on the page and scrape it out with JavaScript, avoiding both the extra server load and the lag time in the web browser. Make sure you see through the hype of AJAX, and don't use it when you don't need to.
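
A minimal sketch of that approach, assuming the server renders the candidate values into a hidden list with the made-up id "autosuggest-data":

    // Filter a short, server-rendered list entirely in the browser; no XMLHttpRequest at all.
    function localSuggest(prefix) {
        var holder = document.getElementById("autosuggest-data");  // hidden <ul> in the page
        var items = holder.getElementsByTagName("li");
        var matches = [];
        var p = prefix.toLowerCase();
        for (var i = 0; i < items.length; i++) {
            var text = items[i].firstChild ? items[i].firstChild.nodeValue : "";
            if (text.toLowerCase().indexOf(p) === 0) { matches.push(text); }
        }
        return matches;
    }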

If you use AJAX for everything, yes, your load will go through the roof. However, the good news is you probably don't need full blown AJAX to get most of the functionality you want. And, as other folks have pointed out, caching can make things a lot less painful where you really need it.

Good luck...

Autosuggest without AJAX (0)

Anonymous Coward | more than 8 years ago | (#14194275)

MediaMatters.org just launched a redesign in which they use some hairy JavaScript code that parses the DOM's visible and invisible fields to populate their autosuggest: http://mediamatters.org/issues_topics/media_personalities [mediamatters.org]

Interesting (1)

Paralos (936987) | more than 8 years ago | (#14204840)

Interesting. This isn't a 100% new idea about AJAX, but it's pretty darn close. I have only seen squeaks and squawks about it from a few people who are in the web server business, but not much else:

http://www.devx.com/asp/Article/29617 [devx.com]
http://www.port80software.com/200ok/archive/2005/04/29/393.aspx [port80software.com]

In my opinion, folks haven't done enough apps yet to know what is what, and the only people saying much are going to be either the Web 2.0 folks themselves (unlikely to own up to it quite yet) or the few folks like these, who sit at an interesting vantage point where they see lots of application and site efforts alongside knowledge of acceleration, caching, compression, etc. Has anybody else seen similar postings, thoughts, or papers? It does concern me a bit, given how I have seen polling abused in JS.

-M.J.F.

Re:Interesting (0)

Anonymous Coward | more than 8 years ago | (#14208293)

You might be interested in this OSS AJAX web framework.

ZK ( http://zk1.sourceforge.net/ [sourceforge.net] )

I would suggest reading the product overview first (http://zk1.sourceforge.net/wp/ZK-wp-prodovw.pdf [sourceforge.net]). It describes the issues around AJAX development and how to overcome them. It even supports modal dialogs in web applications, implemented using background threading and AJAX.

Best of all, there is no need to learn JavaScript or the DOM, and there is no client-side code: just pure Java. As simple as developing a desktop application.

It had been discussed (1)

beholder (35083) | more than 8 years ago | (#14214702)

http://www.mortbay.com/MB/log/gregw/?permalink=ScalingConnections.html [mortbay.com]

Basically, a server that used to handle X customers each making a request every 2-3 minutes will now see a multiple of that load, because the requests come in much more frequently.

You will need to tune the server for much higher throughput (more listeners/threads/workers) to deal with AJAX.