
Google To Host Ajax Libraries

CmdrTaco posted more than 6 years ago | from the cache-once-cache-often dept.

Programming

ruphus13 writes "So, hosting and managing a ton of Ajax libraries, even when working with MooTools, Dojo or Scriptaculous, can be quite cumbersome, especially as they get updated along with your code. In addition, many sites now use these libraries, and the end user has to download each library again for every site. Google will now provide hosted versions of these libraries, so sites can simply reference Google's hosted copy. From the article, 'The thing is, what if multiple sites are using Prototype 1.6? Because browsers cache files according to their URL, there is no way for your browser to realize that it is downloading the same file multiple times. And thus, if you visit 30 sites that use Prototype, then your browser will download prototype.js 30 times. Today, Google announced a partial solution to this problem that seems obvious in retrospect: Google is now offering the "Google Ajax Libraries API," which allows sites to download five well-known Ajax libraries (Dojo, Prototype, Scriptaculous, Mootools, and jQuery) from Google. This will only work if many sites decide to use Google's copies of the JavaScript libraries; if only one site does so, then there will be no real speed improvement. There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.' Will users adopt this, or is it easy enough to simply host an additional file?"
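For illustration, referencing Google's copy instead of a locally hosted one is roughly a one-line change; this is only a sketch, with the jQuery version chosen as an example (the loader-API form is discussed in the comments below):

    <script type="text/javascript"
            src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>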


285 comments

solution in search of a problem (4, Insightful)

nguy (1207026) | more than 6 years ago | (#23570327)

Compared to all the other crappy media that sites tend to have these days, centralizing distribution of a bunch of Javascript libraries makes almost no sense. I doubt it would even appreciably reduce your bandwidth costs.

Re:solution in search of a problem (5, Interesting)

causality (777677) | more than 6 years ago | (#23570467)

The "problem" already exists. It's "how can we collect more data about user's browsing habits?" You have to consider that Google is a for-profit business and hosting these files represents a bandwidth cost and a maintainence cost for them. They are unlikely to do this unless they believe that they can turn that into a profit, and the mechanism available to them is advertising revenue.

This is very similar to the purpose of the already-existing google-analytics.com. I block this site in my hosts file (among others) and I take other measures because I feel that if a corporation wants to take my data and profit from it, they first need to negotiate with me. Since Google is not going to do that, I refuse to contribute my data. To the folks who say "well how else are they supposed to make money" I say that I am not responsible for the success of someone else's business model, they are free to deny me access to their search engine if they so choose, and I would also point out that Google is not exactly struggling to turn a profit.

The "something of a privacy violation" mentioned in the summary seems to be the specific purpose.

Re:solution in search of a problem (2, Interesting)

Shakrai (717556) | more than 6 years ago | (#23570703)

This is very similar to the purpose of the already-existing google-analytics.com. I block this site in my hosts file (among others) and I take other measures because I feel that if a corporation wants to take my data and profit from it

Do you actually have to block it in your hosts file in order to effectively deny them information? I have it blacklisted in NoScript -- is that sufficient? I'd always thought it was called via Javascript.

Re:solution in search of a problem (1)

pjt33 (739471) | more than 6 years ago | (#23570823)

It is, but there's no reason not to block it in the hosts file as well. (You could easily have checked how it's called by viewing the source, as /. uses it. Curiously, the script is in the wrong place on this page: at the end of the head rather than the end of the body.)

Re:solution in search of a problem (4, Interesting)

causality (777677) | more than 6 years ago | (#23570889)

Do you actually have to block it in your hosts file in order to effectively deny them information? I have it blacklisted in NoScript -- is that sufficient? I'd always thought it was called via Javascript.


The file is indeed Javascript and it's called "urchin.js" (nice name eh?). Personally, I use the hosts file because I don't care to even have my IP address showing up in their access logs. This isn't necessarily because I think that would be a bad thing, but it's because I don't see what benefit there would be for me and, as others have mentioned, the additional DNS query and traffic that would take place could only slow down the rendering of a given Web page.

I also use NoScript, AdBlock, RefControl and others. RefControl is nice because the HTTP Referrer is another way that sites can track your browsing; before Google Analytics it was common for many pages to include a one-pixel graphic from a common third-party host for this reason. Just bear in mind that some sites (especially some shopping-cart systems) legitimately use the referrer so you may need to add those sites to RefControl's exception list in order to shop there, as the default is to populate the referrer with the site's own homepage no matter what the actual referrer would have been.

Re:solution in search of a problem (1)

Shakrai (717556) | more than 6 years ago | (#23571329)

The file is indeed Javascript and it's called "urchin.js" (nice name eh?). Personally, I use the hosts file because I don't care to even have my IP address showing up in their access logs

I guess that was my (badly phrased) question. Is blocking it in NoScript sufficient to stop Firefox from even downloading it (i.e., is it usually called via a script element as opposed to an embedded image or some other method?) or should the truly paranoid also include it in the hosts file?

Re:solution in search of a problem (4, Informative)

Tumbarumba (74816) | more than 6 years ago | (#23571409)

The file is indeed Javascript and it's called "urchin.js" (nice name eh?).
"urchin.js" is the old name for the script. Google encourages webmasters to upgrade to the new ga.js, which has pretty much the same functionality, but some other enhancements. Both those scripts feed data into the same reports. If you're interested, you can see what the scripts is doing by looking at http://www.google-analytics.com/ga.js [google-analytics.com] . It's pretty compact JavaScript, and I haven't gone through it to work out what it's doing. Personally, I use it on the website for my wife's children's shoe shop [lillifoot.co.uk] . From my point of view, the reports I get out of Google Analytics are excellent, and really help me optimise the website for keywords and navigation. I will admit though, that it is a little creepy about Google capturing the surfing habits of people in that way.

Re:solution in search of a problem (5, Insightful)

Daengbo (523424) | more than 6 years ago | (#23570707)

You have to consider that Google is a for-profit business and hosting these files represents a bandwidth cost and a maintenance cost for them.

The bandwidth cost should be small since Google uses these libraries already and the whole idea is to improve browser caching. The maintenance cost of hosting static content shouldn't be that high, either. I mean, really.

Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.

Re:solution in search of a problem (0)

Anonymous Coward | more than 6 years ago | (#23570969)

the whole idea is to improve browser caching
I guarantee that that is not the idea. Google needs to know which sites are visited by actual people, which pages are actually read (i.e. contain useful information,) how long people stay there and what they do there. Pagerank (inbound link count) has become almost meaningless and Google needs a more reliable popularity metric. The sites which haven't been suckered into using Google Analytics, Maps or one of the other "free" Google services now have another carrot dangled in front of them. This is also a more potent offering, because blocking google-analytics.com doesn't break websites, but blocking the Google-API script will remove all maps and now also break the page itself if it uses these AJAX libraries. I'll do it anyway. If that results in too much breakage, I'll recreate the files on a local server and redirect requests.

Re:solution in search of a problem (4, Insightful)

causality (777677) | more than 6 years ago | (#23570979)

Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.

Low cost != no cost. While you definitely have a point about their corporate image, I can't help but say that recognizing a company as a data-mining machine as you have accurately done, and then assuming (and that's what this is, an assumption) an altruistic motive when they take an action that has a strong data-mining component, is, well, a bit naive. I'm not saying that altruism could not be the case and that profit must be the sole explanation (that would also be an assumption); what I am saying is that given the lack of hard evidence, one of those is a lot more likely.

Re:solution in search of a problem (1)

Daengbo (523424) | more than 6 years ago | (#23571233)

I didn't assume that it was an altruistic move. I just said to look at their record and the facts. Don't assume that profit is the explanation. Google does a lot of data mining. They also do a lot of stuff which isn't related to that.

Re:solution in search of a problem (5, Insightful)

socsoc (1116769) | more than 6 years ago | (#23570955)

Google Analytics is invaluable for small business. AWStats and others cannot compete on ease of use and accuracy. By blocking the domain in your hosts file, you aren't sticking it to Google, you are hurting the Web sites that you visit. I'm employed by a small newspaper and we use Google Analytics in order to see where our readers are going and how we can improve the experience for you. Google already has this information through AdSense, or do you have that blocked too? Again you're hurting small business.

You may refuse to give them your data, but if I had the ability, Apache would refuse to give you my data until you eased off on the attitude.

Re:solution in search of a problem (0)

Anonymous Coward | more than 6 years ago | (#23571345)

Kudos. I'd mod you up, but I'm out of points.

Re:solution in search of a problem (3, Funny)

Kickersny.com (913902) | more than 6 years ago | (#23571363)

You may refuse to give them your data, but if I had the ability, Apache would refuse to give you my data until you eased off on the attitude.
Brilliant! He pokes you with a thumbtack and you retaliate by shooting yourself in the foot!

Re:solution in search of a problem (5, Insightful)

telbij (465356) | more than 6 years ago | (#23571003)

Is it really necessary to be so dramatic?

When you visit a website, the site owner is well within their rights to record that visit. To assert otherwise is an extremist view that needs popular and legislative buy-in before it can in any way be validated. The negotiation is between Google and website owners.

If you want to think of your HTTP requests as your data, then you'd probably best get off the Internet entirely. No one is ever going to pay you for it.

Also:

To the folks who say "well how else are they supposed to make money"


Red herring. No one says that. No one even thinks about that. Frankly there are far more important privacy concerns out there than the collection of HTTP data.

Re:solution in search of a problem (1)

Janos421 (1136335) | more than 6 years ago | (#23571395)

What's the purpose of a privacy policy, then?

Re:solution in search of a problem (1)

alien9 (890794) | more than 6 years ago | (#23571129)

Agreed. They are not doing this out of the goodness of their hearts.

Indeed, bandwidth is a weak argument, since you can run your JavaScript through JSMin and/or an obfuscator until it barely exceeds the size of the content's payload.

The intention behind this move is intriguing and suggests some clever tactics for measuring a site's usage. From another point of view, using the hosted library may give startups some googlability... in a startup context defined as 'we want to be googlabducted!!'

Serious businesses won't hand usage information to the champion of data miners at all. Me included.

Re:solution in search of a problem (5, Informative)

dalmaer (725759) | more than 6 years ago | (#23571195)

I understand that people like to jump onto privacy, but there are a couple of things to think about here:

  • We have a privacy policy that you can check out.
  • There isn't much information we can actually get here, because:
    a) The goal is to have JavaScript files cached regularly, so as you go to other sites the browser will read the library from the cache and never have to hit Google!
    b) If we can get browsers to work with the system they can likewise do more optimistic caching, which again means not having to go to Google.
    c) The referrer data is just from the page itself that loaded the JavaScript. If you think about it, if you included prototype.js anyway then we could get that information via the spider... but it isn't of interest.

We are a for-profit company, but we also want to make the Web a better, faster place, as that helps our business right there. The more people on the Web, the more people doing searches, and thus the better we can monetize. Hopefully as we continue to roll out services, we will continue to prove ourselves and keep the trust levels high with you, the developers.

Cheers,

Dion
Google Developer Programs
Ajaxian.com

Re:solution in search of a problem (0)

Anonymous Coward | more than 6 years ago | (#23571301)

the browser will read the library from the cache and never have to hit Google!
Yes, it will. Go check out how the library is loaded. You use the loader API and that is a script which has your API key in the URL. Bingo, no way to cache that across sites...

Re:solution in search of a problem (1)

mrrudge (1120279) | more than 6 years ago | (#23571493)

Please mod dalmaer up.
If you have the file cached on your HDD, how exactly are google going to monitor that ?
*sigh*

Re:solution in search of a problem (0)

Anonymous Coward | more than 6 years ago | (#23571275)

This all seems overly paranoid. If the idea is that browsers will have the libraries cached then Google won't be getting a hit on every page view like with analytics. They'll just get one hit per browser every few days, and the referring URL that goes with it.

Considering this is voluntary, I don't see why anyone has a problem with this. Their target audience most likely already uses Google Analytics anyway.

Re:solution in search of a problem (1)

djw (3187) | more than 6 years ago | (#23570473)

The idea, as far as I can tell, is to improve browser caching, not just distribution.

If a lot of sites that use prototype.js all refer to it by the same URL, chances are greater that a given client will already have cached the file before it hits your site. Therefore, you don't have to serve as much data and your users don't have to keep dozens of copies of the same file in their caches, and sites load marginally faster for everyone on the first hit only.

Plus Google gets even more tracking data with which to Not Be Evil. See, everybody wins.

Re:solution in search of a problem (1)

maxume (22995) | more than 6 years ago | (#23570491)

It isn't about bandwidth, it is about the apparent responsiveness of the page. Pulling the library from disk is almost always going to be faster than pulling it over the network. If it gets there faster, the browser has more time to chug its way through the bloat.

Re:solution in search of a problem (1)

paskie (539112) | more than 6 years ago | (#23570665)

And on the other hand, the extra DNS queries necessary to download the file from a third-party server _reduce_ the responsiveness, often severely. I too often wait for a page to finally show anything while my browser tries hard to resolve google-analytics.com, some obscure Polish servers or an ad server. (I would actually say that DNS latency is much underestimated and one of the worst contributing factors to web-browsing latency. Not that I would have that much trouble with it, but when it happens, it's damn noticeable, and it does not happen that rarely.)
For heavy users of many AJAX sites, this might be some improvement. But for casual users, this will in fact cause _additional_ delays.

Re:solution in search of a problem (1)

Tangent128 (1112197) | more than 6 years ago | (#23571593)

Don't browsers usually cache DNS data? Most casual users will probably already have Google in there.

Re:solution in search of a problem (1)

Jacques Chester (151652) | more than 6 years ago | (#23570609)

That very much depends on whose problem you're talking about.

If you're a web site worried about javascript library hosting, caching and such, this will help, a bit. Mostly to banish an annoyance.

If, on the other hand, you're a famous search engine who'd love to know more about who uses what javascripting libraries on which sites ... well, this sort of scheme is just your ticket.

Re:solution in search of a problem (5, Insightful)

maxume (22995) | more than 6 years ago | (#23570661)

In theory, cache hits wouldn't give Google any information at all. So when the API works the way it is supposed to, it doesn't reveal anything.

Someone could even put up a site called googlenoise.com or whatever, with the sole purpose of loading the useful versions of the library into the cache from the same place.

Re:solution in search of a problem (0)

Anonymous Coward | more than 6 years ago | (#23571179)

The libraries are loaded through a small loader API script, which also loads other Google APIs (Maps, for example). The loader API URL includes a site-specific Google API key, so this file won't be cached across sites. It will produce a trackable hit for every participating site. Should Google require more information, they can always add web beacons (AKA web bugs) to subsequent page impressions. Google recently changed the terms of service and now requires sites to inform users about the potential use of web beacons. Effectively this replaces one uncached script file with another, albeit a smaller one.

Re:solution in search of a problem (2, Informative)

maxume (22995) | more than 6 years ago | (#23571241)

They encourage use of the loader, but they aren't requiring it; there are direct URLs for accessing the libraries:

http://code.google.com/apis/ajaxlibs/documentation/index.html#AjaxLibraries [google.com]
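Concretely, the two forms being contrasted look roughly like this; the version numbers and the key value are placeholders:

    <!-- Loader form: the jsapi URL carries a per-site API key, so it isn't shared across sites -->
    <script type="text/javascript" src="http://www.google.com/jsapi?key=YOUR-API-KEY"></script>
    <script type="text/javascript">google.load("prototype", "1.6.0.2");</script>

    <!-- Direct form: the same URL for every site, so the browser cache can be shared -->
    <script type="text/javascript"
            src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>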

Re:solution in search of a problem (0)

Anonymous Coward | more than 6 years ago | (#23571455)

If caching is the idea, why would they encourage using an uncacheable file to load the libraries? The option to load the libraries directly only serves as a fig-leaf.

Re:solution in search of a problem (0, Flamebait)

Bobb Sledd (307434) | more than 6 years ago | (#23570683)

... and it's downloading text, not binary. I don't even think the user would notice anything appreciably different, especially if they have Vista and are already used to things being slow!

Re:smart move (1)

JavaStreet (1183815) | more than 6 years ago | (#23570887)

Compared to all the other crappy media that sites tend to have these days, centralizing distribution of a bunch of Javascript libraries makes almost no sense. I doubt it would even appreciably reduce your bandwidth costs.
Please! This is a great solution for reducing bandwidth costs, whether those are monetary or just the burden on your own servers. AOL already hosts AJAX libraries for this purpose. Although the AJAX libraries try to remain small, their size can be significant. Utilizing a fat-piped resource to host these libraries? Smart move.

Re:solution in search of a problem (1)

neuromancer23 (1122449) | more than 6 years ago | (#23570963)

The problem with current AJAX APIs is that they all suck ass. Dojo is several megabytes, but even paying for all of that bandwidth has got to be better than dealing with Google. Still, the real solution is to write your own AJAX API that:

1. Is not bloated
2. Does not arbitrarily modify the DOM, making it impossible to work with.
3. Actually works.

Then make it open source so that the human race can have an AJAX API that doesn't blow goats.

No good reason for this... (2, Insightful)

GigaHurtsMyRobot (1143329) | more than 6 years ago | (#23570349)

If you want to improve the speed of downloading, how about removing 70% of the code which just encodes/decodes from XML and start using simple and efficient delimiters? I was a fan of Xajax, but I had to re-write it from scratch... XML is too verbose when you control both endpoints.

It is not a problem to host an additional file, and this only gives Google more information than they need... absolutely no good reason for this.

Re:No good reason for this... (1)

CastrTroy (595695) | more than 6 years ago | (#23570451)

Well, from my experience making AJAX libraries, the stuff to encode to XML is pretty minimal. It's pretty easy and compact to write code which, when you call a function, sends an XML snippet to the server to run a specific function in a specific class, using a few parameters. The real lengthy part is getting the browser to do something with the XML you send back.

Re:No good reason for this... (1)

GigaHurtsMyRobot (1143329) | more than 6 years ago | (#23570685)

As you said, the lengthy part is handling the XML in Javascript... which shouldn't be happening!

To give you an idea... my re-written Aj library takes up less than 6k for the basics.

Re:No good reason for this... (3, Informative)

AKAImBatman (238306) | more than 6 years ago | (#23570517)

how about removing 70% of the code which just encodes/decodes from XML

Done [json.org] . What's next on your list?

(For those of you unfamiliar with JSON, it's basically a textual representation of JavaScript. e.g.

{
name: "Jenny",
phone: "867-5309"
}
If you trust the server, you can read that with a simple "var obj = eval('('+json+')');". If you don't trust the server, it's still easy to parse with very little code.)

Re:No good reason for this... (1)

Dekortage (697532) | more than 6 years ago | (#23570591)

And if you still want to use jQuery for other JavaScript interface joy, it can handle JSON natively [jquery.com] . (Other frameworks probably do too, I just happen to be a fan of jQuery.)

Re:No good reason for this... (1)

Dekortage (697532) | more than 6 years ago | (#23570731)

Actually this [jquery.com] is a better example.

Re:No good reason for this... (0)

Klaus_1250 (987230) | more than 6 years ago | (#23570611)

You can argue whether or not doing a lot of js evals will be any faster/more efficient than pulling in XML. I haven't checked how fast/efficient they are in the current generation of browsers, but I used to avoid them like the plague due to speed issues.

Re:No good reason for this... (0)

Anonymous Coward | more than 6 years ago | (#23570733)

I'd take a couple of evals over recursive XML-tree walking any day.

Besides, if you load your JSON call by adding a script tag to the page DOM, instead of loading it with an XMLHttpRequest call, there's no need to do the eval at all.
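A rough sketch of that script-tag approach; the /data.js endpoint and the callback name are made up for the example, and the server is assumed to wrap its JSON in a call to the named function:

    <script type="text/javascript">
        function handleData(obj) {                  // server responds with: handleData({name: "Jenny", phone: "867-5309"});
            alert(obj.name + ": " + obj.phone);
        }
        var s = document.createElement("script");
        s.src = "/data.js?callback=handleData";     // fetched via a script tag, so no XMLHttpRequest and no eval
        document.getElementsByTagName("head")[0].appendChild(s);
    </script>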

Don't use "eval" in Javascript for input (1)

Animats (122034) | more than 6 years ago | (#23571267)

This was a dumb feature in Javascript. In LISP, there's the "reader", which takes in a string and generates an S-expression, and there's "eval", which runs an S-expression through the interpreter. The "reader" is safe to run on hostile data, but "eval" is not. In Javascript, "eval" takes in a string and runs it as code. Not safe on hostile data.

JSON is a huge security hole if read with "eval". Better libraries try to wrap "eval" with protective code that looks for "bad stuff" in the input. Some such libraries actually work. Maybe. The process of checking "JSON" input for "bad stuff" is complicated enough that just parsing the input without "eval" can be simpler.
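A minimal sketch of the safer route (readJSON is just an illustrative name): prefer a real JSON parser where one exists -- json2.js from json.org supplies JSON.parse for browsers that lack it -- and only fall back to a guarded eval using the kind of RFC 4627 sanity check those libraries perform:

    function readJSON(text) {
        if (typeof JSON !== "undefined" && JSON.parse) {
            return JSON.parse(text);          // rejects anything that is not plain JSON
        }
        // Fallback: json2.js-style sanity check before eval.
        if (/^[\],:{}\s]*$/.test(
                text.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, "@")
                    .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]")
                    .replace(/(?:^|:|,)(?:\s*\[)+/g, ""))) {
            return eval("(" + text + ")");
        }
        throw new SyntaxError("readJSON: input is not valid JSON");
    }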

Well doh (4, Insightful)

Roadmaster (96317) | more than 6 years ago | (#23570351)

Will users adopt this, or is it easy enough to simply host an additional file?
Well duh, it's dead easy for me to just host another file, so easy in fact that web frameworks usually do it for you by default, but that's missing the point: the point is that for the end-user it would be better, faster and more efficient if I went to the trouble of using google's hosted version, instead of using my local copy. That, indeed, would be more work for me, but it benefits the end user.

Re:Well doh (1)

Yetihehe (971185) | more than 6 years ago | (#23570383)

but it benefits the end user.
And google too ;)

Re:Well doh (1)

Nursie (632944) | more than 6 years ago | (#23570427)

How?

A bit of code (unless I'm missing something) is going to be smaller than your average image. What's the gain?
Other than for google of course.

Re:Well doh (1)

gmor (769112) | more than 6 years ago | (#23571235)

The problem isn't the size of the script; it's the latency. As I understand it, when a browser encounters a script element, it loads and executes the code before rendering the rest of the page. Images, on the other hand, can load after the rest of the page is parsed. Depending on how close your servers are to your users and whether your users have Google ready in their DNS caches, this may be a win even if the scripts aren't already in the user's browser cache.

Privacy, privacy, privacy.... (0)

Anonymous Coward | more than 6 years ago | (#23570369)

"There is, of course, something of a privacy violation here..."

Yeah, it's Google, so let's just talk about privacy. Doesn't matter if it's relevant to the story or not. You see, it's Google.

How is privacy (0)

Anonymous Coward | more than 6 years ago | (#23570841)

not related to the story?

Couldn't be... (-1, Redundant)

Siquo (1259368) | more than 6 years ago | (#23570373)

Google and privacy violations? Nah, couldn't be!

The enormous size of these js files isn't exactly slowing the internet down, and you are relinquishing control of your entire website to Google. Also, those hosted js files would be prime targets for people who want to spread their malware, so I sure hope they're safe...

Re:Couldn't be... (4, Insightful)

Jellybob (597204) | more than 6 years ago | (#23570545)

Also, those hosted js files would be prime targets for people who want to spread their malware, so I sure hope they're safe...

Yes, you've gotta be careful with those incompetent sysadmins that Google is hiring.

After all, they're constantly getting the servers hacked.

Re:Couldn't be... (1)

Siquo (1259368) | more than 6 years ago | (#23570781)

Ah, darnit, I forgot that almighty Google is totally hackerproof by definition. My bad.

from... (1)

cosmocain (1060326) | more than 6 years ago | (#23570385)

...the blurb: There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.

Ha. News at 11.

Only a partial solution (4, Insightful)

CastrTroy (595695) | more than 6 years ago | (#23570391)

This is only a partial solution. The real solution is for sites using AJAX to get away from this habit of requiring hundreds of kilobytes of script just to visit the home page. Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple pages wouldn't have to download the entire library. Have each function in its own file, and then when an AJAX call is done, make it smart enough to figure out which functions need to be downloaded to run the resulting Javascript. The problem with Google hosting everything is that everybody has to use the versions that Google has posted, and that you can't do any custom modifications to the components. I think that what Google is doing would help. But the solution is far from optimal.
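A rough sketch of that load-on-demand idea, for the sake of argument; loadModule, the /js/modules/ path, and attachAutocomplete are all hypothetical names, not part of any of the libraries mentioned above:

    var loadedModules = {};
    function loadModule(name, onReady) {            // hypothetical helper
        if (loadedModules[name]) { onReady(); return; }
        var s = document.createElement("script");
        s.src = "/js/modules/" + name + ".js";      // one file per feature; path is illustrative
        s.onload = s.onreadystatechange = function () {
            if (!this.readyState || this.readyState == "loaded" || this.readyState == "complete") {
                s.onload = s.onreadystatechange = null;
                loadedModules[name] = true;
                onReady();
            }
        };
        document.getElementsByTagName("head")[0].appendChild(s);
    }

    // Usage: only fetch the autocomplete code when the user focuses the search box.
    document.getElementById("q").onfocus = function () {
        loadModule("autocomplete", function () { attachAutocomplete("q"); });
    };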

Re:Only a partial solution (1)

dmomo (256005) | more than 6 years ago | (#23570445)

> The problem with Google hosting everything is that everybody has to use the versions that Google has posted, and that you can't do any custom modifications to the components. I think that what Google is doing would help. But the solution is far from optimal.

That isn't too much of a problem. You can include the Google version first and then override any function or object by simply redeclaring it.

Re:Only a partial solution (1)

Ambient_Developer (825456) | more than 6 years ago | (#23570599)

The company I own is doing something similar to this ATM, only in a different and better way.
~mp

Re:Only a partial solution (0, Insightful)

Anonymous Coward | more than 6 years ago | (#23571079)

and your company is who? thanks for the useful info on how i can use your company's service...

Re:Only a partial solution (4, Insightful)

Bobb Sledd (307434) | more than 6 years ago | (#23570603)

Yikes...

Maybe it is possible to get TOO modular. Several problems with that:

1. With many little files comes many little requests. If the web server is not properly set up, then the overhead these individual requests causes really slows the transmission of the page. Usually, it's faster to have everything in one big file than to have the same number of kilobytes in many smaller files.

2. From a development point of view, I use several JS bits that require this or that library. I don't know why or what functions it needs. And I really don't care; I have my own stuff I want to worry about. I don't want to go digging through someone else's code (that already works) to figure out what functions they don't need.

3. If I do custom work where file size is a major factor or if I only use one function from the library, I guess then I'll just modify as I see fit and host on my own site.

I think what Google is doing is great, but I can't really use it for my sites (they're all served over SSL). So unless I want that little mixed-content warning to come up, I won't be using it.

Beware the overhead. (5, Insightful)

ClayJar (126217) | more than 6 years ago | (#23570677)

Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple pages wouldn't have to download the entire library. Have each function in its own file, and then when an AJAX call is done, make it smart enough to figure out which functions need to be downloaded to run the resulting Javascript.
Actually, the trend is in the opposite direction. By including everything in one file, you can greatly reduce the number of HTTP transactions. Eliminating the significant overhead there can improve "speed" tremendously.

Additionally, if you're using compression, it is likely that one large file will compress more effectively than a collection of smaller files. (You *are* using compression, aren't you?)

Re:Beware the overhead. (1)

CastrTroy (595695) | more than 6 years ago | (#23570923)

But isn't the whole point of AJAX to reduce server load by having users do lots of little requests instead of a few large requests? While one large file would compress better than many small files, would one large file compress better than 1/10 of the data actually being sent out because the user didn't need the other 9/10 of the data? You could also optimize the fetching of code by sending a single request for all the Javascript a specific action needs, which would contain a bigger section of code and be more easily compressible. My solution isn't finalized, and there could be some tweaking needed. However, there has to be a better solution than sending out your entire AJAX library to every user who visits your page.

Data vs. code. (1)

ClayJar (126217) | more than 6 years ago | (#23571137)

The whole point of AJAX is to reduce the amount of data you need to send to the user, not necessarily to reduce the amount of code. Yes, the browser will need to download the entire library, but only once. Caching takes it from there.

Compared to data, code is small. This is not a universal truth -- you can have a white pages site with a tremendously weighty interface that displays nothing but "Jenny 867-5309" -- but it is a valid assumption in the general case. With AJAX, data is effectively unbounded.

If you're using AJAX just to make your collection of 42 casual haiku look pretty, that's one thing. If you're using AJAX more along the lines of Google Maps (where there is almost unfathomably more data than code), that's a horse of a different color. I imagine most people are somewhere in between, but it seems readily obvious that it would be incorrect to think of the AJAX designs in the present using the assumptions of the now distant past.

Re:Only a partial solution (0)

Anonymous Coward | more than 6 years ago | (#23570687)

Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple pages wouldn't have to download the entire library. Have each function in its own file, and then when an AJAX call is done, make it smart enough to figure out which functions need to be downloaded to run the resulting Javascript.
Doing multiple http requests just to build the script for a page can be very cumbersome for people with less than stellar internet service. The trick is to find the perfect balance between minimal server requests and minimal download size.

I would say this is a good idea for sites that get plenty of 'one and done' traffic, but not all that necessary for sites that retain most of their traffic.

Re:Only a partial solution (1)

pushing-robot (1037830) | more than 6 years ago | (#23570693)

It's a good idea, but you're trading a little bandwidth for a lot of extra requests (and latency). And besides: a few hundred kilobytes isn't a big deal these days if users only have to download it once, which is what Google is doing. Custom per-site implementations defeat that.

Re:Only a partial solution (1)

vitaflo (20507) | more than 6 years ago | (#23570749)

Mootools sort of does this, but on the developer end. When you download Mootools you get to choose which components you want as part of your JS libs. Just need AJAX and not the fancy effects, CSS selectors, etc? Then you can just include the AJAX libs into Mootools and forget the rest. It's not load on demand, but at least it's better than having to download a 130k file when you're only using 10k of it.

Re:Only a partial solution (1, Informative)

Anonymous Coward | more than 6 years ago | (#23571067)

Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple pages wouldn't have to download the entire library.
Qooxdoo [qooxdoo.org] does this. While developing you download the entire framework, but when you are ready to release you run a makefile which creates a streamlined .js file with only the methods/classes your application UI needs; it also trims whitespace, renames variables to save space, etc.

Fun package to work with too.

Re:Only a partial solution (1)

divided421 (1174103) | more than 6 years ago | (#23571109)

Couldn't you design a modular AJAX system that would bring in functions as they are needed?
Yes, and most ajax programmers already do it. But... you do need the initial 'core' library to build upon. I primarily use jQuery - which is compressed down to ~23k. I think that is acceptable for a home page. From there, the scripts are either inline in the html via the <script> tag, or downloaded 'ajaxically' via the $.get() call. Google's solution would work fairly well, although I don't think enough sites use those frameworks consistently, in the same version, to really make a difference.

Re:Only a partial solution (3, Informative)

lemnar (642664) | more than 6 years ago | (#23571415)

AJAX systems are modular - at least some of them are, somewhat. Scriptaculous, for example, can be loaded with only certain functions.

"With Google hosting everything," you get to use exactly the version you want - google.load('jquery', '1.2.3') or google.load('jquery', '1.2'), which will get you the highest version '1'.2 available - currently 1.2.6. Furthermore, you can still load your own custom code or modifications after the fact.

For those concerned about privacy: yes they keep stats - they even do some of it client side - after libraries are loaded, a call is made to http://www.google.com/uds/stats [google.com] with a list of which libraries you loaded. However, the loader is also the same exact loader you would use if you were using other Google JavaScript APIs anyways. It started out as a way to load the Search API and the Maps API: google.load('maps', '2') and/or google.load('search', '1').

Google's claim to providing good distributed hosting of compressed and cacheable versions of the libraries aside, the loader does a few useful things in its own right. It deals with versioning, letting you decide the granularity of version you want to load, and letting them deal with updates. Also, it deals with setting up a callback function that actually works after the DOM is loaded in IE, Safari, Opera, and Firefox, and after the entire page is loaded for any other browsers. They also provide convenience functions. google_exportSymbol will let you write your code in a non-global scope, and then put the 'public' interfaces into the global scope.

Finally, you can inject your own libraries into their loader. After the jsapi script tag, include your own, set google.loader.googleApisBase to point to your own server, and call google.loader.rpl with a bit of JSON defining your libraries' names, locations, and versions. Subsequent calls to google.load('mylib', 'version') will work as expected.
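For reference, the loader usage being described looks roughly like this; the key parameter and the #status element are placeholders:

    <script type="text/javascript" src="http://www.google.com/jsapi?key=YOUR-API-KEY"></script>
    <script type="text/javascript">
        google.load("jquery", "1.2");               // '1.2' resolves to the newest hosted 1.2.x, as described above
        google.setOnLoadCallback(function () {
            jQuery("#status").text("jQuery loaded from Google");
        });
    </script>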

Nifty (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23570407)

Now if only this could be done with GWT. Rather than building on a base-library, GWT vomits a slew of files all with hashed names. Since no two compiles are the same, you end up with an ever growing set of JS and HTML files sitting in the component directory. This is particularly annoying as all these files interact poorly with version control systems. (Even one as advanced as, say, Mercurial.)

At the very least, a standard ANT plugin so that GWT could be built at build-time rather than dev-time would do wonders for the project.

nothing new here (4, Informative)

Yurka (468420) | more than 6 years ago | (#23570411)

The DTD files for the basic XML schemas have been hosted centrally at Netscape and w3.org since forever. No one cares or, indeed, notices (until they go down [slashdot.org] , that is).

Re:nothing new here (0)

Anonymous Coward | more than 6 years ago | (#23571023)

Yeah but they are supposed to be cached for very long times (far too long to get good browsing data from HttpReferer).

Re:nothing new here (1)

Myen (734499) | more than 6 years ago | (#23571059)

No, you weren't supposed to actually use those DTDs - they should have come with the app. The URL is just there to be a unique string, and the file exists as a service so you know where to copy it from, not so it gets downloaded every time your app runs.

A better analogy is AOL [aol.com] and Dojo.

Privacy from Google? (1)

IcyHando'Death (239387) | more than 6 years ago | (#23570419)

Surely this doesn't open the door to Google much wider than it already was. Don't they already know about every page you hit that serves up their ads?

Re:Privacy from Google? (2, Interesting)

Anonymous Coward | more than 6 years ago | (#23570555)

That's the idea. AdWords, these "hosted" JS libraries, Urchin/Google Analytics, Google Friend Connect -- Google clearly wants to be involved in every single web "page" that's ever served.

http://www.radaronline.com/from-the-magazine/2007/09/google_fiction_evil_dangerous_surveillance_control_1.php

Re:Privacy from Google? (3, Funny)

jason.sweet (1272826) | more than 6 years ago | (#23570787)

The Google Funding Bill is passed. The system goes on-line August 4th, 2009. Human decisions are removed from strategic search. Google begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. Google fights back.

Yabbut (2, Interesting)

FlyByPC (841016) | more than 6 years ago | (#23570441)

Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?

I like Google too -- and this is nice of them -- but I like the idea of a website being as self-sufficient as possible (not relying on other servers, which introduce extra single-points-of-failure into the process.)

At the risk of sounding like an old curmudgeon, whatever happened to good ol' HTML?

dependence on Google is but one problem (4, Interesting)

SuperBanana (662181) | more than 6 years ago | (#23570585)

Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?

Think broader. What happens when:

  • Google decides to wrap more than just the promised functionality into it? For example, maybe "display a button" turns into "display a button and report usage stats"?
  • Google gets hacked and malicious Javascript is included?

But, yes- you're right. This is a scary new dependency. For a company full of PhD geniuses supposedly Doing No Evil, nobody at Google seems to understand how dangerous they are to the health of the web. In fact, I'd suggest they do, and they don't care- because they seem hell-bent on making everything on the web touch/use/rely upon Google in some way. This is no exception.

A lot of folks don't even realize how Google is slowly weaning open-source projects into relying on them, too (with Google Summer of Code.)

I won't adopt (5, Insightful)

corporal_clegg (547755) | more than 6 years ago | (#23570461)

As a developer, privacy of my users is of paramount importance. I have grown increasingly concerned with Google's apparently incessant need to pry into my searches and my browsing habits. Where once I was a major Google supporter, I have trimmed my use of their service back from email and toolbars to simple searches and now even won't use their service at all if I am searching for anything that may be misconstrued at some point by guys in dark suits with plastic ID badges. The last thing I am going to do as a developer is force my users into a situation where they can feed the Google Logging Engine.

Re:I won't adopt (1)

3c5x9cfg (41606) | more than 6 years ago | (#23570681)

"Fail to sign up today and we'll throw in a lifetime membership of Main Core *absolutely free*!"

Speaking as a JQuery user... (2, Insightful)

MikeRT (947531) | more than 6 years ago | (#23570557)

If I were worried about bandwidth, why wouldn't I just use one of the packed down files? They're as small, if not smaller, than most of the images that will appear on a web page.

Re:Speaking as a JQuery user... (2, Informative)

thrillseeker (518224) | more than 6 years ago | (#23571373)

Because the hundred other pages the visitor went to that session are also demanding their own copy of the library be downloaded. It's not your bandwidth this saves (well, only trivially) - it's the end user's download, and parse, of the same code for each of the dozen sites he visits that use the same library. The libraries Google has initially chosen are extremely popular - i.e. there are good odds that you have a dozen copies in your browser cache right now - each download of which made your browsing experience that much slower.

You mean like YUI does? (2, Interesting)

Gopal.V (532678) | more than 6 years ago | (#23570559)

I didn't see any Slashdot article when Yahoo put up hosted YUI packages [yahoo.com] served off their CDN.

I guess it's because google is hosting non-google libraries?

Probably more for other google offerings (0)

Anonymous Coward | more than 6 years ago | (#23570579)

I see this as more of a value add for people using other outward-facing Google products -- namely Google Apps and Google Pages. Why have a brazillion copies of these things on their servers (using up their customers' storage limits) when they can offer it up once?

It also ensures that all web-sites using these projects can keep up to date automatically, so any security hole or bug gets fixed immediately for sites that take advantage of this.

As well, I can see this as a benefit for users of noscript and the like. If you've already white listed "code.google.com" (or wherever it's being hosted) on one site's implementation, any other site using it will automatically be cool too.

Besides, you can already do this with their Google Code repository. Go look at Dean Edwards' projects [edwards.name] . All of them are hosted on Google Code, and he specifically recommends pointing to the Google server from your site. This seems to be just an extension for other open source projects. Sure, this could be handled by the individual projects themselves, on their own servers. But why have your site hammered by the infinite visitors of the sites that use your product when Google is willing to absorb the hammering for you?

Yahoo does this already... (2, Interesting)

samuel4242 (630369) | more than 6 years ago | (#23570581)

With their own YUI libraries. See here [yahoo.com] Anyone have any experience with this? I'm a bit wary of trusting Yahoo, although I guess it's easy enough to swap it out.

Will be used, but not by you (or me) (1)

Gandalf (787) | more than 6 years ago | (#23570653)

What I would expect is that this will be useful for many people and that there is no drawback to using (yet another) Google service, especially not if AdSense or Analytics already lets Google track your visitors.

If there are reasons for not to use it (privacy, control), you probably already know this of yourself because you have carefully picked where to host your site (possibly in-house) and/or partnered with a CDN (even if just S3) to optimise content delivery. Or you have an intranet application where there is hardly any advantage for this.

Basically, you won't use this if you believe you know what you're doing, which you (yes, you) and I both do.

fuck that, i'd rather use sourceforge (0)

Anonymous Coward | more than 6 years ago | (#23570689)

or someone else not trying to be not evil

One not-so-good thing about this... (1)

CALI-BANG (14756) | more than 6 years ago | (#23570699)

When a sysadmin blocks Google, your site won't be rendered properly.

On our corporate network the sysadmin blocks Yahoo and other Yahoo properties, so sites that use yahooapis.com are broken as well.

I know you're not supposed to use the company's internet connection for this -- but who else here sometimes visits other sites at work? Like Slashdot.

Dojo libs are on AOL's edge network already (1)

ggpauly (263626) | more than 6 years ago | (#23570713)

eg http://o.aolcdn.com/dojo/1.1.1/dojo/dojo.xd.js [aolcdn.com]

This is really fast - I think they cache on distributed servers. Much faster than from my own server.

Anybody have more info on this? Is Google going to do something similar? Is AOL harvesting data on my clients' users?

SSL warnings (0)

Anonymous Coward | more than 6 years ago | (#23570721)

SSL might not like referencing remote libraries...

Host it yourself, add meta-tag (3, Interesting)

Anonymous Coward | more than 6 years ago | (#23570723)

A far better solution would be to add attributes to the script tag that the browser could check to see whether it already has the file. For security reasons you would have to define them explicitly to use the feature, so if you don't define them, there can never be a mix-up.

Eg:

script type="javascript" src="prototype.js" origin="http://www.prototype.com/version/1.6/" md5="..............."

When another site wants to use the same lib, it can reuse the origin, and the browser will not download the file again from the new site. It's crucial to use the md5 (or another hash), which the browser must calculate the first time it downloads the file. Otherwise it would be easy to create a bogus file and get it run on another site.

Of course this approach is only as secure as the hash.

The web needs content addressable links! (1)

Omnifarious (11933) | more than 6 years ago | (#23570761)

The web really needs some sort of link to a SHA-256 hash or something. If that kind of link were allowed ubiquitously it could solve the Slashdot effect and also make caching work really well for pictures, Ajax libraries and a whole number of other things that don't change that often.

Re:The web needs content addressable links! (1)

Omnifarious (11933) | more than 6 years ago | (#23570847)

I wish I could go back and edit my post...

It would also solve stupid things like Netscape ceasing to host the DTDs for RSS.

Probably a pretty cool idea (1)

mlwmohawk (801821) | more than 6 years ago | (#23570765)

I know it is not obvious, but sites that are sensitive to bandwidth issues may find this a cost saving measure.
Google, of course, gets even more information about everyone.

Win-win, except for us privacy people. I guess we have to trust "do no evil," huh?

Data Replication = BAD (1)

y86 (111726) | more than 6 years ago | (#23570825)

It's really foolish to replicate these libraries all over the place.

No one says you have to use google's service. It's just an idea. They eliminate library management problems for you and you give them a little data.

So what? Do you think that Comcast and other companies that are throttling BitTorrent and hijacking DNS queries aren't mining and selling all your UNENCRYPTED, CAN-BE-READ-WITH-NOTEPAD-AND-TCPDUMP HTTP GET requests?

what a piece of nonsense (1)

Tom (822) | more than 6 years ago | (#23570971)

Yeah, so it downloads some Ajax library twice, or even ten times, or a hundred. So what? The ads on your typical webpage are ten times as much in size and bandwidth.

Thanks, but I prefer that my site works even if some other site I have nothing to do with is unreachable today. Granted, Google being unreachable is unlikely, but think about offline copies, internal applications, and all the other perfectly normal things that this approach suddenly turns into special cases.

Re:what a piece of nonsense (1)

thrillseeker (518224) | more than 6 years ago | (#23571443)

But that's the point - those ads are already mostly centrally hosted - i.e. they were already using a few common sources - now the code libraries have a common source.

All... (1)

EddyPearson (901263) | more than 6 years ago | (#23571051)

...your script are belong to us

Umm, no (3, Interesting)

holophrastic (221104) | more than 6 years ago | (#23571103)

First, I block all google-related content, period. This type of thing would render many sites non-operational.

Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense. Even as a locally cached file, on a broadband connection, downloading the extra 10K is typically faster than opening and reading the locally cached file!

But still, hosting a part of your corporate web-site with google simply breaches most of your confidentiality and non-disclosure agreements that you have with your clients and suppliers. It's that simple. Find the line that reads "shall not in any way disclose Confidential Information to any third party at any time, including consultants and contractors, copy and/or merge the Confidential Information/business relationship with any other technology, software or materials, except contractors with a specific need to know . . ."

Simply put, if your Confidential client conversations go over gmail, you're in breach. If google tracks/monitors/sells/organizes/eases your business with your clients or suppliers, you're in breach -- i.e. it's illegal, and your own clients/suppliers can easily sue you for giving google their trade secrets.

Obviously it's easier to out-source everything and do nothing. But there's a reason that google and other such companies offer these services for free -- it's free as in beer, at the definite cost of every other free; and it's often illegal for businesses.

Re:Umm, no (1)

_xeno_ (155264) | more than 6 years ago | (#23571569)

Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense. Even as a locally cached file, on a broadband connection, downloading the extra 10K is typically faster than opening and reading the locally cached file!

That's not the reason that I generally use external JavaScript files. The reason is code reuse, pure and simple. Generally speaking it's far easier to just link to the file (especially for static HTML pages) than it is to try and inline it. That way when you fix a bug that affects Internet Explorer 6.0.5201 on Windows XP SP1.5 or whatever, you don't have to copy it to all your static files, as the code is in a single location.

Sure, you could use server-side includes, but then you need to make sure that your JavaScript code doesn't include "</script>" anywhere. The requirements are even more strict for (true) XHTML.

It's also code separation. It separates the JavaScript code from the display, which generally makes it far easier to work with, especially with syntax-highlighting editors that get retarded when they see JavaScript in HTML.

But anyway:

The external file requires yet another hit to the server, and everything involved therein.

Your web client sucks then. Get one that understands persistent HTTP connections [w3.org] . If you actually look at a network sniffer while any modern browser accesses a webpage you should see them all use the same socket.

The other option is that the web server sucks or is configured not to use persistent HTTP connections. In any case, this shouldn't be a real problem.

Extend the standard Javascript Library! (0)

Anonymous Coward | more than 6 years ago | (#23571159)

Currently it's either use a popular open-source library, which adds some extra bandwidth overhead, or reinvent the wheel yourself.

Isn't using javascript from multiple domains dumb? (1)

Battalion (537403) | more than 6 years ago | (#23571223)

Isn't pulling javascript from different domains a fundamentally dumb idea? I disable javascript for everything, then enable it on a per-site basis if the javascript provides something useful to me. Pulling javascript from multiple domains makes it a pain in the backside having to find where all the javascript is coming from and enable javascript execution from each domain.

Cross-Site Scripting by Definition (3, Insightful)

Rich (9681) | more than 6 years ago | (#23571227)

Well, one effect of this would be to allow google to execute scripts in the security context of any site using their copy of the code. The same issue occurs for urchin.js etc. If your site needs to comply with regulations like PCI DSS or similar then you shouldn't be doing this as it means google has access to your cookies, can change your content etc. etc.

For many common sites that aren't processing sensitive information however, sharing this code is probably a very good idea. Even better would be if google provided a signed version of the code so that you could see if it has been changed.