
Google Accelerator: Be Careful Where You Browse

timothy posted more than 9 years ago | from the press-three-for-random-deletions dept.


Eagle5596 writes "It seems there can be a serious problem with Google's Web Accelerator, and I'm not talking about the privacy concerns. Evidently some people have been finding that, due to the prefetching of pages, their accounts and data are being deleted."


89 comments


Just goes to show.... (4, Funny)

Anonymous Coward | more than 9 years ago | (#12463755)

Google should have beta tested it first.

Re:Just goes to show.... (1)

mintshows (716731) | more than 9 years ago | (#12464215)

This IS beta! When you use a beta version of anything, don't be surprised if something "breaks".

Re:Just goes to show.... (0)

Anonymous Coward | more than 9 years ago | (#12464344)

Over your head at the speed of sound...

Re:Just goes to show.... (1)

SpaceLifeForm (228190) | more than 9 years ago | (#12465861)

AC stumps high numbered /. login.

News at 11:00.

Re:Just goes to show.... (1)

passthecrackpipe (598773) | more than 9 years ago | (#12466897)

You are an idiot. "Beta" does not mean "We can do what the hell we want and delete all your data". Beta means "We have tested this application to the best of our in-house abilities, and now need wider input from a wider audience." Implied in the concept of Beta testing is the assumption that no catastrophic bugs will hit you. These have been burned out in Alpha. An organisation like Google *especially* should have caught such a simple issue.

I am not getting into the "this is cool" or "this is evil" argument - I won't use it as I don't have a need for it, but my (already low) estimation of Google as a software development and publishing house has just sunk a little deeper. They do a relatively cool search engine, but their published software sucks.

it's all about intelligence (2, Interesting)

cryptoz (878581) | more than 9 years ago | (#12463764)

Perhaps we should start keeping our own data secure, rather than relying on other people to do it for us? I mean, if you're paranoid about people using this program and gaining access to your "sensitive" data, then it's your own damn fault. Your data shouldn't be so wide open on internet web pages anyhow. Bah.

Re:it's all about intelligence (1)

cryptoz (878581) | more than 9 years ago | (#12463778)

And your "important" data shouldn't be on the web where it could be deleted anyhow.

Oops (0)

Anonymous Coward | more than 9 years ago | (#12463772)

Forgot who we were talking about, sorry. :)

zerg (-1, Offtopic)

Lord Omlette (124579) | more than 9 years ago | (#12463781)

In the linked page, someone in the comments posted PHP code that's very clearly wrong... The dangers of cutting and pasting!

Another POV... (2, Insightful)

Gothic_Walrus (692125) | more than 9 years ago | (#12463785)

Something Awful had an article on this subject [somethingawful.com] a few days ago.

I'm not sure if I agree with the "Google is the new Microsoft" sentiments, but thinking before you install new software is always a good idea.

Re:Another POV... (3, Insightful)

Jerf (17166) | more than 9 years ago | (#12464303)

Actually, that's yet another different problem, one where you get the wrong page from the cache, specifically somebody else's personalized page. It is completely unrelated, in the sense that one could fix either problem independently. (It is possible that they have the same root cause, but I doubt it.)

This brings the current list of reasons not to use the Accelerator up to three, counting the obvious privacy issues.

Re:Another POV... (1)

Gothic_Walrus (692125) | more than 9 years ago | (#12465638)

In my defense, this article does use the phrase "in addition to."

I just read it incorrectly. Not an uncommon event on my part... >_

Re:Another POV... (1)

mike_sucks (55259) | more than 9 years ago | (#12465698)

"Actually, that's yet another different problem"

Not necessarily. If you need to click on a link to "logout", then that link may potentially be cached, and so you do not get logged out when you click on it. If the webapp is using a poor session implementation, it may lead to the same problem.

Websites using session-based authentication really should use a form and do a POST to do logout.

Of course, if web sites used http-auth (as they should), this wouldn't be a problem at all.
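
For illustration, a POST-based logout might look something like this; the action URL and field name are placeholders rather than anything from a real site:

<!-- Hypothetical logout form. Because the action is a POST, a prefetcher
     or cache that only follows ordinary GET links can never trigger it. -->
<form action="/logout" method="post">
  <input type="hidden" name="session_token" value="abc123">
  <input type="submit" value="Log out">
</form>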

Re:Another POV... (2, Informative)

Jerf (17166) | more than 9 years ago | (#12465801)

No matter what links you click on, you can't see another user's page, unless the web application is just horrifically badly designed, well beyond merely not quite conforming to a strict interpretation of certain HTTP standards that actually say "should" instead of "must". It is reasonable to assume many web apps use GET in ways going against the spec's recommendation, but surely if merely clicking a link could log you in as arbitrary other users, it would have been noticed. Not to mention only other users of Google's caching are showing up, indicating the bug isn't coming from random link pseudo-clicking.

If you're getting pages from other users, it is a distinct problem from aggressive precaching.

Re:Another POV... (1)

mike_sucks (55259) | more than 9 years ago | (#12466299)

Err, yes, you're probably right. Don't know what I was thinking... must not post to /. before drinking coffee.

Re:Another POV... (2)

orkysoft (93727) | more than 9 years ago | (#12464327)

I think the author is jumping the gun. I believe that this Google Web Accelerator was born from the "Hey, why not use Google's cache all the time when browsing sites on frequently slow servers?" idea, and that these issues are merely unintentional side effects that still need to be fixed (which will be pretty complicated if you ask me).

Still, Google will have the opportunity to store virtually the entire browsing history of Google Web Accelerator users, which people should keep in mind when installing the program.

Bug in the pages, not Google (5, Informative)

keesh (202812) | more than 9 years ago | (#12463811)

According to the HTTP spec, GET requests must not be used to change content. POST actions must be used if you're deleting / changing something. And google doesn't prefetch POST, does it?
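
As a rough sketch of the difference (the URLs are made up), the first version is what a prefetcher can trip over, while the second only fires on an explicit submit:

<!-- Risky: a destructive action behind a plain GET link. Anything that follows
     links automatically (a prefetcher, a spider) will delete the item. -->
<a href="/items/delete/42">Delete this item</a>

<!-- Safer: the same action behind a POST form, which prefetchers leave alone. -->
<form action="/items/42/delete" method="post">
  <input type="submit" value="Delete this item">
</form>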

Re:Bug in the pages, not Google (1)

Karma Farmer (595141) | more than 9 years ago | (#12463917)

Unfortunately, I'm not aware of anything in the HTML spec that allows the page designer to attach a POST action to anything other than a submit button. It's not particularly difficult to add a POST action to a JavaScript event handler, but I'm sure that presents problems of its own.

input type=image (2, Informative)

slashkitty (21637) | more than 9 years ago | (#12463953)

It's quite easy and common, and it's in the HTML spec. Too many people just create a GET link instead of a POST form because it's a little easier.
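
Presumably this is the sort of thing meant: an image can be a form's submit control, so a graphical "link" still goes to the server as a POST. The icon path and action URL here are placeholders.

<!-- Clicking the image submits the enclosing form as a POST. -->
<form action="/messages/17/delete" method="post">
  <input type="image" src="/icons/delete.png" alt="Delete message">
</form>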

Re:input type=image (1)

Karma Farmer (595141) | more than 9 years ago | (#12466573)

I'm sorry, but what is quite easy and common?

The only two "common" ways that I'm aware of to submit a form as a POST action are to use a submit button or to fire the submit the form in a scripted event.

If you know of a way to submit a POST action from a text link without using javascript, please share it with the rest of us.

Re:input type=image (0)

Anonymous Coward | more than 9 years ago | (#12469077)

I'm sorry, but what is quite easy and common?

<input type="image"> - he put it in the subject.

Re:Bug in the pages, not Google (0)

Anonymous Coward | more than 9 years ago | (#12464166)

Thank you, Captain Obvious.

Now, since GET requests are being used to change content - and Google Web Accelerator also exposes private content that's sent in response to GET requests - what are you going to do about it?

Rewrite millions and millions of web apps?
Or tell Google to knock it off?

Re:Bug in the pages, not Google (1, Informative)

Anonymous Coward | more than 9 years ago | (#12464511)

Um, tell them to follow the spec? If not, what are specs for then?

Re:Bug in the pages, not Google (1)

anthony_dipierro (543308) | more than 9 years ago | (#12464777)

Millions and millions? I doubt it's that many. And if there are, that means there are millions of developers who are going to start working on rewriting them really soon.

Re:Bug in the pages, not Google (2, Interesting)

toastyman (23954) | more than 9 years ago | (#12464450)

Unfortunately, it's not that simple in the real world though.

If you want to POST something, the only way to do that is to use a form. Forms cause a few problems.

IE and Opera render forms slightly "creatively". Wherever a form ends, the browser inserts vertical space in many situations, some of which are unavoidable. This usually makes the page render very strangely. If I want a list of links, and some of them have side-effects and some don't - my choices are to make some of them forms and some regular <a> tags or make all of them forms. If I make some of them forms, the spacing on the page is inconsistent/wrong. If I make all of them forms, I lose a lot of functionality in the pages that don't have side-effects.

If you want a regular text link to submit a form, you have to use Javascript. This creates a dependency on Javascript for even the most simple of actions, and makes the links unbookmarkable and impossible to copy and paste into another window/browser.

If you want to avoid javascript you have to use images or rather ugly UI buttons for every link. Images aren't always appropriate (download times, accessibility issues, etc) and there's no way to put a TINY submit button on the page for little-used functions if you're using the standard submit buttons the browser provides.

Other issues with form POSTing include the inability to use the back button after POSTing. Even if we can deal with rePOSTing of the same data on the server side and handle it correctly and gracefully, there's no way for webmasters to tell the browser not to pop up with the "Are you sure you want to resend the POST action again?" window.

So, if we choose to follow the HTTP guidelines, we break UI and style guidelines even worse. If we want to use POST we have to give up having the page render correctly in major browsers, break the back button, break the ability to bookmark state information (unless you encode some variables in the URL in GET fashion AND others in a POST), and make every link either an image (bad for accessibility and download speeds) or use some Javascript magic (even worse for bookmarkability and accessibility).

I would love something like:

<a href="/link.script" method="post" variables="a=1;b=2">

or even just:

<a href="/link.script?a=1&b=2" method="post"> (if the only goal is to use POST instead of GET, forgetting about the other differences)

Things like this aren't clear "bad webmasters not following the spec" issues. When the browsers that all the clients are using don't give you the tools to use POST in any meaningful way, you're kinda screwed no matter what you do.

Re:Bug in the pages, not Google (1)

keesh (202812) | more than 9 years ago | (#12464505)

Again, you're misusing the technology. HTML is a text markup language, not a page layout language. If you want pixel perfection, use PDFs or a similar format which was designed for that kind of thing.

Re:Bug in the pages, not Google (5, Informative)

Anonymous Coward | more than 9 years ago | (#12464584)

If you want to POST something, the only way to do that is to use a form. Forms cause a few problems.

With all due respect, even though forms aren't perfect, they've been around for over a decade, and if you can't deal with them by now, don't bother calling yourself a web developer.

Wherever a form ends, the browser inserts vertical space in many situations, some of which are unavoidable.

You're kidding, right? If you don't want a bottom margin, say so with CSS. This is basic FAQ newbie stuff [htmlhelp.org].

If you want a regular text link to submit a form, you have to use Javascript.

You can use CSS to make the button look like a text link.

This creates a dependancy on Javascript

No it doesn't. You can easily use Javascript without depending on it. That's the way it's supposed to be used. This too is basic newbie stuff.

Other issues with form POSTing include the inability to use the back button after POSTing.

Huh? Works fine here.

there's no way for webmasters to tell the browser not to pop up with the "Are you sure you want to resend the POST action again?" window.

That's not a bug, that's a feature! POST is not idempotent. Resubmitting a POST is something that absolutely needs to be warned about, because it's a fundamentally different action to reloading a page with GET.

GET followed by refresh == just GET it again

POST followed by refresh == send the server some more data

So, if we choose to follow the HTTP guidelines, we break UI and style guidelines even worse.

There is a reason submit buttons look different to links. It's because they do different things. There are semantics associated with clicking a button that aren't associated with clicking a link. If style guidelines instruct you to make submit buttons look like links, then the style guidelines are probably broken.

So, if we choose to follow the HTTP guidelines, we break UI and style guidelines even worse. If we want to use POST we have to give up having the page render correctly in major browsers, break the back button, break the ability to bookmark state information (unless you encode some variables in the URL in GET fashion AND others in a POST), and make every link either an image (bad for accessibility and download speeds) or use some Javascript magic (even worse for bookmarkability and accessibility).

Wow. Get with the times. No really. I'd expect this kind of attitude from a newbie developer in the mid 90s.
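
For what it's worth, the CSS-as-link suggestion above might look roughly like this; the class name and styling are just one way of doing it, and older browsers are fussier than others about styling form controls:

<!-- A real submit button restyled to read like an inline text link.
     No JavaScript required; the form still works with scripting disabled. -->
<style>
  .linkbutton { background: none; border: none; padding: 0;
                color: blue; text-decoration: underline;
                cursor: pointer; font: inherit; }
</style>
<form action="/items/42/delete" method="post" style="display: inline; margin: 0;">
  <button type="submit" class="linkbutton">Delete this item</button>
</form>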

Re:Bug in the pages, not Google (1)

toastyman (23954) | more than 9 years ago | (#12464772)

Wherever a form ends, the browser inserts vertical space in many situations, some of which are unavoidable.

You're kidding, right? If you don't want a bottom margin, say so with CSS. This is basic FAQ newbie stuff.


Yes, and IE ignores it in some situations, and in some places will size your table even though it had added the space.

there's no way for webmasters to tell the browser not to pop up with the "Are you sure you want to resend the POST action again?" window.

That's not a bug, that's a feature! POST is not idempotent. Resubmitting a POST is something that absolutely needs to be warned about, because it's a fundamentally different action to reloading a page with GET.


Right, but if I want people to be able to hit the back button and deal with the idempotency and side effect issues on the server side, the user is still annoyed with the popups. If I use GETs, I deal with those problems on the server side and bother the user about the problems as necessary. With POST I can't turn that behavior off even when I know it's incorrect.

Even UI issues aside, I don't think it's a correct design that the page containing a link should have to know if what it's linking to has side effects or not. I would have been much happier if it were standard behavior for the client to add a header saying "This is a prefetch/cache-fill/whatever, treat it as such" and the server replying with "This page can't be prefetched, access it only when you want it to do the action associated with it."

GET/<a href> links with actions taken when clicked are EVERYWHERE. I know Slashdot isn't a shining example of HTML compliance either, but look on the left bar of every page here. That "Logout" link has a side effect of going to it, and it's a GET. At the most basic level, even tracking "how many people have seen this page" is an effect of loading it, that is affected by undesired prefetching. Keeping track of which pages are most recently accessed to handle server side caching of dynamic content is an effect of loading a page, even when no data on the page is changed. At what point do you draw the line between when a request changes something or not?

As much as I support what Google is trying to do with this product, it's causing problems that I don't think they fully considered before launching it.

I've got direct involvement with one website whose performance was recently suffering. A whole lot of investigation revealed that it was specifically because Google Accelerator was being used by a handful of clients. Why? One frequently used page had a tiny list of links on it to generate reports. "Daily"/"Weekly"/"Monthly"/"Yearly"/"All Records (Note: will take several minutes to generate)". GA was following all of those links to prefetch them.

Yes, there are workarounds to this. But any client that fetches pages without being asked to, when nothing in the HTML indicated that prefetching was desired... that's broken. There isn't even a way to ask GA not to prefetch certain links other than hiding them in javascript or forms.

Re:Bug in the pages, not Google (1)

anthony_dipierro (543308) | more than 9 years ago | (#12464898)

I know Slashdot isn't a shining example of HTML compliance either

Nuff said.

That "Logout" link has a side effect of going to it, and it's a GET.

I'll say it anyway. It shouldn't.

At the most basic level, even tracking "how many people have seen this page" is an effect of loading it, that is affected by undesired prefetching. Keeping track of which pages are most recently accessed to handle server side caching of dynamic content is an effect of loading a page, even when no data on the page is changed. At what point do you draw the line between when a request changes something or not?

I'd draw the line somewhere between incrementing a counter and deleting content. Counters and server-side caching hints are known to be approximate measures. Sure, you're breaking the spec, but you're doing so knowing full well that it might mess things up, and that's OK.

All that said, I don't like what Google is doing with the precaching either. I'd consider it a violation of the Robot Exclusion Standard to visit links without checking robots.txt first. This is arguable, as precaching isn't exactly the same as other web spiders, but I'd say it falls under the standard.

Re:Bug in the pages, not Google (1)

jovlinger (55075) | more than 9 years ago | (#12472231)


One page frequently used had a tiny list of links on it to generate reports. "Daily"/"Weekly"/"Monthly"/"Yearly"/"All Records(Note: will take several minutes to generate)". GA was following all of those links to prefetch them.

[snip]

There isn't even a way to ask GA not to prefetch certain links other than hiding them in javascript or forms.

huh.

you mean that google doesn't obey robots.txt?

That surprises me.

Re:Bug in the pages, not Google (0)

Anonymous Coward | more than 9 years ago | (#12472708)

Yes, and IE ignores it in some situations, and in some places will size your table even though it had added the space.

Testcase? I've not encountered this bug, and it sounds a lot like invalid code elsewhere screwing things up.

Right, but if I want people to be able to hit the back button and deal with the idempotency and side effect issues on the server side, the user is still annoyed with the popups.

Again, WTF are you on about with the back button? The back button works perfectly normally for me. Give an example, or at least describe what your problem is.

I don't think it's a correct design that the page containing a link should have to know if what it's linking to has side effects or not.

If you code things correctly, the page doesn't have to know if what it links to has side effects. That's because anything it could link to doesn't have side-effects.

Linking to a page and sending data to a web application are two entirely different things. If you are constructing a web application, and you are confused about which you need to do with any given instance, then you simply aren't qualified to do the job.

I would have been much happier if it were standard behavior for the client to add a header saying "This is a prefetch/cache-fill/whatever, treat it as such" and the server replying with "This page can't be prefetched, access it only when you want it to do the action associated with it."

You can do this. RTFA.

That "Logout" link has a side effect of going to it, and it's a GET.

What significance does that have? You already conceded that Slashdot's technical merits are dubious at best.

At the most basic level, even tracking "how many people have seen this page" is an effect of loading it, that is affected by undesired prefetching.

And that is precisely the reason why the RFC says SHOULD and not MUST. SHOULD means that if you deviate, then you'd damn well better understand why and make sure it won't cause interoperability problems. Logging doesn't cause any interoperability problems. Using it to change state in a web application does cause interoperability problems.

Re:Bug in the pages, not Google (1)

Ed Avis (5917) | more than 9 years ago | (#12475417)

There's kind of a convention that setting cookies on the client side doesn't count as a side effect. You can follow a normal text link and receive a cookie, so it doesn't hurt too much to follow a 'Logout' link for a cookie to be deleted. If something happens to prefetch that page, it just won't apply the cookie changes. If 'Logout' ends up setting some state on the server, that's less appropriate and really should be a POST.

I know there isn't an exact line between what counts as a side effect and what doesn't, but most people have a general idea and I think one should try to follow the spirit of it. Personally I like to make clicky forms where intermediate changes are done with text links (eg toggling something on and off) but the final 'commit' happens by pressing a button and making a POST.
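
A rough sketch of that split (all URLs hypothetical): intermediate, low-stakes toggles stay as ordinary GET links, and only the final commit goes through a POST button.

<!-- Harmless view toggles as plain GET links... -->
<a href="/report?details=on">Show details</a>
<a href="/report?details=off">Hide details</a>

<!-- ...while the actual commit is an explicit POST. -->
<form action="/report/commit" method="post">
  <input type="submit" value="Commit changes">
</form>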

Mod, oh MOD PARENT UP! (1)

marat (180984) | more than 9 years ago | (#12464899)

Other issues with form POSTing include the inability to use the back button after POSTing.

Huh? Works fine here.

Just one thing for those who didn't get this one: you should always return a Location: header when replying to a POST.

Re:Bug in the pages, not Google (0)

Anonymous Coward | more than 9 years ago | (#12464743)

If you want a regular text link to submit a form, you have to use Javascript. This creates a dependency on Javascript for even the most simple of actions, and makes the links unbookmarkable and impossible to copy and paste into another window/browser.

Why would you want to bookmark something like deleting a page or an account? I mean, I can see a bookmarklet that you explicitly set up, but that isn't what this is about.

If you want to avoid javascript you have to use images or rather ugly UI buttons for every link.

So if your users don't want images or rather ugly UI buttons, they need to enable javascript. Simple enough.

Other issues with form POSTing include the inability to use the back button after POSTing. Even if we can deal with rePOSTing of the same data on the server side and handle it correctly and gracefully, there's no way for webmasters to tell the browser not to pop up with the "Are you sure you want to resend the POST action again?" window.

That's not a bug, that's a feature. It's one of the reasons you should use POST, not GET, if you're performing an action, certainly for a significant action like deleting something.

Re:Bug in the pages, not Google (1)

mike_sucks (55259) | more than 9 years ago | (#12465674)

"I would love something like:

<a href="/link.script" method="post" variables="a=1;b=2">"

I guess it is fortunate for us that you'll never see it - no browser would implement such a thing. It is contrary to the spirit of HTML in general and links specifically.

See the WhatWG discussion [dreamhost.com] of this sort of thing for more reasons why it sucks.

Re:Bug in the pages, not Google (0)

Anonymous Coward | more than 9 years ago | (#12464529)

The spec does not prohibit it. It's strongly suggested that GET not be used for queries with side-effects, but not required.

Re:Bug in the pages, not Google (1, Interesting)

Anonymous Coward | more than 9 years ago | (#12464699)

Quoting from section 9.1.1 Safe Methods of the HTTP 1.1 RFC (2616):

Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

There is a big difference between SHOULD NOT and MUST NOT. The fact of the matter is the web development community has used GET to perform "non-safe" actions. That is the reality of the current world. It was irresponsible of Google to simply look at a spec and interpret what "should be the way of the world", when in reality the world isn't that way.

Re:Bug in the pages, not Google (0)

Anonymous Coward | more than 9 years ago | (#12464745)

Amazing. The spec explicitly goes out of its way to say: These methods ought to be considered "safe". And you use that quote to try and argue that Google should not have considered those methods to be safe.

Re:Bug in the pages, not Google (2, Informative)

sepluv (641107) | more than 9 years ago | (#12464933)

And just after that it goes on to say that, as it is expected that GET requests are sent without the explicit permission of a user, the server side (web developers) accepts all responsibility for any breach of the previous "SHOULD NOT" and has no right to blame the user side (users, Google) if it decides to make GETs do more than just retrieval of a document.

FFS, how can these stupid web designers be threatening to sue Google when the HTTP spec itself (the protocol of the WWW, which they should all have read) says that it is their frigging fault and they should blame themselves if they use GET requests in that way.

Re:Bug in the pages, not Google (1)

wdr1 (31310) | more than 9 years ago | (#12465224)

Uh, doesn't that throw all of REST out the window?

-Bill

Re:Bug in the pages, not Google (0)

Anonymous Coward | more than 9 years ago | (#12466021)

oh yeah. great, 10mill pages out there use links to delete pages, record last visited times, etc, but since 'technically' it's wrong, let's just ignore the problem.

There. problem fixed.
Good job.

Re:Bug in the pages, not Google (1)

GryMor (88799) | more than 9 years ago | (#12466796)

AFAIK, GETs should be idempotent, but that just means hitting them n times, for all positive n, produces the same results; it DOES NOT specify that hitting it 0 times has the same result as hitting it once (that would be identity in addition to idempotence). Logging out is a classic example of something with a side effect that is also idempotent. From any state, hitting a logout link takes you to the logged out state, and hitting it again takes you to the same state; therefore, logout is idempotent.

Re:Bug in the pages, not Google (1)

johnnliu (454880) | more than 9 years ago | (#12467283)


Most websites use some sort of link-checking program on a scheduler to make sure they didn't accidentally create broken links within their own website.

Such link-check programs also follow all the links in your webpage.

Bug in the webpage. Nothing to do with Google.

Re:Bug in the pages, not Google (1)

davegaramond (632107) | more than 9 years ago | (#12471390)

GWA also doesn't prefetch GET with query strings. The problem is that apparently Basecamp/Backpack uses short/pretty URLs that don't contain query strings, e.g. http://host/account/delete/121 [host] instead of http://host/account?action=delete&id=121 [host]. It's not prohibited to use GET for delete/add/whatever links.

Well (1)

KinkifyTheNation (823618) | more than 9 years ago | (#12463834)

If it can't determine whether or not a dynamic link (like "delete this") is harmful, perhaps this could be the end of Google Accelerator?

Re:Well (1, Informative)

Anonymous Coward | more than 9 years ago | (#12463874)

If it can't determine whether or not a dynamic link (like "delete this") is harmful

The thing is, it can determine whether or not a dynamic link is harmful. GET is supposed to always be safe. The HTTP specification says so. Stupid web developers used GET in an unsafe way and are paying the penalty because Google thought something that's defined as being always safe is, well, safe.

Stupid web developers (2, Informative)

Anonymous Coward | more than 9 years ago | (#12463851)

The root of the problem is stupid web developers ignoring RFC 2616 and using the GET method to change state.

Now all the people who cut corners thinking it didn't matter have been caught with their pants down; they look silly because the web applications they wrote are losing data, so they have gotten angry and pointed the finger at Google.

Sorry kids, but this is what happens when you don't follow the specs. They are there to make all our lives easier, you ignored them, you fucked up.

Yeah, maybe Google could have guessed the fact that you've fucked up and hobbled their software to hide your bugs. But you've got no right to complain that they didn't mollycoddle your stupid, broken web applications when it's you that broke them in the first place trying to cut corners.

Re:Stupid web developers (3, Insightful)

zenyu (248067) | more than 9 years ago | (#12463902)

The root of the problem is stupid web developers ignoring RFC 2616 and using the GET method to change state.

Seriously, using POSTs was something we all learned in 1994... Hopefully, this Google accelerator thingy will be popular enough to rid us of these creaky old broken sites.

Re:Stupid web developers (0, Troll)

mkavanagh2 (776662) | more than 9 years ago | (#12464293)

I hope you and your parent post die from some form of catastrophic genital haemorrhage.

Re:Stupid web developers (3, Insightful)

0x461FAB0BD7D2 (812236) | more than 9 years ago | (#12464155)

A lot of "stupid" web developers use GET so that those states can be bookmarked or sent to others so they can do something with it.

Unless you have another idea, using GET for states is here to stay.

Re:Stupid web developers (0)

Anonymous Coward | more than 9 years ago | (#12464231)

A lot of "stupid" web developers use GET so that those states can be bookmarkedor sent to others so they can do something with it.

You either don't know what the word "state" means in this context, or you meant to say something else, because that sentence doesn't make sense.

If you can't rephrase in a way that expresses your point clearly, perhaps an example would help.

Re:Stupid web developers (2, Interesting)

pk2200 (324678) | more than 9 years ago | (#12466022)


You can use POST without sacrificing bookmarkability. After your code processes the POSTed request, redirect to a GET-style URL that provides a view to the same content.

This technique is quite common.

Yikes! (1)

Guspaz (556486) | more than 9 years ago | (#12463863)

Good to know, I've disabled prefetching in GWA as a result.

Increased malware installation (0)

OppView (880517) | more than 9 years ago | (#12464151)

I would have thought mass prefetching of pages is also going to make the lives of iframe scumware/malware installers easier. :(

You don't even have to visit their pages to get "infected"

Re:Increased malware installation (0)

Anonymous Coward | more than 9 years ago | (#12464336)

You don't even have to visit their pages to get "infected"

Yes you do. GWA is what does the prefetching, not the browser, so the browser doesn't see the pages until you actually go to them. If GWA had the ability to execute and install spyware, then it might be a different matter, but it doesn't. That's the browser's job.

What the cunting fuck. (-1, Flamebait)

mkavanagh2 (776662) | more than 9 years ago | (#12464258)

Hey, shitfuck: it's obviously Google's fault. Web application designers are dumb for using GET for stuff like this, but it was not a real problem for users until the stupid fucks at Google decided to release something awful like GWA without thinking for a second of the responsibility that should come with the high profile of Google. It was people at Google that were too arrogant to think about what they were doing, and it is the fault of people at Google.

Re:What the cunting fuck. (3, Funny)

mkavanagh2 (776662) | more than 9 years ago | (#12464266)

Oh, and obligatory "lol slashdot" comment: Think about what most people would be saying if Internet Explorer suddenly did this because Microsoft thought it would be a good idea. You'd be all over them like rats over a rotting horse cock.

Re:What the cunting fuck. (1)

sepluv (641107) | more than 9 years ago | (#12464948)

I for one would strongly congratulate MS on finding the MSIE source code and getting round to actually updating their stone-age browser and adding a feature.

I would also strongly congratulate them on complying with WWW standards for a change--and indeed I have done in the past on those few occasions when MS has chosen the path of standards.

Re:What the cunting fuck. (1)

mkavanagh2 (776662) | more than 9 years ago | (#12467172)

do yuo like to fuck your own ass with your tongue :@

lol fag :@

Re:What the cunting fuck. (1)

sepluv (641107) | more than 9 years ago | (#12464481)

This is just a way for WWW designers to not admit responsibility, and the argument you and many others are putting forward (esp. when some say "Sue Google") is dangerously attempting to extend responsibility to everyone for one person's stupidity, it's-not-my-fault-I-killed-my-classmates-with-a-BFG,-it-was-Quake,-my-parents,-the-education-system style.

The rules of society (inc. Internet) are there for a reason. If you break the laws/rules, and I do something that wouldn't normally hurt you (if you weren't doing something unlawful), it isn't my fault.

Analogy: If I'm driving a train and you lie in the middle of the railway track, you can't blame me because you should have had the common sense to understand that there might have been a reason why people made a law against going on railway tracks, and, whatever you may think, there is actually nothing l33t about breaking rules that you don't understand.

To all you l33t script-kiddie-style WWW designers and programmers out there, your actions have consequences...news@11.

Re:What the cunting fuck. (2, Insightful)

mkavanagh2 (776662) | more than 9 years ago | (#12464515)

It is still Google's fault. Any half-competent software engineer would have thought about this, and the people at Google did not. It doesn't matter if the websites affected were non-compliant with the RFC, because they were the existing state of affairs. Google stuck this crap out there with no thought for the existing state of affairs, so it is their fault. It's the practical view of things, and the practical view is the only one that anyone should take.

Re:What the cunting fuck. (2, Informative)

sepluv (641107) | more than 9 years ago | (#12464578)

I wouldn't be quite so harsh. Isn't the point of early beta tests like this to find out how their UA works out there in the Real World? Apparently they've already issued a fix to solve the problem (or go some way to...I don't know the details).

Re:What the cunting fuck. (1)

mkavanagh2 (776662) | more than 9 years ago | (#12464614)

I wouldn't be so harsh if this was some guy releasing stuff on a random .org domain that three people visit in a year. This is Google we are talking about. They should be well aware that even public betas will be used by people as if they are the greatest software ever created, oh hallelujah, we thank you for this software we are about to receive, our lord and master Google, forever and ever amen.

They screwed up and I hope everyone remembers this for a while. They had better not screw up like this again, and they had better issue a prominent apology.

Re:What the cunting fuck. (1)

sepluv (641107) | more than 9 years ago | (#12464615)

To extend my analogy, the way I see this is that your so-called practical view would say that trains don't pass that point on the railway track 99.9999% of the time and it is much quicker going across the track than all the way round, so obviously there's no reason at all why I shouldn't cross the track.

The architects of HTTP (as people who know how the {WWW/railway} works) clearly envisaged that people should not {cross the track/design their sites with GET requests that change stuff} because a {train/web accelerator} might come along.

Re:What the cunting fuck. (1)

mkavanagh2 (776662) | more than 9 years ago | (#12464648)

False analogy.

A correct analogy: A train track goes unused for many years. Despite warnings, it becomes a popular playing area for children, due to the surrounding trees, the open space, and the interesting terrain. Everyone is aware that hundreds of children play on the disused track every day.

One day, some cunt runs a high speed service down the track and kills 50 kids. Whose fault is it?

Re:What the cunting fuck. (1)

sepluv (641107) | more than 9 years ago | (#12464663)

OK. I think this is still fundamentally the same analogy; you've just altered the scale of it (quantitatively), so it would still mainly be the kids' fault--not the railway company's--and the law would probably agree.

Re:What the cunting fuck. (0, Flamebait)

mkavanagh2 (776662) | more than 9 years ago | (#12464693)

You're really smart and I love you, please have my babies and I want you to lead the country because you are a smart guy and fair and just, did I mention you are smart :)

Re:What the cunting fuck. (1)

sepluv (641107) | more than 9 years ago | (#12464802)

Sarcasm is really a subtle art.

Anyway, I've had major sleep deprivation (mainly due to the UK general election--I was an election agent), hence the atrocious syntax.

Here's what the laws/standards of the Internet say (verbatim) in the section on safety, section number 9.1.1 (irony?), which all those whiney web designers really should have actually bothered to read (my emphasis):

9.1.1 Safe Methods

Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.

In other words, that last bit says that, if web designers do choose to break the "SHOULD NOT" and allow GET requests to result in some (preferably minor--definitely NOT DELETION) action, it is important for those web designers to remember that they have no right to blame the user (including the user agent--that's what that RFC means by user) for any side-effects of those GET requests--they should instead hold themselves responsible.

It goes on...

9.1.2 Idempotent Methods

Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.

However, it is possible that a sequence of several requests is non-idempotent, even if all of the methods executed in that sequence are idempotent. (A sequence is idempotent if a single execution of the entire sequence always yields a result that is not changed by a reexecution of all, or part, of that sequence.) For example, a sequence is non-idempotent if its result depends on a value that is later modified in the same sequence.

A sequence that never has side effects is idempotent, by definition (provided that no concurrent operations are being executed on the same set of resources).

...which further backs up my point of view: these web `application' systems are not idempotent.

Re:What the cunting fuck. (1)

mkavanagh2 (776662) | more than 9 years ago | (#12464824)

You're not going anywhere. I have not suggested that the people who wrote these sites were not breaking the specs. I have suggested, for it is so, that it is Google's fault when Google's software interoperates badly with such sites, because Google have a responsibility to be aware.

Incidentally, you're a retard and I am burning karma so fuck you. :@

Re:What the cunting fuck. (1)

sepluv (641107) | more than 9 years ago | (#12464968)

I'm saying the HTTP spec itself (the web's protocol, which all web designers worth their salt have read a few times) clearly states that responsibility always lies with the web designer for any result of a GET request other than mere retrieval, and they cannot blame anyone else but themselves.

Re:What the cunting fuck. (1)

mkavanagh2 (776662) | more than 9 years ago | (#12465010)

And I'm saying that Google are still culpable for this. They have a responsibility to be aware about the environment they are releasing their software into, and they did not think for a second before putting out software that fucks up severely with the pre-existing state of affairs.

It doesn't matter two stone shits that the existing state of affairs is in breach of the specs; if Google released a web browser that wrote pseudo-random 1s and 0s to the entire hard drive several times over whenever it encountered invalid HTML (oh no! the specs!), it would be Google's fault. And it's Google's fault now.

In a sane world perhaps... (1)

ConceptJunkie (24823) | more than 9 years ago | (#12464729)

so it would still mainly be the kids' fault--not the railway company's--and the law would probably agree.

In a sane world, yes. In places like the U.S. the rail line would be quickly writing lots and lots of settlement checks.

My Dad worked for a power company that had to settle over a case of a kid breaking into an electrical substation and getting injured, where "breaking in" means doing something along the lines of climbing a 15-foot fence.

They settled, because they were afraid they would lose the lawsuit. Compared to that, the train situation above would be a slam dunk for the families of the victims.

Re:In a sane world perhaps... (1)

sepluv (641107) | more than 9 years ago | (#12464859)

In this case, the "law" (the HTTP standard) states that if web designers choose to allow normal GET requests to result in an action other than mere retrieval (i.e.: cross the railway track), they assume full responsibilty for the consequences and cannot blame the user's end (the train driver and his company), and, therefore, by extension, Google (i.e.: the train manufacturer).

This is the reason why I think the designers should assume responsibility. Because the standard says so, and anyone who calls themselves a WWW designer should have read the HTTP spec (it *is* the WWW's protocol FFS).

Re:In a sane world perhaps... (1)

MrAndrews (456547) | more than 9 years ago | (#12465127)

Ultimately, it's like this: the user brings the train to the spot where the children play, but doesn't proceed. However, the new Google computer onboard the train turns the engine back on and plows on forward. Why? 'cause that's a track, and this is a train.

The kids shouldn't be playing there, but that doesn't mean the automatic train idea is smart.

I think the only real shock in all this is that no one at Google was aware GET/POST was as abused as it is.

Re:In a sane world perhaps... (1)

ConceptJunkie (24823) | more than 9 years ago | (#12470080)

I agree with you. They're no dummies at Google. This had to have happened before the public release.

It's just like Stronghold 2, which I just bought. Now a game isn't quite the scale of a tool like this, but within a couple of hours, I'd found a good half dozen serious UI bugs and a number of significant UI design problems. The irony is that the game mechanics seem sound... these are probably fairly easy problems to fix. It amazes me how many apps are shipped with glaring errors that are evident within minutes (or even less) of installing the app.

It only took me a couple days to find a flaw in Outlook 2003 that I felt was so serious I immediately switched to Thunderbird. Once the mail store database gets bigger than about a gig and a half, Outlook starts losing data. This was confirmed by a couple of folks who know a lot more about Microsoft than me. I can't understand how this can happen.

The whole software quality thing, like UI design, seems to be getting progressively worse industry-wide.

Re:What the cunting fuck. (0)

Anonymous Coward | more than 9 years ago | (#12464757)

Are you aware you appear to be a twelve year old with a really crappy attitude? Grow up.

Re:What the cunting fuck. (1)

mkavanagh2 (776662) | more than 9 years ago | (#12464835)

You didn't do this rite, get out :@

Slashdot Editors: Be Careful What You Post (1)

HunterZ (20035) | more than 9 years ago | (#12464526)

Sigh...YADA (Yet Another Duplicate Article)

This was already posted on /. in the last day or two.

Re:Slashdot Editors: Be Careful What You Post (1)

wdr1 (31310) | more than 9 years ago | (#12465238)

For once, it's not timothy's fault -- his Google Accelerator must have pre-fetched the moderator's "approve" button!

-Bill

google got hacked (1)

mkavanagh2 (776662) | more than 9 years ago | (#12464567)

hey guys did i do this rite

POP goes the Google (0)

Anonymous Coward | more than 9 years ago | (#12464600)

Looks like all of Google.com went off-line about an hour ago. The search engine is back; news and Gmail are still MIA. I'm not getting asked to accept cookies for sites I haven't visited yet, so pre-fetch may be gone.

There is always hope

Maybe now people will fix the security holes... (1)

anthony_dipierro (543308) | more than 9 years ago | (#12464831)

If you can delete content by following a link, then this is a major security hole. Any website could easily embed such a link into Java, JavaScript, even just an image link. Someone could send you an email with an image referencing the link. This is one place you should be following the spec. If an action has an important side-effect, use POST.
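
A sketch of the kind of attack being described, with an entirely made-up URL: if deletion happens on a plain GET, any web page or HTML email the victim views can fire the request for them, cookies and all.

<!-- Hostile page or HTML email. The victim's browser fetches the "image" with a
     GET request, sending their session cookies along; nothing visible renders,
     but the delete has already happened. -->
<img src="http://example.com/items/delete/42" alt="" width="1" height="1">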

Isn't googlebot just as dangerous? (1, Interesting)

Anonymous Coward | more than 9 years ago | (#12464935)

Ignoring the fact that you now have accounts that are logged in, couldn't you just as easily make a public site that allows anonymous visitors to edit content -- let's say, a wiki -- with "delete" links sprinkled on it?

What would you say to a webmaster that sticks "delete" links everywhere on their pages, and suddenly finds that Googlebot, in its daily rounds, wipes out their entire wiki?

Somebody isn't following the standards (1, Insightful)

Anonymous Coward | more than 9 years ago | (#12464947)

Link pre-fetching, as performed by Mozilla/Firefox [mozilla.org], is an opt-in thing. Webmasters should add the "rel='prefetch'" attribute to their tags to enable software to intelligently prefetch links.

It's safe, it's an emerging standard, and webmasters maintain control. Why isn't Google following the standard?
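
The Mozilla convention being referred to looks roughly like this (the URL is a placeholder); a prefetcher honouring it only fetches what the page explicitly nominates:

<!-- Opt-in prefetch hint, per the Mozilla link-prefetching convention. -->
<link rel="prefetch" href="/articles/next.html">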

Appreciate the irony (2, Informative)

Presto_slashdot (573879) | more than 9 years ago | (#12465723)

Nearly every highly-rated comment points the finger at "stupid" web designers rather than at Google, because GWA simply reveals that putting side effects on links is dangerous.

I hope you appreciate the irony of posting such comments on a site whose Logout link is implemented via a GET (see upper left of your screen.) That's the point: every site implements Logout as a link, and Google should have recognized this.

PS while I'm writing I might as well point out my previous GWA comment [slashdot.org] from a few days before this whole controversy. I was kinda hoping to shed some light on this exact problem. No one noticed, so I went and told 37signals what was going on ;)

Re:Appreciate the irony (1)

animeshpathak (873597) | more than 9 years ago | (#12465914)

Excellent Point. I wish I had some mod points to mod you up!

Anyways, did anyone notice that another problem the prefetch creates is bandwidth costs for poor websites? Does GWA follow robots.txt [I guess not, since then a lot of sites would be off bounds]?

My 2 cents.

Re:Appreciate the irony (1)

Mad Merlin (837387) | more than 9 years ago | (#12467065)

I don't think we've ever considered Slashdot to be a good example of web design practices...

You know what's funny... (1)

gbulmash (688770) | more than 9 years ago | (#12466139)

People on dial-up are going to use web accelerators. Concerns about privacy and the other nightmares accelerators cause (such as making graphics look like shit) are generally (though not exclusively) limited to people willing to pay $10-20 more a month for broadband (Netscape dial-up $9.95, AOL Dial-up $19.95, avg. DSL $29.95).

All this stuff we bitch and moan about here probably won't make a dent in the adoption of Google's accelerator and they're just going to run roughshod over webmasters whose sites don't comply. If they pick up X million users, you will code your site to work with their accelerator or face the consequences.

- Greg

maximum capacity reached? (1)

pulsorock (882829) | more than 9 years ago | (#12485586)

I went to http://webaccelerator.google.com/ [google.com] and I saw this message:
"Thank you for your interest in Google Web Accelerator. We have currently reached our maximum capacity of users and are actively working to increase the number of users we can support."

Maybe this has something to do with all these security concerns?