Wayback Machine Safe, Settlement Disappointing

Jibbanx writes "Healthcare Advocates and the Internet Archive have finally resolved their differences, reaching an undisclosed out-of-court settlement. The suit stemmed from HA's anger over the Wayback Machine showing pages archived from their site even after they added a robots.txt file to their webserver. While the settlement is good for the Internet Archive, it's also disappointing because it would have tested HA's claims in court. As the article notes, you can't really un-ring the bell of publishing something online, which is exactly what HA wanted to do. Obeying robots.txt files is voluntary, after all, and if the company didn't want the information online, they shouldn't have put it there in the first place."
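For readers who have never looked at one, a robots.txt file is just a plain-text list of path rules served from the site root; the paths below are made-up placeholders, and `ia_archiver` is the name commonly reported for the Internet Archive's crawler:

```
# Served at http://example.com/robots.txt (hypothetical example)
User-agent: ia_archiver
Disallow: /

User-agent: *
Disallow: /private/
```

The first record asks the Internet Archive's crawler to skip the whole site; the second asks every other robot to skip only the /private/ directory. Nothing in the protocol enforces either request.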
This discussion has been archived. No new comments can be posted.

  • Simple post (Score:3, Informative)

    by Kagura ( 843695 ) on Thursday August 31, 2006 @06:12PM (#16019777)
    • shouldn't be copyrightable - there is nowhere more "public domain" than the Internet. Same with radio/TV - anyone who makes use of the public airwaves should sacrifice any claim to copyright for that privilege. If someone wants to control their works through copyright, they should use controlled, private distribution.

      I'll no doubt have lawyers (and lawyer wannabes) protesting - but that only follows the literal and common sense meaning of "public domain," instead of the legal rationalization which has bee
      • by Khuffie ( 818093 )
        Erm, if I post something on my website (which I bought the domain for and paid for hosting), it is not a public space, since I paid for it. Stuff on www.whitehouse.gov, on the other hand, would be, since tax payer money paid for it.
        • Re: (Score:3, Insightful)

          by Lactoso ( 853587 )
          And just what does that check to your hosting company pay for aside from the physical location and maintenance of the webserver? Propagation of your website's IP address to DNS and bandwidth. And what do you need bandwidth for if not to share your web pages with the internet at large...
          • by phulegart ( 997083 ) on Thursday August 31, 2006 @08:37PM (#16020699)
            so if my content is behind a protected "members area" then it is still public domain and should be freely available? If I am a photographer, and my site clearly states that all images are copyright of a certain date and that use of them without my permission is forbidden, that means nothing? If someone uses images of me without my permission, that they got from a website or protected members area, how is it that I can get them removed by complaining? If they are public domain, then it should be my tough luck, right?

            If I post your credit card and bank information on a forum site, does that mean it is now public domain and you have no protection?

            If I post on a forum site that I am selling stolen credit card info and bank info, my post should not be touched, because it is public domain and it should be freely available?

            • Re: (Score:3, Interesting)

              by iminplaya ( 723125 )
              If I post your credit card and bank information on a forum site, does that mean it is now public domain and you have no protection?

              If anything bad comes from it, it only means that the banks employ weak security. That information by itself should mean nothing. Complain to the financial institutions, not the person who posts it. Make it the bank's problem and it will go away. Don't use their services until they make it secure without making it unduly inconvenient for the customer. The silly passwords and 20
              • Re: (Score:2, Insightful)

                by phulegart ( 997083 )
                 What you are saying is that the person who puts the information on the internet is the one who decides if it is public domain, as opposed to the person to whom the information belongs.

                You know the current standard the US follows for copyright of printed works is LIFE+70 years? That means that once the author copyrights their work, the copyright is good for 70 years after they die. Only after the copyright expires does the work become public domain.
                http://onlinebooks.library.upenn.e [upenn.edu]
      • Re: (Score:2, Insightful)

        by phulegart ( 997083 )
        here's a little story... it deals with archiving and the like.

        My friend's hosting service got hacked. we caught it right away, before a site had been put into place, but the individuals attempted to put up the site http://paypal-protect.org./ [paypal-protect.org.] We shut them down quick. They went on to hack another hoster, and currently have their little phishing site up and running. I suggest you go to the site, and without using ANY real information, login with a bogus email and password, and check it out. If you take a
        • by 1u3hr ( 530656 )
          However... EACH OF THOSE POSTS is still there in the google cache. Go ahead and see. Why is this important? Because all you need to see, if you are in the market to buy stolen Identities and credit cards, is the contact information.

          I don't see what your point is. Surely if these guys are engaged in criminal activity as you suggest, and have contact information, they should be investigated and arrested. The FBI (etc) should take over the contacts and shut them down, or use them to entrap other thieves. It

      • by Kelson ( 129150 ) *

        If someone wants to control their works through copyright, they should use controlled, private distribution.

        But isn't the purpose of copyright to extend legal protection beyond "controlled, private distribution"?

        After all, photocopiers, VCRs, audio tape recorders, CD/DVD writers -- heck, the printing press -- mean that distribution is no longer controlled or private, unless you restrict access to people who can use them. (Or you try to make it technically difficult via DRM, but that's only a temporary

        • by msauve ( 701917 )
          After all, photocopiers, VCRs, audio tape recorders, CD/DVD writers -- heck, the printing press -- mean that distribution is no longer controlled or private,

          Those are examples of private distribution. By "controlled private distribution," I do not mean avoiding distribution to the public through regular sales channels, where there exists a definite relationship between buyer and seller. When you buy a CD, that is a private transaction between you and the seller. It is controlled (you get the CD after you'
      • anyone who makes use of the public airwaves should sacrifice any claim to copyright for that priviledge. If someone wants to control their works through copyright, they should use controlled, private distribution.

        I think that is too restrictive. You should automatically give up your copyright if you show you work to the public by any means.
  • I want.... (Score:5, Funny)

    by Whiney Mac Fanboy ( 963289 ) * <whineymacfanboy@gmail.com> on Thursday August 31, 2006 @06:12PM (#16019783) Homepage Journal
    Obeying robots.txt files is voluntary, after all, and if the company didn't want the information online, they shouldn't have put it there in the first place."

    I want a search engine that only indexes items excluded in the robots.txt file :-)
    • Re: (Score:3, Interesting)

      by hackstraw ( 262471 ) *
      I want a search engine that only indexes items excluded in the robots.txt file :-)

      What's interesting is that I've heard of robots that do that exclusively. It may have been here on Slashdot, but I've heard of people putting stuff in their exclude list in robots.txt and some robots _ONLY_ searched those files.

    • by Anonymous Coward on Thursday August 31, 2006 @06:42PM (#16019999)
      Obeying robots.txt is "voluntary" in the same sense that obeying RFCs is voluntary. In other words, it isn't. You can technically ignore any and all standards, but there will be sanctions. In the case of robots.txt, these sanctions can very well be a court ruling against you, because robots.txt is an established standard for regulation of the interaction between automated clients and webservers. As such it is an effective declaration of the rights that a server operator is willing to give to automated clients in contrast to human clients. This is especially important with regard to services which mirror webpages. Doing so without the (assumed) consent of the author is a straightforward copyright violation, and if the author explicitly denies robot access, then the service operator knowingly redistributes the work against the author's will.

      Even if you don't fear the legal system, disregarding robots.txt can quickly get you in trouble. There are junk-scripts which feed bots endlessly, and there is automatic blocklisting of misbehaving bots. If people program their bots to ignore robots.txt, these and possibly more proactive self-defense mechanisms will become the norm. Is that the net you want? Maybe obeying robots.txt is the better alternative, don't you think?
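What "obeying robots.txt" amounts to in practice can be sketched with Python's standard-library parser; the rules, bot name, and URLs here are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules a server operator might publish. parse() accepts
# the file's lines directly, so no network access is needed here.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# A well-behaved bot asks before every fetch and honors the answer.
print(rp.can_fetch("MyBot/1.0", "http://example.com/private/report.html"))  # False
print(rp.can_fetch("MyBot/1.0", "http://example.com/index.html"))           # True
```

Note that `can_fetch` only reports what the server operator asked for; nothing stops a client from skipping the check entirely, which is exactly the "voluntary" point being argued here.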
      • Re: (Score:3, Interesting)

        by pimpimpim ( 811140 )
        Yeah, and in a world with full cooperation, you wouldn't have to lock your door because no one would enter your house, as that would mean serious actions against them. Dream on, mr AC! robots.txt is a flaky form of security, and everyone knows it. If I wanted to find out something nasty/interesting about a certain company, I'd look at the robots.txt files to see what I could find.

        Furthermore, there are perfectly good ways to lock content away from the outside in a more rigorous way, pa

        • by Anonymous Coward on Thursday August 31, 2006 @07:15PM (#16020201)
          An attitude like yours is exactly why people go to court over these things. If you don't even adhere to the most basic rules, then it's easier and less costly to have you pay my lawyers and a fine instead of trying to stop robots from reading information that human users are supposed to see without difficulty. The lack of common courtesy on the net is disconcerting. The server tells you in no uncertain terms that you are not welcome, but you keep requesting "forbidden" pages. Consider an analogous situation in real life: You are walking in the park and someone asks you for a dollar. You decline, but the beggar keeps asking. You're saying that accepting your first denial as binding is "voluntary" and the beggar can keep bugging you as long as he likes. If that happened to me twice, I'd have the asshole arrested, and that's exactly what you're going to see online if people don't behave, especially when their behaviour leads to copyright violations which would have been avoided if they had followed the robot exclusion standard.
          • It's more like: bum asks you for a dollar. You give him one. Two weeks later you decide you don't want to give handouts any more, so you write on your forehead "no soliciting". Next you go to court and claim that writing "no soliciting" on your forehead means you not only won't give more handouts, but the bum who you PREVIOUSLY gave a dollar to, now has to return it.

            See: that company DID NOT HAVE a robots.txt directive active when the Wayback machine archived it. They put the robots directive up two we
      • Re: (Score:3, Informative)

        by grumbel ( 592662 )

        Obeying robots.txt is "voluntary" in the same sense that obeying RFCs is voluntary. In other words, it isn't.

        How about we have a look at what the RFC drafts (it's not even an official standard) say about robots.txt:

        "Web site administrators must realise this method is voluntary, and is not sufficient to guarantee some robots will not visit restricted parts of the URL space."

        "It is not an official standard backed by a standards body, or owned by any commercial organisation. It is not enforced by anybody, and th

    • http://www.whitehouse.gov/robots.txt [whitehouse.gov]

      think about it-- anything on this list IS NOT on google..

      why???

  • Autolawyers (Score:4, Insightful)

    by Doc Ruby ( 173196 ) on Thursday August 31, 2006 @06:16PM (#16019811) Homepage Journal
    What's really disappointing is that it's apparently cheaper to pay lawyers to settle a case than it is to defend your right to ignore optional guidelines like robots.txt in US courts.

    If Congress were serious about keeping the US economy "safe and effective", it would reform the "lawyers' job security" laws. Instead it will surely make them even worse, and make the lawyer tax on technology mandatory.
    • Re: (Score:3, Insightful)

      Unless lawyers are paid by the state, like doctors in Canada, they cannot be considered officers of the court whose job it is to represent your rights before said court. Once they accept payment from a client, either actual or pending, they become no more than hired sales consultants peddling their clients' version of the truth.
      • I don't agree with any of the statements in your post.

        "Unless lawyers are paid by the state, like doctors in Canada, they cannot be considered officers of the court who's job it is to represent your rights before said court. Once they accept payment from a client, either actual or pending, they become no more [than] hired sales consultants [peddling] their [clients'] version of the truth."


        Second, there is no distinction between being an advocate for a client's version of the truth, and being an advo
        • Wow, worst formatting errors I've ever let through. Obviously, my text shouldn't be italic, and the paragraphs should be introduced as "first" and "second". Ugh.
      • Re:Autolawyers (Score:5, Insightful)

        by Doc Ruby ( 173196 ) on Thursday August 31, 2006 @06:38PM (#16019977) Homepage Journal
        There's a good case to be made for lawyers being paid by the state, as they certainly are working in those offices on that business. But even more than doctors they cannot be allowed to make their own interests coincide with that of the state. Lawyers often work for people against the state, which must be recognized by the state as a primary responsibility of lawyers. Doctors rarely find their interests conflicting with that of the state (except when they're not getting paid on time ;), so that structure isn't as dangerous.

        There's probably a way to ensure that lawyers represent people's rights better than they do now. Regular random audits of billings and practices. More "contempt of court" punishment. More suspended/revoked licenses, especially for repeated frivolous representation. More "malpractice" awards. There ought to be more competition, with more standardized reviews contextualizing all those "scores", published for consumers.

        Lawyers even more than doctors hide behind consumer ignorance and blind "respect". Exposing their performance as part of the shopping process would make them more competitive, and better adhere to the required "ethics" that usually are assumed to come with the tie.
        • It's worth noting that your suggestions about increased contempt and malpractice damages (against lawyers) are possible today, without any new legislation: you would probably be surprised how much existing leeway there is for judges to make such damages. For a variety of reasons, they rarely do so. I like the idea of random audits, but it'd require a very sophisticated system of deployment to prevent harassment (for example, how do you weight a lawyer's likelihood of being audited? Should a more prolific
          • Re: (Score:3, Interesting)

            by Doc Ruby ( 173196 )
            Lawyers should be required to instruct (off the clock) clients how to complain, and judges should ask clients if they've been informed (checking against a form the client signs). Failure should be like violating Miranda rights.

            Yes, a more prolific lawyer should be more likely to be audited. Probably every nth case (by all lawyers) should have an audit initiated secretly to follow the proceedings, reporting malpractice as it's observed, so corrections aren't applied only after the case is derailed. That does
          • by AuMatar ( 183847 )
            The main reason they rarely do- most judges used to be lawyers.
        From what I understand there is a group of lawyers who are assigned to you if you are charged with a crime and can't afford a lawyer yourself. Despite what you may see on TV these lawyers do a decent job (not always) of disagreeing with the State. And if they do a bad job you can often get a Judge (who seem to be really good at disagreeing with the State) to rule that your lawyer was incompetent. Sure if we went to a system of all public lawyers we would need some tougher checks and balances, but so far it
          • It's probably worth trying an experiment expanding public defenders and prosecutors to encompass a greater percentage of criminal cases. Maybe even require private lawyers to rotate through those offices something like 1 of every 10 years. If successful, maybe it's worth trying with civil law, too.

            FWIW, I'm not really "a liberal", but I did notice that more Conservative justices overturn Congress more than less Conservative justices [blogspot.com]. Which makes calling them "Conservative" ironic, and makes the Conservative
            I think it's become fairly obvious by now that "liberal" judges are ones that use the constitution to say things we've been doing all along were bad, while "conservative" judges are ones that use the constitution to say things we've just started doing are bad. /Don't like any judge that tries too hard to read what they want to hear out of the constitution. //Don't think anyone really purposefully does that, it's just how their mind is set.

              • Obvious to Republicans, maybe. What about recent crimes like the NSA wiretapping, lying us into Iraq, Guantanamo, Abu Ghraib, Terry Schiavo, signing statements...
                • And you think these things are new in war??? HAHA.

                  I wasn't trying to make a Republican/Democrat statement; I was trying to cover the old adage of
                  "Conservative" = Likes things the way they are.
                  "Liberal" = Likes to try new things.
                    War on Americans like domestic wiretapping, Iraq lies, American torture gulags, pandering to zombie lovers and unitary executive tyranny is recent, but not what "Conservative" judges are ruling against. They're ruling against civil rights, labor, environmental, "Watergate" oversight laws. Just because "Conservatives" want to "roll us back" to an imaginary past doesn't mean they're really "conserving" anything. Just like "liberals" don't necessarily want to "try new things", but usually do want to keep
                    • And just because you prefer "Conservative" activist judges to "liberal" ones doesn't make the Conservatives any less activist. They're more activist, and more dangerous.

                      I don't think I said that. I keep trying to make neutral statements, and I keep getting attacked.
                      I guess I just learned lesson one in Internet, don't bother arguing a neutral position against someone who obviously has an axe to grind.
      • by nebaz ( 453974 )
        They don't have to be paid by the state, merely licensed by the state. That license comes with certain responsibilities, I think some pro-bono work must occur every year under some circumstances, for example.
    • Re: (Score:3, Informative)

      by hackstraw ( 262471 ) *
      If Congress were serious about keeping the US economy "safe and effective", it would reform the "lawyers' job security" laws. Instead it will surely make them even worse, and make the lawyer tax on technology mandatory.

      I don't see that happening any time soon -- http://www.yourcongress.com/ViewArticle.asp?articl e_id=1671 [yourcongress.com]

      • Interesting stats. I'd love to see the percentage of challengers to incumbents who are lawyers. Every second November, like this coming November 7, 2006, we can fire all the lawyers in the House, and probably about 30% of the lawyers in the Senate. And replace them with people who legislate, rather than lawyer.
    • Yeah, but the problem here is that archive.org kept the material accessible even though their own policy is to delete material if robots.txt says to. It has nothing to do with the right of archive.org to ignore the robots.txt file, it's all about whether archive.org must follow their own published policies.
  • by kaizenfury7 ( 322351 ) on Thursday August 31, 2006 @06:17PM (#16019823)
    If you go directly to their site [healthcareadvocates.com], you get a version of their site that looks like it's from 1995.
    • Re: (Score:3, Funny)

      by cptgrudge ( 177113 )

      Quick! Get those people some Rounded Corners and Gradients!

      Welcome to Web 2.0!

      • No, nobody needs Web 2.0.

        But the site doesn't even look mildly professional. I could have made that page back in high school, and I SUCK at web design.
        • Since when did "professional" mean "difficult to make"? If the site conveys its content in a clear way who cares if you could have made it in high school? A web site that's simple to implement is a great thing, and extra technologies (that usually will increase development, maintenance and bandwidth costs) need to be justified in terms of how they actually make the site's experience better.
          • Re: (Score:3, Insightful)

            by MindStalker ( 22827 )
            I don't know, maybe I just don't expect my local newspaper to look like my high school newspaper.
            Initial impressions go a long way. It may seem silly to some people, but in business it can mean the difference between people taking you seriously and buying your product, or not.
    • Not anymore...
      Go Slashdot!
  • by InsaneGeek ( 175763 ) <slashdot@insanegeek s . com> on Thursday August 31, 2006 @06:18PM (#16019830) Homepage
    which is exactly what HA wanted to do. Obeying robots.txt files is voluntary, after all, and if the company didn't want the information online, they shouldn't have put it there in the first place

    So by the logic, if I didn't want AOL to release my search information I shouldn't be mad as it's my fault to have used them in the first place? Or that if I want my copyrighted information to not be republished by someone else, I should just simply not publish at all? How about, if I don't want my GPL code resold by someone in a closed source product I should just know better and not put it out in the open to begin with. And that if I post something stupid when I'm 9 we believe it should follow me around throughout my entire lifetime, because a 9 year old should know better.
    • by Amouth ( 879122 )
      it should.. follow you around forever.. but it should also be noted that you were 9.. and the other party has to decide if you at 9 knew better or not.. it is their point of view.

      the Wayback thing always told you when it was from.. never trying to pass it off as current
    • by Anonymous Coward
      So by the logic, if I didn't want AOL to release my search information I shouldn't be mad as it's my fault to have used them in the first place?

      You never intended to make your search results publicly available. These guys intentionally made their web page publicly available.

      Or that if I want my copyrighted information to not be republished by someone else, I should just simply not publish at all?

      That's a better point, but the question is whether the Wayback Machine "republished copyrighted material". If t
      • How about, if I don't want my GPL code resold by someone in a closed source product I should just know better and not put it out in the open to begin with.

        It is more similar to releasing it as public domain code, then someone puts it in a commercial product, then you change your mind and re-release it as GPL, then you sue the people who made the commercial product. And you should lose that case.
    • by fm6 ( 162816 ) on Thursday August 31, 2006 @06:51PM (#16020060) Homepage Journal

      Another example: someone I know wrote an essay that he thought only people in his class would ever see. It contained one or two mildly embarrassing disclosures, not terribly personal, but not something you'd want a complete stranger to know about you. Some idiot put it up on the school web site without his permission.

      Here's a nasty possibility. Suppose somebody unintentionally publishes information useful to terrorists. DHS drops by and points out the error, and the information is withdrawn. Does Wayback Machine have a right to keep the information online?

      In fact, Wayback Machine has never asserted their right to keep anything online. As the article points out, they'll remove stuff that's noncompliant with the current robots.txt, even though it was compliant at the time it was spidered. This lawsuit wasn't about their right to keep stuff online. It was just somebody accusing them of being negligent about enforcing their own policies.

      • by wik ( 10258 )
        What is their policy for websites that no longer exist? Their website says nothing about this.

        I want to remove archives of my websites for hostnames/domains that are no longer connected to the internet. Obviously, the robots.txt method cannot work here.
      • Here's a nasty possibility. Suppose somebody unintentionally publishes information useful to terrorists. DHS drops by and points out the error, and the information is withdrawn. Does Wayback Machine have a right to keep the information online?

        Why don't you just play the child pornography card instead? At least that's *illegal*, unlike putting publicly available information online instead of hidden in some dusty library guarded against terr'ists by a librarian.

        The fact is, if something is actually illeg
        • by fm6 ( 162816 )

          Who said anything about putting publicly available information online? It might, for example, be private information about a building that makes it easier to blow it up. "Our new death star has a state of the art venting port, located for easy access at ..."

          It's funny that you accuse me of bad faith, since you're lumping me in with the Bush administration's crazy attempts to control information. I didn't say anything about censorship. I simply pointed out that a web site can have legitimate reasons for

      • Wayback Machine has never asserted their right to keep anything online. As the article points out, they'll remove stuff that's noncompliant with the current robots.txt, even though it was compliant at the time it was spidered.

        I really hate that. When I want to find some info about some hardware made by a long-defunct company, I find old usenet posts referencing their website. This is now taken over by some scumbag who has filled it full of porn and viagra ads. I go to the Wayback Machine and find ALL the

      • Suppose somebody unintentionally publishes information useful to terrorists.
        Fearmongering. Great way to make your point - Sagan [carlsagan.com] called this an argument from adverse consequences.
      • Suppose somebody unintentionally publishes information useful to terrorists.

        Your information is useful to people. Terrorists are people. Therefore, your information is useful to terrorists.

        Therefore, you need to refrain from posting any information that is useful to people. Therefore, Slashdot is OK.
    • by gsn ( 989808 ) on Thursday August 31, 2006 @07:00PM (#16020114)
      That's crazy - when you typed your search term into AOL you had an expectation of privacy and you did not for one minute believe that they would release that data. All webpages are copyrighted, and the Wayback Machine is using fair use to archive copies for educational use. If you publish information (it's automatically copyrighted) and someone reproduces it, they might be able to under fair use or they might be infringing your copyright - talk to your lawyer. And yes, if you posted something on the net when you were 9 that was stupid, it might well follow you around for the rest of your life. Same goes if you were in a porno in college and you put it online. Sorry. Tough shit. Maybe your parents should have paid more attention to your online activities. Or you should have known better. IANAL and 9 year olds may get some protection as minors, but the basic point remains - if you publish something online you had no expectation of privacy. This is not at all what you were doing when you sent AOL your search queries - you published zilch.

      If you post something on the net then I can point my browser to it - there is no privacy, and nor was there any expectation of it. I could have used wget -r -erobots=off on your page every day and got all its content - and I'd have that archive even when you deleted it or moved it into some private archive, and it happily ignored your robots.txt. Since obeying robots.txt is voluntary, I simply chose not to.

      News websites often want you to pay for older content, but there is nothing theoretically stopping you from saving all the content day by day. You are comparing apples and oranges.

      Here's the summary - we posted evidence online that was used against us in a court of law, we lost, we sued the people who provided that evidence, and because it's cheaper to settle than deal with bloody lawyers we settled with them.
    • Re: (Score:3, Interesting)

      by DeadboltX ( 751907 )
      why do people make such god awful analogies?

      if you give private information to AOL and they release it publicly, then you can get upset.
      if you post private information on "check-out-my-ssn.com" and it's public to the whole world, then you can't get mad.
    • Re: (Score:3, Insightful)

      by alexhs ( 877055 )
      Maybe you need to inform yourself of what Robot [robotstxt.org] Exclusion [wikipedia.org] is and isn't.

      Its purpose is not to censor information but to avoid incidents caused by aggressive robots that could stress WWW servers (see the introduction in the first link).

      HA's action is revisionism. Like a politician yelling something, then a few years later claiming he never said such a thing and threatening people holding a piece of evidence to the contrary.
  • by saskboy ( 600063 ) on Thursday August 31, 2006 @06:18PM (#16019831) Homepage Journal
    ...Don't put it on the Internet. In fact, don't even type it into a computer, or write it down.
    People shouldn't put anything on the Internet that they wouldn't want their worst enemy, boss, NSA, or grandmother to see. Obviously, since the porn industry exists online, few people follow this rule, but it's a good one nonetheless.

    I enjoy Archive.org and when I get nostalgic about my websites of the past, it's there to show me a glimpse into history.
    • What about my financial information? Almost every bank, credit card, and bill I have is online, and there is little I can do about it.
      It might be fairly secure... but it's on the web. Point is, everything will eventually be on the web; it's only a matter of whether you trust the security of the site. Should you trust the security of MySpace? No..
      • by saskboy ( 600063 )
        "It might be fairly secure... But its on the web."
        Lack of real information security is the trade we made as a computerized networked society, for convenience in banking. With the effort saved in banking I'd say it's worth it, even with the potential identity scams that plague thousands of people every year. Crime happens whether it's online or off.
  • ... you can't really un-ring the bell of publishing something online...


    For the life of me I can't figure out what ringing a bell and publishing something online have in common. Maybe if we didn't use digital clocks we could turn back the sands of time and use a different mixed metaphor instead?

    • Re: (Score:3, Informative)

      by LordNimon ( 85072 )
      There's only one metaphor - "you can't unring a bell", so there is no mixed metaphor.
    • The use of the phrase "you can't unring the bell" in the discussion of Free Speech is an old one, based on the concept that no matter what you do after someone rings a bell, you can't "unring" it. The use here as an analogy is appropriate, in that you can't "un-release" information from the internet.
    • I've never heard this expression either, and I agree that it is poor. You just wait until the echoes of the bell have dissipated and it's like the bell never rang.

      [Curmudgeon]Un-ring? Bah! Nonsense.[/Curmudgeon]

  • But.... (Score:2, Informative)

    by Stanislav_J ( 947290 )
    ....even if Wayback did respect the robots.txt (which I was under the impression that they generally do), any pages archived before the robots.txt was placed on the server aren't going to automatically disappear -- they are still there. You have to directly ask them to remove the previously archived pages if you don't want them to be accessible.
    • by Kelson ( 129150 ) * on Thursday August 31, 2006 @06:47PM (#16020036) Homepage Journal
      I recently discovered exactly how the Wayback Machine deals with changes to robots.txt.

      First, some background. I have a weblog I've been running since 2002, switching from B2 to WordPress and changing the permalink structure twice (with appropriate HTTP redirects each time) as nicer structures became available. Unfortunately, some spiders kept hitting the old URLs over and over again, despite the fact that they forwarded with a 301 permanent redirect to the new locations. So, foolishly, I added the old links to robots.txt to get the spiders to stop.

      Flash forward to earlier this week. I've made a post on Slashdot, which reminds me of a review I did of Might and Magic IX nearly four years ago. I head to my blog, pull up the post... and to my horror, discover that it's missing half a sentence at the beginning of a paragraph and I don't remember the sense of what I originally wrote!

      My backups are too recent (ironic, that), so I hit the Wayback Machine. They only have the post going back to 2004, which is still missing the chunk of text. Then I remember that the link structure was different, so I try hitting the oldest archived copies of the main page, and I'm able to pull up the summary with a link to the original location. I click on it... and I see:

      Excluded by robots.txt (or words to that effect).

      Now this is a page that was not blocked at the time that ia_archiver spidered it, but that was later blocked. The Wayback machine retroactively blocked access to the page based on the robots.txt content. I searched through the documentation and couldn't determine whether the data had actually been removed or just blocked, so I decided to alter my site's robots.txt file, fire off a request for clarification, and see what happened.

      As it turns out, several days later, they unblocked the file, and I was able to restore the missing text.

      In summary, the Wayback Machine will block end-users from accessing anything that is in your current robots.txt file. If you remove the restriction from your robots.txt, it will re-enable access, but only if it had archived the page in the first place.
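What the parent observed can be sketched roughly as follows. This is purely my illustrative reading, not the Archive's actual code: the function name and paths are invented, and the assumption is that the archived data is never deleted, only gated on the site's *current* robots.txt at lookup time.

```python
from urllib import robotparser

def snapshot_visible(current_robots_lines, url, agent="ia_archiver"):
    """Serve an archived snapshot only if the site's *current*
    robots.txt would let the archive's crawler fetch that URL today.
    The stored snapshot itself is never deleted, merely hidden."""
    rp = robotparser.RobotFileParser()
    rp.parse(current_robots_lines)
    return rp.can_fetch(agent, url)

# While the old permalinks are listed in robots.txt, the snapshot is hidden:
blocked = ["User-agent: *", "Disallow: /old-permalinks/"]
print(snapshot_visible(blocked, "http://example.com/old-permalinks/review"))  # False

# Drop the rule, and the previously archived copy becomes reachable again:
print(snapshot_visible([], "http://example.com/old-permalinks/review"))  # True
```

That would explain both behaviors at once: the retroactive "Excluded by robots.txt" message, and the fact that loosening robots.txt several days later restored access to pages that had already been spidered.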
      • by ebyrob ( 165903 )
        In summary, the Wayback Machine will block end-users from accessing anything that is in your current robots.txt file. If you remove the restriction from your robots.txt, it will re-enable access, but only if it had archived the page in the first place.

        That's pretty cool. I wish more software behaved in a manner that well thought out.
        • by rthille ( 8526 )
          Cool maybe, but also bad. I can gain control over content [at least to prevent access] I never originally published if I now control the domain.

          That's uncool.

          • An acquaintance of mine was interested in basing a wargame on modern-day protest movements. One of the sources he planned to use was a19 -- an ad-hoc organization devoted to producing some sort of protest march on August 19th (of some year). They had a website called a19.org. It was no longer of any value to them, and the domain eventually found its way into the hands of a net parasite.

            You know the type:

            You searched for quantum chromodynamics. Would you like to buy flowers instead?

            and of course, robots.txt was

  • by scenestar ( 828656 ) on Thursday August 31, 2006 @06:33PM (#16019947) Homepage Journal
    Is that some sites that used to exist had no robots.txt file, yet still get blocked

    After a certain domain was no longer in use for years, some adware/search-rank link farm (whatever it is) added a robots.txt file to the "hijacked" domain.

    One can now get formerly accessible sites removed from archive.org. EVEN IF THE ORIGINAL OWNER NEVER INTENDED TO.
  • by Anonymous Coward on Thursday August 31, 2006 @06:36PM (#16019959)
    Check out their robots.txt: http://www.healthcareadvocates.com/robots.txt [healthcareadvocates.com] They ONLY restrict the Internet Archive from accessing their web site, but don't restrict any other spider... Haven't they heard of Google's cache?
    • Re: (Score:3, Interesting)

      by Sir Pallas ( 696783 )
      Which is funny, because ia_archiver is actually the Alexa Internet crawler; it's a throwback to before Amazon.com bought Alexa. (To this day, Alexa donates crawl data to the Archive.)
  • by proxima ( 165692 ) on Thursday August 31, 2006 @06:52PM (#16020063)
    Many people think of the Wayback Machine as being a tool for history and nostalgia. However, consider copyright expiration (IANAL, etc.). Many web pages have items like "Copyright 1995-2006 Blah". Some of the content was created as early as 1995. Assuming, of course, that items created in modern times eventually have their copyright expire, we will need a record of the content of these pages at that time.

    As more content moves online, the idea of publishing a work becomes blurred. Revisions years later can effectively update the copyright of the work, if the reader cannot distinguish when the content was created. So the Wayback Machine will hopefully provide that resource. The amount of potentially public-domain content there is huge.

    As a side note, it will be interesting to note when the first GPL programs (for example) lose their copyright. Of course, by then, the languages will seem more than archaic.
    • Actually, it's not without case law. If you change and then republish something, you get a new copyright on it. BUT someone can still copy the old material, if they can find a copy of the old version that has fallen out of copyright. /Yes, even you can take Shakespeare, change a few words, and copyright your publication. :)
      • by proxima ( 165692 )

        Actually, it's not without case law. If you change and then republish something, you get a new copyright on it. BUT someone can still copy the old material, if they can find a copy of the old version that has fallen out of copyright. /Yes, even you can take Shakespeare, change a few words, and copyright your publication. :)

        Right, I was operating under that assumption. Therefore, it is very important that we have a record of what existed at a given point in time.

        What I don't know for certain is the answ

  • First, let me make two points: 1) IANAL; 2) I wholeheartedly agree with the aims of the Wayback Machine and support that organisation. I am playing devil's advocate here.

    In the UK Computer Misuse laws, there is the concept of unauthorised access. It is an offence to access data on a computer system without authorisation.

    Typically it is assumed that access to data held on a publicly available website, without notice to the contrary, is authorised. A notice displayed stating that you should
    • Typically it is assumed that access to data held on a publicly available website, without notice to the contrary, is authorised. A notice displayed stating that you should not look at the data unless you are me is sufficient to make you aware that you should not access it.

      That sounds rather absurd. It's like posting a massive page of text in a busy public location, with a sticky note attached saying "do not read this text."

      I would think that in terms of computer networks, "unauthorized access" means breaki

    • Datajack: To whom are you playing devil's advocate?

      The IA does exactly that -- it respects robots.txt. Further, it RETROACTIVELY applies robots.txt. Now, this may not work (which is what the complaint was about). And AFAIK the retroactive edit doesn't remove data, it simply doesn't allow visibility (which is one of the reasons it may not work -- if there are two separate paths to the data, and the data is there, it can still be retrieved).

      The devil's advocate argument would be that IA may be necessary to retai
  • by Anonymous Brave Guy ( 457657 ) on Thursday August 31, 2006 @07:13PM (#16020193)

    Pretty much every time we have a discussion about the legality of web/Usenet archive sites, the only argument with any legal weight that's given for what would otherwise be a clear infringement of copyright is that the rightsholder is implicitly consenting to certain uses by making the material available on that medium. The degree to which this holds in general is debatable, and AFAIK has never been tested in any major court case in any jurisdiction. However, even if robots.txt is voluntary, it's a clear statement of intent. There is no way you can claim implicit permission to copy the material when the supplier explicitly indicated, using a recognised mechanism, that they did not want it copied.

    That makes comments like this one by Doc Ruby [slashdot.org] and this one by saskboy [slashdot.org] seem a little presumptuous, IMNSHO.

  • IIRC this was in response to a situation where someone was suing HA; the plaintiff's law firm hammered archive.org and was able to get some of the pages that they were interested in. At which point HA sued the archive for copyright infringement, because HA had changed its robots.txt to prevent the information from getting to the plaintiff's attorneys. The problem with this whole thing is that adding the robots file after the lawsuit is akin to destroying evidence during a trial and they should have been fou
  • Their policy is pretty simple, and direct, and involves minimal interaction with a human. (A bonus.)

    Put in a robots.txt.

    Direct wayback to index what you want or dont.

    THAT DIRECTION IS APPLIED TO FILES ON THEIR SITE FROM PREVIOUS VERSIONS.

    Meaning, if you deny all, and their bot sees it, all of your stuff is supposed to get deleted from the archive.

    If they didn't do that they violated their own policy.

    True, there can be complications (such as switching domain names) that might keep any given text in there wit
  • Obeying robots.txt files is voluntary, after all,

    It may still be voluntary today, but who knows what the future will bring?

    I, for one, welcome our robots.txt overlords.
  • wrong (Score:2, Interesting)

    by oohshiny ( 998054 )
    The US has copyright laws, and lots of people rely on them, including open source projects.

    The robots.txt file is a clear indication of the conditions under which a copyright holder gives you access to their copyrighted materials. As such, it is not "voluntary".

    In addition to probably being in violation of copyright law, it is simply rude for companies to ignore robots.txt files; if the Internet Archive does this, they are badly behaved.

    If courts should decide that robots.txt files can be ignored at will, th
  • Wrong, wrong, wrong (Score:4, Informative)

    by kimvette ( 919543 ) on Thursday August 31, 2006 @09:55PM (#16021066) Homepage Journal
    As the article notes, you can't really un-ring the bell of publishing something online, which is exactly what HA wanted to do. Obeying robots.txt files is voluntary, after all, and if the company didn't want the information online, they shouldn't have put it there in the first place."


    Wrong, wrong, wrong. archive.org explicitly tells you that if you want your content removed from their index, that you should modify your robots.txt and re-submit your site, and when their bot reads your robots.txt and sees the appropriate directives, your content will be dropped from the index. See:

    http://www.archive.org/about/faqs.php#2 [archive.org]

    http://web.archive.org/web/20050305142910/http://www.sims.berkeley.edu/research/conferences/aps/removal-policy.html [archive.org]

    Let's review the text here, just in case someone from archive.org scurries to change it:

    Addendum: An Example Implementation of Robots.txt-based Removal Policy at the Internet Archive

     


    To remove a site from the Wayback Machine, place a robots.txt file at the top level of your site (e.g. www.yourdomain.com/robots.txt) and then submit your site below.

    The robots.txt file will do two things:

              1. It will remove all documents from your domain from the Wayback Machine.

              2. It will tell the Internet Archives crawler not to crawl your site in the future.

    To exclude the Internet Archive's crawler (and remove documents from the Wayback Machine) while allowing all other robots to crawl your site, your robots.txt file should say:

                                                  User-agent: ia_archiver

                                                  Disallow: /

    Robots.txt is the most widely used method for controlling the behavior of automated robots on your site (all major robots, including those of Google, Alta Vista, etc. respect these exclusions). It can be used to block access to the whole domain, or any file or directory within. There are a large number of resources for webmasters and site owners describing this method and how to use it. Here are a few:

                          http://www.global-positioning.com/robots_text_file/index.html [global-positioning.com]

                          http://www.webtoolcentral.com/webmaster/tools/robots_txt_file_generator [webtoolcentral.com]

                          http://pageresource.com/zine/robotstxt.htm [pageresource.com]

    Once you have put a robots.txt file up, submit your site (www.yourdomain.com) on the form on http://pages.alexa.com/help/webmasters/index.html#crawl_site [alexa.com].

    The robots.txt file must be placed at the root of your domain (www.yourdomain.com/robots.txt). If you cannot put a robots.txt file up, submit a request to wayback2@archive.org.


    By not honoring those directives, are they not engaging in both copyright infringement and fraud?

