
Google Loses Cache-Copyright Lawsuit in Belgium

Zonk posted more than 7 years ago | from the no-convenience-for-you dept.

Google 340

acroyear writes "A court in Belgium has found that Google's website caching policies are a violation of that nation's copyright laws. The finding is that Google's cache offers effectively free access to articles that, while free initially, are archived and charged for via subscriptions. Google claims that they only store short extracts, but the court determined that's still a violation. From the court's ruling: 'It would be up to copyright owners to get in touch with Google by e-mail to complain if the site was posting content that belonged to them. Google would then have 24 hours to withdraw the content or face a daily fine of 1,000 euros ($1,295 U.S.).'"


Har Har (4, Informative)

N8F8 (4562) | more than 7 years ago | (#17996922)

The ruling basically reiterates the current Google policy.

Waffles (5, Funny)

bostons1337 (1025584) | more than 7 years ago | (#17996926)

Don't they have anything better to do....like make us Americans some waffles.

So no "fair dealing" or "fair use" in Belgium? (2, Interesting)

Anonymous Coward | more than 7 years ago | (#17996938)

I thought the whole EU had some sort of "fair dealing" exemptions. If they do, I can't believe that Google's lawyers lost this.

Re:So no "fair dealing" or "fair use" in Belgium? (2, Informative)

radja (58949) | more than 7 years ago | (#17997854)

The EU has a copyright directive. It's up to the individual countries to turn it into national law, so copyright law still differs across countries in the EU.

Re:So no "fair dealing" or "fair use" in Belgium? (2, Insightful)

scorpionsoft.be (994417) | more than 7 years ago | (#17997902)

Well, in this country, you don't win in court just because you have 100 good lawyers.

Re:So no "fair dealing" or "fair use" in Belgium? (1)

rucs_hack (784150) | more than 7 years ago | (#17998318)

I'm currently writing up my thesis, and to be frank, without the Google cache I'd have to pay a small fortune just to gain access to the abstracts of some papers I need. It would be very difficult to do what I need to do.

I even found that some papers I've published are locked behind these pay-per-view portals. OK, I have copies, but given a choice I'd insist they be available free.

The Google cache lets me find papers stored outside these portals, often on people's university home space. Without it I simply couldn't reference some work. As it is, I've had to abandon some research because I can't find the things I need in the Google cache.

The portals do provide a service, and yes, they should be paid, but I dispute that they should be the only place to find those initial abstracts.

A ruling that content on non-free services cannot be cached will keep researchers on a low budget from doing research, or so I feel.

That's unfortunate (2, Interesting)

aussie_a (778472) | more than 7 years ago | (#17996952)

That is unfortunate, but I'm amazed caching is even legal in some (most?) countries. It's always seemed like just rampant copyright infringement to me, except of course the law in certain countries makes an exception for it.

Ridiculous (5, Insightful)

brunes69 (86786) | more than 7 years ago | (#17996998)

If you can't cache content, then you can't search it.

You have to copy content to your local machine to index it, and to be able to select results with context. Hell, you have to copy it to *VIEW* it.

The courts and the law need to wake up and realize you can't do anything with a computer without copying it a dozen times. 25% or more of what your computer does is copy things from one place (network, hard drive, memory, external media) to another.

Re:Ridiculous (4, Insightful)

aussie_a (778472) | more than 7 years ago | (#17997052)

There's a difference between keeping a local copy and distributing it.

Not in terms of copyright law (0, Redundant)

brunes69 (86786) | more than 7 years ago | (#17997124)

See subject for text.

Re:Not in terms of copyright law (3, Informative)

91degrees (207121) | more than 7 years ago | (#17997258)

Yes, it is different. In most countries, unauthorised distribution carries much heavier penalties than unauthorised possession (which may indeed have no penalty attached at all).

Re:Ridiculous (1)

Vexorian (959249) | more than 7 years ago | (#17998380)

I guess this means search engines in general should only show URL results and nothing else; heck, even the title of the page may be copyrighted.
Then a "totally legal" search for flying spaghetti monster would look like this:
http://www.venganza.org/ [venganza.org] http://www.venganza.org/games/index_large.htm [venganza.org] http://en.wikipedia.org/wiki/Flying_Spaghetti_Monster [wikipedia.org] http://flyingspaghettimonster.org/ [flyingspag...onster.org] http://uncyclopedia.org/wiki/Flying_Spaghetti_Monster [uncyclopedia.org] http://blog.pietrosperoni.it/2005/08/28/duck-and-cover-and-the-flying-spaghetti-monster/ [pietrosperoni.it]

I guess it would be quite fun.
PS: Is anyone else wondering why that website's "subscribers only" articles were not only publicly available, but reachable by plain links?

Re:Ridiculous (5, Insightful)

jandrese (485) | more than 7 years ago | (#17998022)

So the answer is obvious: just delist these guys from Google entirely and configure the webcrawler to ignore them. Problem solved, and you won't have to worry about them coming back later and claiming that your locally stored copy is a copyright violation too.

Re:Ridiculous (1)

charlieman (972526) | more than 7 years ago | (#17998294)

Or just don't show the cache link in searches for those sites.

Re:Ridiculous (1)

drinkypoo (153816) | more than 7 years ago | (#17998168)

25% or more of what your computer does is copy things from one place (network, hard drive, memory, external media) to another.

I guess that explains why computers still seem so slow. 50% of the time they're deciding whether or not to make a jump (and making one) and 25% of the time they're shoveling bytes; that only leaves 25% of the time to actually do work :D

Re:That's unfortunate (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17997214)

If you don't want it cached, then don't make it publicly available on your website.

If you must make it publicly available on your website, then don't complain when it gets cached.

If your business model requires that everyone else in the world do absurd things that don't make sense (like fail to cache and redistribute publicly available information when the cost to do so is virtually zero), then go find a better business model.

Our laws should not make us pretend that reality is other than it is, or that the technological landscape has failed to take on a new shape.

Current copyright law is producing these sorts of absurd contradictions. The law, not the basic principles of human behavior, should be changed.

Re:That's unfortunate (0, Troll)

mikkelm (1000451) | more than 7 years ago | (#17997310)

Hey, that's a great idea.

If you don't want your music copied, don't release it.

If you don't want your book copied, don't release it.

If you don't want your trademarks infringed, don't publicise them.

If you don't want to be robbed, don't walk the streets at night.

Don't complain if you actually decide to do any of these things, 'cause you gave people the opportunity to abuse it.

Re:That's unfortunate (0)

Anonymous Coward | more than 7 years ago | (#17997516)

Being robbed is not analogous to any of those. Your post failed.

$1,295 per day? (5, Funny)

Rude Turnip (49495) | more than 7 years ago | (#17996978)

That's $472,675 per year, or, in Google's accounting terms, $0 after rounding to the nearest million.

Re:$1,295 per day? (2, Insightful)

ceejayoz (567949) | more than 7 years ago | (#17997006)

I suspect that's per-site, though.

Re:$1,295 per day? (5, Funny)

Anonymous Coward | more than 7 years ago | (#17997226)

1,000 sites * $0 still = $0.

Re:$1,295 per day? (1)

Tristandh (723519) | more than 7 years ago | (#17997062)

According to the article in the (quality) newspaper I read (http://www.standaard.be/Artikel/Detail.aspx?artikelid=DMF13022007_023, Dutch only), the stated fine is €25,000 per day. That amounts to a lot more than $472,675 per year: €9,125,000 using your method of calculation (it should probably only count business days, but that doesn't really matter now). (Note: €9,125,000 is approximately $11,877,000.) Also, this is the second ruling on the matter. In the first ruling the fine was €1,000,000 per day the articles were on a Google site.

Re:$1,295 per day? (1)

DrEldarion (114072) | more than 7 years ago | (#17997238)

From what I understand, 25,000 per day is the retroactive fine. Going forward, it's the lower figure.

Surely it's a win? (1)

Threni (635302) | more than 7 years ago | (#17997014)

I mean, they get to continue to cache. If they're told to remove it, they can. Or they can simply stop updating their database with any links to the companies' site(s), just to make sure they don't accidentally infringe, and then the sites can start using robots.txt a little more successfully.

Big Deal. (1, Funny)

Innova (1669) | more than 7 years ago | (#17997026)

So Google lost some cache...they have a market cap of over $140 Billion, no biggie.

What's the problem? (5, Insightful)

DrEldarion (114072) | more than 7 years ago | (#17997036)

If they don't like it, they can very easily "opt out" by using Robots.txt to disallow Googlebot. I fail to see where the problem is here.
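For anyone who hasn't set one up: a minimal robots.txt along these lines (example.com is just a placeholder) stops Googlebot from fetching anything on the site, so there is nothing left to index or cache:

User-agent: Googlebot
Disallow: /

Drop it at the web root (http://example.com/robots.txt) and the crawler stops pulling pages on its next visit.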

Re:What's the problem? (4, Insightful)

CRCulver (715279) | more than 7 years ago | (#17997090)

That argument makes no sense before the law. If publishing companies don't like me photocopying their books and passing them on to people, laden with ads for profit, could I say "No, the companies should have printed them on special anti-photocopying paper"? No. Google broke the law. The law assigns no responsibility to copyright holders to protect their property from those who would copy it, but it does bind the citizenry not to copy.

FWIW, I hate the entire idea of copyright, I'm just trying to show how Google has to act in court.

Re:What's the problem? (4, Insightful)

DrEldarion (114072) | more than 7 years ago | (#17997218)

Here's the rub, though:

1) The web page is publicly accessible for free to begin with. That complicates things quite a bit.
2) The ruling from the court doesn't say Google needs to stop caching, it just says that Google has to provide an opt-out. That option already exists.

Re:What's the problem? (2, Informative)

that this is not und (1026860) | more than 7 years ago | (#17997332)

That doesn't matter. Publishers of those free urban tabloids still retain copyright on the articles and graphics given away for free in the tabloids.

Re:What's the problem? (4, Insightful)

inviolet (797804) | more than 7 years ago | (#17997632)

Good answer.

This ruling doesn't significantly hurt Google. Alas, it only hurts everyone else -- all billion or so of Google's users. Having quick access to (at least a chunk of) a piece of content, especially when that content has expired or is temporarily unreachable, is convenient and valuable. Many times in my own searches, the piece of data I anxiously sought was available only in the cache.

Let's hope that Google does not respond to the ruling by across-the-board reducing or removing the cache feature.

Really? (4, Insightful)

gillbates (106458) | more than 7 years ago | (#17997400)

If that is true, then why do I see copyright statements at the beginning of books and DVDs? It would seem the publishers are being hypocritical - they post their content publicly, refuse to use the robots.txt file, and then go on a litigation rampage when someone actually makes use of their web site. They're little different than the kid who takes his ball and goes home when he starts losing the game.

Furthermore, I would argue that posting to a web page is implied permission because the owners do so expecting their work to be copied to personal computers. In an interesting turn of events, private individuals are allowed to copy and archive web pages, but Google is not.

Re:What's the problem? (5, Insightful)

91degrees (207121) | more than 7 years ago | (#17997442)

It's basically about established practice. We've pretty much established right and wrong when copying a book. As a rule, you don't do it. In many countries, libraries and schools have a licensing agreement that allows photocopying. With TV shows it's considered perfectly acceptable to copy an entire show. Audio mix tapes are usually considered acceptable or explicitly legal.

On the web, caching search engines have been around for a lot longer than expiring content has. It's established that search engines are a necessity, and that robots.txt is the way to opt out. When you do business in a new arena, it makes sense that the existing rules of the arena should apply.

Re:What's the problem? (1)

bill_mcgonigle (4333) | more than 7 years ago | (#17997866)

No. Google broke the law. The law assigns no responsibility to copyright holders to protect their property from those who would copy it

TFS says:

It would be up to copyright owners to get in touch with Google by e-mail to complain if the site was posting content that belonged to them. Google would then have 24 hours to withdraw the content

Re:What's the problem? (1)

tiocsti (160794) | more than 7 years ago | (#17997980)

I agree. I think Google should comply with the law and, on request, remove any company's data from their cache, as well as remove the company from the search engine entirely. Problem solved.

Re:What's the problem? (4, Interesting)

petabyte (238821) | more than 7 years ago | (#17997132)

Or, even better, use the META tag to set NOARCHIVE:

<meta name="ROBOTS" content="NOARCHIVE" />

All of my website (quaggaspace.org) shows up in google, but you'll notice there is no "cached" button.
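If I remember right, Google also honors a crawler-specific form of the same tag, plus a NOSNIPPET value if you want the excerpt on the results page suppressed as well:

<meta name="GOOGLEBOT" content="NOARCHIVE" />
<meta name="ROBOTS" content="NOSNIPPET" />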

Here is the problem (3, Interesting)

roman_mir (125474) | more than 7 years ago | (#17998128)

Why should we have to opt out from being cached? Why can't we opt in instead? I think the phone calls made by marketers are a perfect example of this. If you need your page to be found on Google or other search engines, add a meta tag which explicitly lets a search engine collect the page for indexing/caching. In fact, make these permissions explicit and separate: let search engines either index, or cache, or both.
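Purely as a sketch of the idea (no search engine actually recognizes a tag like this; the attribute values are invented here to illustrate), an opt-in scheme could look like:

<meta name="ROBOTS" content="ALLOW-INDEX, ALLOW-CACHE" />

with the absence of the tag meaning a crawler may neither index nor archive the page.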

Re:What's the problem? (3, Insightful)

suv4x4 (956391) | more than 7 years ago | (#17997302)

If they don't like it, they can very easily "opt out" by using Robots.txt to disallow Googlebot. I fail to see where the problem is here.

Problem is.... newspapers, wanna have their pie and eat it too.
Solution.... it's Google's fault.
Result.... news dinosaurs go extinct and news mammals come to rule Earth
Moral.... don't be greedy beyond survival.

Re:What's the problem? - Desired Outcome/Wet (1)

Nom du Keyboard (633989) | more than 7 years ago | (#17997642)

Desired outcome/Wet dream... they want a big wad of Google's big pile of $$$$$

Re:What's the problem? (1)

CultFigure (563155) | more than 7 years ago | (#17997524)

A more likely *correction* by Google will be to not list said website at all in any search. Let's see how long this ruling (and supporting law) lasts when companies that complain start getting delisted from Google.

Re:What's the problem? (1)

tlhIngan (30335) | more than 7 years ago | (#17998248)

A more likely *correction* by Google will be to not list said website at all in any search. Let's see how long this ruling (and supporting law) lasts when companies that complain start getting delisted from Google.


I don't want Google to delist. That's the easy way and Google obeys the 10 million ways to not have your site indexed/cached/traversed/whatever. Let Google drop those sites to forced pagerank zero. Which is known to cause some interesting side effects, actually. If they complain that their traffic drops off, well, that's what they wanted...

Re:What's the problem? (1)

mpcooke3 (306161) | more than 7 years ago | (#17997694)

I can "opt out" of having my stuff stolen by putting locks on my doors and windows.
But I don't see why, if I forget to lock my door or choose not to bother, it should be legal for someone to take all my stuff.

Re:What's the problem? (1)

MightyYar (622222) | more than 7 years ago | (#17998236)

I appreciate the need for analogy since intellectual property law is so... well, complicated and obtuse. However, analogies involving physical objects will always fail when applied to intellectual property. This is because taking someone's physical property is almost always morally wrong, whereas morality generally does not apply to intellectual property.

In this case, the court said that it is fine for Google to copy, but the copyright holders have a right to have any offending content taken down within 24 hours of emailing Google. This is a pretty weird ruling, since Google has had several effective opt out options for quite a while, including the robots.txt file and the meta tag that disables caching. I guess this ruling adds an option in case you make a mistake and accidentally allow Google to cache your site... sort of a morning after pill for stupid webmasters.

24 hours! (2, Funny)

loconet (415875) | more than 7 years ago | (#17997056)

"Google would then have 24 hours to withdraw the content or face a daily fine of 1,000 euros ($1,295 U.S.).'""

I think it is safe to say they can afford to take their time...

Why are newspapers retarded? (3, Insightful)

Mr. Underbridge (666784) | more than 7 years ago | (#17997058)

If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.

Re:Why are newspapers retarded? (0, Redundant)

Lumpy (12016) | more than 7 years ago | (#17997276)

Yup.

Ok we will stop caching. and listing you in our search engine.

Getting your site removed from Google is a death-knoll for you.

Re:Why are newspapers retarded? (1)

drinkypoo (153816) | more than 7 years ago | (#17998090)

Getting your site removed from Google is a death-knoll for you.

Is it grassy? Are there three shooters?

Back, and to the left. Back, and to the left. Back... and to the left.

Re:Why are newspapers retarded? (0)

Anonymous Coward | more than 7 years ago | (#17997322)

If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.

You suck at teh internets. This is about the "google cache" link supplied on Google's search results page.

Re:Why are newspapers retarded? (2, Insightful)

PeterBrett (780946) | more than 7 years ago | (#17998202)

If I'm Google, I turn the morons off and see how fast they come screaming back when their ad revenue plummets. Seriously, IT'S FREE FREAKING ADVERTISING. Google should be charging *them*.
You suck at teh internets. This is about the "google cache" link supplied on Google's search results page.

No, he makes a good point. If someone files a lawsuit against Google, all Google would have to do to stop them would be to suspend their site from all indexing and search results. There's no God-given right to be indexed by a search engine. To use a bad analogy: imagine you sell hot meaty pies, and some random guy walks around the town carrying a board with the words, "Eat Anonymous Coward's Hot Meaty Pies Today!!!". Now imagine that guy does it for free. Suing Google is somewhat like taking the guy to court because "Anonymous Coward" is your trademark and he didn't pay for a license to use it.

What about MY memory, is that a cache? (1)

thomasdz (178114) | more than 7 years ago | (#17997100)

Just speculating... what happens when I REMEMBER the free version of the article? Am I now violating Flemish copyright laws?
This really seems to be the direction that things are going.

Re:What about MY memory, is that a cache? (3, Informative)

Potor (658520) | more than 7 years ago | (#17997196)

Actually, the action was begun by French- and German-language papers and adjudicated in a Brussels court, and thus has nothing to do with anything Flemish.

Re:What about MY memory, is that a cache? (1)

thomasdz (178114) | more than 7 years ago | (#17997260)

Oh. (embarrassed to admit I didn't RTFA)
Sorry.

Re:What about MY memory, is that a cache? (2, Funny)

Dog-Cow (21281) | more than 7 years ago | (#17998166)

Posting on slashdot means never having to be embarrassed for not RingTFA.

Re:What about MY memory, is that a cache? (0)

Anonymous Coward | more than 7 years ago | (#17997316)

Good point. But remember this is the French/German speaking guys that sued Google. The Flemish media seem to understand that Google actually drives them traffic and so far have shown no indication of wanting to sue Google. Flemish stuff is cached by Google, and as far as I know that won't change.

Please flame the right language group in this funny little country :)

Re:What about MY memory, is that a cache? (1)

db32 (862117) | more than 7 years ago | (#17997548)

I hope you don't print anything.

Personal Responsibility (1, Insightful)

kimvette (919543) | more than 7 years ago | (#17997118)

Personal Responsibility

Google caching is a free service which is optional. Web site owners have total control over it. Note the following:

<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">


If this is in place the site does not get cached.

I hope Google is responding to such frivolous complaints and lawsuits by completely removing those sites from their index. If they do not remove those companies, they are doing evil by omission, by allowing companies that do evil to remain in business.

Re:Personal Responsibility (3, Informative)

kimvette (919543) | more than 7 years ago | (#17997144)

Sorry that was the browser cache.

THIS is the correct tag:

<META NAME="ROBOTS" CONTENT="NOARCHIVE">

Sorry about the brain fart. I wish we could edit posts (preview, I know, but that would not have made me catch this one)

Re:Personal Responsibility (1)

Hijacked Public (999535) | more than 7 years ago | (#17997250)

Although your continued mastery of HTML's bold tag is impressive, I feel I should point out (a couple of earlier posts have done so as well) that copyright law generally places the responsibility on the person doing the copying, rather than on the rights holder. If robots worked in the opposite direction and only copied material when a tag explicitly allowed them to, it would be a better fit with existing law.

Not that copyright law doesn't need improvement in this area, but blaming the rights holders is off the mark.

Re:Personal Responsibility (3, Insightful)

Anonymous Coward | more than 7 years ago | (#17997378)

Well, if the rightsholders don't want people/robots to access their "jewels" then maybe they shouldn't fucking publish them on a public net in the first place?

Re:Personal Responsibility (0)

Anonymous Coward | more than 7 years ago | (#17997478)

This is why google should de-list ALL sites that have copyrighted material!

Re:Personal Responsibility (1)

poot_rootbeer (188613) | more than 7 years ago | (#17998222)

if the rightsholders don't want people/robots to access their "jewels" then maybe they shouldn't fucking publish them on a public net in the first place?

When they publish their work on a public net, that does not by any stretch mean they are relinquishing copyright to the work.

Re:Personal Responsibility (1)

MightyYar (622222) | more than 7 years ago | (#17998398)

Since it is impossible to read a website without making a copy, aren't they implicitly allowing a copy by hosting their IP on a public web server? How is Google supposed to know that there are restrictions on the nature of the copy without any kind of notice? We already ask for permission to copy - our web browser sends a GET request. They can either deny or supply us with the copy after that. If they want to restrict the copy, they are free to send me a license agreement that I must agree to as a response to my initial GET request. Otherwise they are supplying a copy without restriction, are they not?
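To sketch what that exchange looks like on the wire (the host and path below are made up), the browser literally asks for a copy and the server hands one back; the reply is also exactly where a server can already attach caching restrictions:

GET /archive/2007/02/ruling.html HTTP/1.1
Host: www.example-paper.be

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: private, no-store

Of course, Cache-Control only governs HTTP caches such as browsers and proxies, not a search engine's public archive, which is arguably part of the muddle here.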

Re:Personal Responsibility (1, Insightful)

McDutchie (151611) | more than 7 years ago | (#17997744)

Google offers free access to a complete cached copy of your site by default. You should not have to opt out of having your copyright violated, any more than you should have to opt out of getting spammed, getting mugged in the street, etc. That is putting the world upside down. The violator should not have committed the violation to begin with. Offering complete cached/archived copies of websites should only happen with explicit permission.

Re:Personal Responsibility (5, Insightful)

jandrese (485) | more than 7 years ago | (#17997954)

Which is not only completely impractical (very few sites would set the "cacheme" flag because almost nobody would know about it), but counter to the way the internet works. By default you have to assume that anything you post on the internet will be tracked by search engines, blogged about, cached, etc... That happens to _everything_ on the internet, it's the nature of the beast. That's also why the internet works so well. If you want to make your page behave differently than all of the other pages on the internet, then you need to look into setting some very easy to use flags (robots.txt and the meta tags listed above) to change the behavior. You can't assume that just because it's yours that it will be treated specially. If you're really worried about it then don't post on the internet, plain and simple.

Re:Personal Responsibility (1)

McDutchie (151611) | more than 7 years ago | (#17998240)

Which is not only completely impractical (very few sites would set the "cacheme" flag because almost nobody would know about it), but counter to the way the internet works. By default you have to assume that anything you post on the internet will be tracked by search engines, blogged about, cached, etc... That happens to _everything_ on the internet, it's the nature of the beast.

Yes, tracking, caching, being blogged about, etc. is normal, natural, and okay. But just because your website gets tracked, cached and talked about does not mean that the cache is automatically republished wholesale! I don't put my browser cache on the Internet either, do I? That kind of republication is a conscious, intentional act on the part of sites like Google and archive.org for which they have no prior permission.

Being able to opt out is not good enough. In your own words: it's completely impractical (very few sites would set the "nocache" flag because almost nobody would know about it).

Try "caching" cnn.com on your own website and see how fast you'll get sued. Mere mortals like you and me won't get away with that; only big companies (Google) and US government entities (archive.org) do. They have successfully placed themselves above the law. This is simply class justice.

Re:Personal Responsibility (2, Insightful)

nstlgc (945418) | more than 7 years ago | (#17998056)

Being the devil's advocate:

Spam is a free service which is optional. Email address owners have total control over it. Use the unsubscribe link at the bottom of the email.

Assuming those unsubscribe links would work (we all know they don't), would you consider this a logical way of thinking? If tomorrow some other caching company comes along and introduces another way in which website owners have 'total control', will that clear them of copyright violation? What if I want my content to be cached on proxies, but I don't want it to be accessible from a massive, publicly accessible and searchable cache?

Personal opinion:

To be honest, I don't think Google needs to stop caching anything automatically. The ruling states copyright owners need to contact Google and Google needs to respond by taking the content offline within 24 hours. That doesn't seem completely impossible to do, and that way they can keep caching those who don't contact them.

blocking belgium (1)

projektsilence (988729) | more than 7 years ago | (#17997130)

So now does Google block the entire country of Belgium in order to make sure they don't allow them to read cached material? If so, I say 'HA!'

Re:blocking belgium (1)

AxminsterLeuven (963108) | more than 7 years ago | (#17997220)

It's only for the French- and German-speaking parts of Belgium. [politicaljoke]So in this case, there is no Flemish Block. [/politicaljoke]

Public Domain (1, Insightful)

C_Kode (102755) | more than 7 years ago | (#17997164)

The finding is that Google's cache offers effectively free access to articles that, while free initially, are archived and charged for via subscriptions.

The way I see it, once you release media free of charge to the general public its content becomes public domain.

Re:Public Domain (0)

Anonymous Coward | more than 7 years ago | (#17997352)

Hi,

> The way I see it, once you release media free of charge to the general public its content becomes public domain.

Yes, but it's not released free of charge - it's paid for by advertising on the page.

Plus, if you write some software and you release it free of charge for a few days to generate interest, does that mean it becomes public domain? And does that mean I can put it on my site to generate revenue for me, and then not pay you for your trouble even after you decide to charge for it?

Cache tags and robots.txt files are not binding in any way and are completely ignored by some search engines.

Re:Public Domain (0)

C_Kode (102755) | more than 7 years ago | (#17997458)

If you release your source code freely to the public, you can't resend that offering. This was used against SCO Caldera in their attack on Linux. They were allowing free downloading of what they claimed was their private IP included in the Linux kernel. They couldn't resend that fact.

Re:Public Domain--Recind (1)

Nom du Keyboard (633989) | more than 7 years ago | (#17997680)

you can't resend that offering.

I think you mean recind. Resending means you'd send it to them again, even if you didn't want them to have it any longer.

Re:Public Domain--Recind (1)

Anonymous Coward | more than 7 years ago | (#17997708)

The word is rescind, folks.

*sighs sadly*

Re:Public Domain (2, Insightful)

grimwell (141031) | more than 7 years ago | (#17997934)

The way I see it, once you release media free of charge to the general public its content becomes public domain.


Wouldn't that undermine the GPL? If the linux kernel is in the public domain, companies could use it freely without having to give back.

Or what about street-performers performing their own material?

Re:Public Domain (3, Insightful)

kramer (19951) | more than 7 years ago | (#17998404)

The way I see it, once you release media free of charge to the general public its content becomes public domain.

Then, perhaps it's good that the rest of the world doesn't see it the way you do.

Because if the world were to be the way you see it, the entire web content industry would immediately go pay-per-view or subscription only to avoid all their work becoming public domain. Yes, what you propose would literally destroy the useful and open environment of the Internet.

Servers, bandwidth, and writers don't pay for themselves. If these sites can be copied wholesale and put up elsewhere without the original author having a say in the matter, you've just destroyed any monetary incentive to create. Much as many people like to think otherwise, money is important, and a strong incentive to create.

More stupidity (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17997206)

If a publisher doesn't want their page cached, there are technical measures they can and should take. The legal system isn't a crutch for idiots who can't tie their own shoelaces or wipe their own assholes. If an organization lacks the technical proficiency to publish on the web, they should stop publishing on the web. Search engine caches are an important and useful feature that's being ruined for everyone because some stupid twat sees a payoff from Google.

Content providers may shoot themselves... (3, Interesting)

jvkjvk (102057) | more than 7 years ago | (#17997212)

...in the foot.

I don't believe that Google currently is mandated to show users any particular results. The simplest technological solution for Google might be to drop indexing the sites that send these takedown notices entirely. No index, no cache; dump it all and don't look back.

They are in no way legally bound to come up with a more advanced solution that would cost more $$ and add more complexity to the codebase.

Now, because there very well may be information that is unavailable anywhere else (although it seems relatively unlikely; yes, they might have copyrighted articles that are unavailable otherwise, but I can't imagine the information contained in them is unique, unless you're talking about creative works), Google may try to work something out. Oh, that and they are remarkably not evil compared to the power they currently wield.

Imagine how many takedown notices they would receive after the first few rounds of complaining companies can no longer be found through Google...

Good!!! (1)

iminplaya (723125) | more than 7 years ago | (#17997320)

I'm for anything at all that will wake people up to the tyranny of IP law. Keep it coming. Lockdown Windows. These are the things we need to provoke action, unfortunately. So, bring it on! Until we puke.

Oblig Monty Python Reference (5, Funny)

Hoi Polloi (522990) | more than 7 years ago | (#17997342)

"Well now, the result of last week's competition when we asked you to find a derogatory term for the Belgians. Well, the response was enormous and we took quite a long time sorting out the winners. There were some very clever entries. Mrs Hatred of Leicester said 'Let's not call them anything, let's just ignore them.' and a Mr St John of Huntingdon said he couldn't think of anything more derogatory than Belgians. But in the end we settled on three choices: number three, the Sprouts, sent in by Mrs Vicious of Hastings, very nice; number two, the Phlegms, from Mrs Childmolester of Worthing; but the winner was undoubtedly from Mrs No-Supper-For-You from Norwood in Lancashire, Miserable Fat Belgian Bastards!"

Abstracts are illegal? (2, Interesting)

mshurpik (198339) | more than 7 years ago | (#17997344)

>Google claims that they only store short extracts, but the court determined that's still a violation.

Abstracts are generally a) uninformative and b) free. Seems like a huge overreaction on the EU's part.

Re:Abstracts are illegal? (2, Insightful)

pinky99 (741036) | more than 7 years ago | (#17997510)

Wow, I didn't notice that the EU was conquered by Belgium over night...

Re:Abstracts are illegal? (1)

mshurpik (198339) | more than 7 years ago | (#17997672)

Oh, you mean your version of Slashdot shows the *article* (not post) you're replying to?

Damn, Malda must have fixed that in the last five minutes.

Re:Abstracts are illegal? (2, Insightful)

poot_rootbeer (188613) | more than 7 years ago | (#17998412)


"Abstract" and "extract" are not interchangeable terms.

An abstract is a meta-description of a document, giving an overview of its content but usually not using any of the document content itself. An extract, on the other hand, is a literal subset of the document.

Simple Answer... (1)

andreMA (643885) | more than 7 years ago | (#17997350)

Google just makes a policy that they don't index any site that even once sends such a request. Problem solved. More seriously, maybe an extension to robots.txt that defines cache lifespan would be reasonable.

Extend robots.txt? (3, Insightful)

140Mandak262Jamuna (970587) | more than 7 years ago | (#17997362)

Can't Google propose an extension of the robots.txt file format to allow the original publishers to set a time limit on when the search engines should expire the cache?
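Purely hypothetical syntax (no such directive exists in robots.txt today; the field name and value are invented to illustrate the idea):

User-agent: *
Cache-expires: 14d
Disallow: /archive/

i.e. crawl and cache freely, but drop the cached copy two weeks after the crawl, and stay out of the paid archive altogether.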

Implications for proxies (3, Informative)

l2718 (514756) | more than 7 years ago | (#17997380)

What does this say about proxy services, then? They also store content which may be subject to copyright and serve it to users.

Belgium! (1)

AJWM (19027) | more than 7 years ago | (#17997402)

Am I misremembering, or wasn't it also Belgium that ruled against Lindows in the trademark lawsuit that Microsoft brought? (After a US court said essentially that since "windows" was an English word, MSFT didn't stand much chance of winning the US suit.)

If so, perhaps there's good reason that in "The Hitchhiker's Guide to the Galaxy", Belgium is a swear word.

Good, I don't want to find that! (3, Interesting)

Heddahenrik (902008) | more than 7 years ago | (#17997434)

I often get irritated when I find stuff with Google and then can't read it. Who wants to find a short text describing exactly what you're searching for, only to find out you have to pay or go through some procedure to actually read the stuff?

I hope Google removes these sites totally. Then, as others have written, we need a law that says that the people putting stuff on the web have to write correct HTML and robots.txt files if they don't want their content cached. Google can't manually go through every site on the web, and it would be even harder for Google's smaller competitors.

That fine... (1)

PHAEDRU5 (213667) | more than 7 years ago | (#17997530)

We call it a "Belgian Dip."

Just Pull Out (5, Insightful)

Nom du Keyboard (633989) | more than 7 years ago | (#17997536)

Google ought to just pull out of indexing anyone who complains about their methods. You effectively disappear off the Internet w/o Google, and these whiny complainers deserve exactly that. Maybe after they've lived in a black hole for a while they'll realize the benefit of having their free material easy for web users to find and view.

some statements from belgian media (-1, Troll)

circletimessquare (444983) | more than 7 years ago | (#17997572)

http://www.gva.be/nieuws/politiek/ [www.gva.be]

Vlaams Belang is stepping up its fight against the exclusion of its candidates and militants by the trade unions, declared Marie-Rose Morel, chairwoman of the trade-union cell within the party.
Four proceedings are under way, one of which will be initiated tomorrow/Wednesday in Dendermonde.

That case was brought by a former employee of Volkswagen Vorst. Among others, ABVV chairman Rudy De Leeuw and his ACV colleague Luc Cortebeeck have been summoned.
The former employee objects to being expelled from the ABVV shortly before the threatened closure of VW, and to that union passing his details on to the ACV.

In addition, two more proceedings are pending before the privacy commission: a first over the passing of data to other unions, a second because a Vlaams Belang member was expelled from the union even though he had not been a member for 12 years.
According to Morel, the unions passing on data about expelled Vlaams Belang members is in breach of privacy legislation. In the past the unions were already sanctioned by the commission for compiling lists, noted the head of the Legal Service, Jurgen Ceder.

Morel also pointed out that trade unions are not treated the same as ordinary associations, because they also perform government tasks.


oh no! what have i done!

i guess my next trip to antwerp i'll be sued :-(

you belgians better not read this comment, you're helping me break the law

Caching is Copying (2, Insightful)

Nom du Keyboard (633989) | more than 7 years ago | (#17997586)

If caching is copying, then every user who isn't watching a streaming feed -- which isn't the way text and single-image pages are rendered -- is guilty of copyright infringement every time they view a page. Your browser makes a copy of the page on your own hard drive. Watch out!! Here come the lawyers now.

Re:Caching is Copying (2, Insightful)

drinkypoo (153816) | more than 7 years ago | (#17998214)

If caching is copying, then every user who isn't watching a streaming feed -- which isn't the way text and single-image pages are rendered -- is guilty of copyright infringement every time they view a page.

I have news for you. When you stream your browser makes a local copy of portions of the stream, decodes them, and displays them.

If sampling is illegal (without permission) then clearly copying a portion of a video stream without permission would be illegal. However, since you can give permission to anyone you like, there's no crime being committed, as making a stream publicly available is granting permission.


Sounds Good To Me (2, Interesting)

Imaria (975253) | more than 7 years ago | (#17997626)

If Google is not allowed to have any cache of these sites, then wouldn't that mean they would have nothing to index for their searches? If you send Google that email, and suddenly don't show up on any of their searches, congrats. On the plus side, no-one has access to your content anymore. On the downside, NO-ONE has any access to your content anymore, because no-one can find you.

The Internet, RIP (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17997826)

Made in USA, 1969. Destroyed in Europe, 2007.

But how will Belgians get their daily kiddie porn fix now?

Outrageous (1)

timonvo (1063686) | more than 7 years ago | (#17998118)

I personally live in Belgium, but I have to say that this comes as a shock. I haven't heard a thing about this on the news yet and don't really understand what the court was trying to achieve with this. As some have already pointed out, it has been Google's policy for years.

Simple really (3, Interesting)

RationalRoot (746945) | more than 7 years ago | (#17998184)

If someone does not want their extracts cached, remove them ENTIRELY from Google.

I don't believe that anyone has added "being indexed" to human rights yet.

D

How other protect their copyright (1)

Fluppe42 (906509) | more than 7 years ago | (#17998254)

Personally, I am Belgian, and I am actually wondering why our newspapers don't just apply the same protection as, e.g., the IEEE does to its journals. You often get a Google hit for an IEEE paper on the IEEE server, but you then get the login page; without a password, you can't get to the content. Of course, the IEEE's copyrighted content is in the form of PDFs. Is it so much harder to protect HTML pages than PDFs?
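It isn't, really; the same kind of login gate works for HTML. A minimal Apache sketch (the directory path and password file are just placeholders) would be something like:

<Directory "/var/www/archive">
    AuthType Basic
    AuthName "Subscribers only"
    AuthUserFile /etc/apache2/subscribers.htpasswd
    Require valid-user
</Directory>

With that in place a crawler hitting the archive gets a 401 instead of the article, so there is nothing for it to cache in the first place.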

Belgium (1)

andr0meda (167375) | more than 7 years ago | (#17998342)

Belgium is much smaller than China, but the way they can have themselves delisted from Google is waaaaaaaay cheaper!

All jokes aside, the real issue here is whether a technical option to opt out of a certain practice (like using a robots.txt) is sufficient to avoid lawsuits. In this case it clearly is not, so I'm wondering whether anyone who screws up his robots.txt can put in a claim against Google just like that. Get-rich-quick scheme, or grey zone in some Belgian lawyer's head?

Speaking as a Belgian, the issue is seriously blown out of reasonable proportion, and only the French-speaking newspapers could think of such foolishness. But even though Google is supposed to be the "nice" company, delisting from big brother's great caches somehow also has a happy ring to it...
