Google infringes copyright by displaying and linking to news site content

A Belgian appeals court has upheld an earlier ruling that Google infringes on newspapers' copyright when its services display and link to content from newspaper websites, according to press reports. The search engine giant is responsible for infringing the copyrights of the papers when it links to the sites or copies sections …


This topic is closed for new posts.
  1. hplasm Silver badge

    How much revenue will the Belgian press lose

    When they effectively disappear from the 'net?

    Or only show up on Bing. Same thing.

    1. Maurice Shakeshaft

      one thing Belgians aren't is 'Stupid'

      I suspect this has been thought through or they wouldn't have gone this far?

      What is the end game here? If it is about money then it opens up another strand in financial debate on how content owners get paid fairly for their work.

      A lawyer might reasonably argue that their client doesn't need to add any tools to their website code to prevent copyrighted material being 'stolen'; the thief simply must not knowingly take it. Obviously, I'm no lawyer!

    2. Anonymous Coward
      Anonymous Coward

      Content linking

      It's linking to content, not linking to the newspapers' websites in general, that's the problem, so they won't disappear off the net. The basic upshot is that you'll have to go to the newspapers directly, rather than have Google post other companies' work as if it's their own, allowing Google to profit from the ads rather than the newspapers.

      1. Anonymous Coward
        Anonymous Coward

        Linking to content

        There's really nothing wrong with linking to the content. It's like telling someone the address. It's up to the paper to make it unreadable to non-paying members. I think Google is pushing it by copying the articles, but how does a newspaper stay in business online when non-paying customers can access the same data that paying customers can? Do they not log into the site? I can't access certain items on ESPN's website unless I've logged in with a paying account.

        1. Anonymous Coward
          Anonymous Coward

          @AC 11:32

          I think that a straight link would be OK; it's the sort of link that has a summary of the article that they're a bit narked with. You do a search on Google (et al) and see a link with a large summary of the article; the paper doesn't see anything in terms of ad revenue, but Google does.

    3. Version 1.0 Silver badge

      Is it still a country?

      I thought that Belgium disappeared as a country about the same time that Pluto became an "un-planet"

  2. Jean-Paul

    Rare jongens, die Belgen ("Strange lads, those Belgians")

    Need I say more?

  3. Sir Runcible Spoon Silver badge


    The newspapers may find they have won the battle but lost the war.

    What happens when Google stops linking to the newspaper sites - completely? To most people, they would simply vanish from the internet.

    1. Anonymous Coward
      Anonymous Coward


      Disappearing off the internet is fine. Going bust? Not so good.

  4. LPF

    Why have they not used Robots.txt

    to stop their sites being indexed, or have I missed something??

    1. Squiggle

      ...The title is required, and must contain letters and/or digits.

      ... I was wondering the same thing!

    2. Anonymous Coward

      My thoughts exactly.

      My other thought was: if you put it on the (public) internet and do NOT protect the folder it's in, then expect it to be indexed.

      It's akin to putting up a poster in a shopping mall and expecting people to ignore it.

  5. John Hawkins

    Google have been Belged... to quote an old mate of mine who once worked in Belgium. Apparently 'been Belged' was the term used by the ex-pats when one of them ended up on the wrong end of a weird Belgian law, bureaucrat or whatever.

    Anybody else heard it?

  6. Anonymous Coward


    This is part hilarious, part pathetic.

    The "damage" can easily be avoided by two lines in their sites' robots.txt files.

    No way Google (or any search engine) can work if this stands. The only safe way to go would be to reverse the current process (for EVERY single website) and use robots.txt to specifically opt-IN to have your site indexed by EVERY specific bot. What a wonderful world.
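For the record, the opt-out really is about that small. A sketch of the hypothetical two-line robots.txt the commenter alludes to, checked with Python's standard urllib.robotparser (the library calls are real stdlib; the file contents and URL path are invented for illustration):

```python
from urllib import robotparser

# A hypothetical two-line robots.txt a newspaper could serve
# to keep Google's crawler away from the whole site.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot may fetch nothing; agents with no matching
# group (and no catch-all "*" group) default to allowed.
print(rp.can_fetch("Googlebot", "/news/article-123"))    # False
print(rp.can_fetch("SomeOtherBot", "/news/article-123")) # True
```

Compliance is voluntary, as a later commenter notes, but every major search engine honours it.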

    Another fantastic show of the abuses copyright allows. (Where's Orlowski? I always want to hear his opinions on these blatant abuses of copyright.)

  7. Turtle

    About time, too.

    A very interesting and enjoyable story.

    I look forward to reading *many* more like it in the future.

  8. John Tappin

    Surely this means ALL search engine results are in violation???

    I can't understand how any search engine could remain in business without linking to content. Surely this is the end of search as we know it, Google or otherwise.

  9. Anonymous Coward


    Ever heard of robots.txt?

  10. jake Silver badge

    google is persona non grata around here ...

    But who, in the great scheme of things, gives a rat's ass about Belgium's home-grown news? Shirley this is a non-story ...

  11. Robert Carnegie Silver badge

    Makes it sound bad

    "Google's business model, and that of some other search engines, relies on being allowed to exploit the 'fair use' exemptions within copyright laws."

    The word "exploit" makes it sound like they're being sneaky even when they aren't - or is that only how it sounds in one reader's head?

  12. Neil Hoskins

    How do you get fifty Belgians into a 2CV?

    You put a chip on the back seat.

    How do you get them all out again?

    You shout, "Come and get the rest!"

  13. Christoph Silver badge

    Oh Belgium!

    They are really saying they don't want any links to their sites, any mention of their sites, any comment on their sites?

    Remind me, why are they running those sites if they don't want anyone to know about them?

    Presumably they are spending lots of money advertising those sites, but don't want the same thing done for free?

    I begin to see why Douglas Adams picked that word as the worst swear word in the Universe.

  14. XMAN

    noindex tag

    I'm sure it won't be long before people whine that the newspapers could just add noindex tags to their site to stop Google from indexing their website.

    Well, the bottom line is that they shouldn't have to. Google do what they want and then expect people to opt-out. Usually they get away with it, but that doesn't make it right.

    If I set up a website which scraped all your lovely content and made money off it, would you be mad? Likely, yes. Would it make it better if I told you that all you have to do is add a "nosteal" tag? No.

    Google throw their weight around and mislead people into thinking that their opt-out (rather than opt-in) policies are industry standard. Some of them do become "standards" online simply because of Google's size, but that doesn't make them any more legitimate than if I were to make them up myself.
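Unlike the invented "nosteal" tag, the noindex tag this commenter mentions is a real per-page opt-out (a robots meta tag in the page head). A rough sketch of how a well-behaved crawler might honour it, using only Python's stdlib html.parser — the sample page is made up for illustration:

```python
from html.parser import HTMLParser

# Detects a <meta name="robots" content="noindex"> tag, the
# per-page opt-out that compliant crawlers check before indexing.
class NoindexDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

page = ('<html><head><meta name="robots" content="noindex">'
        '</head><body>Subscriber-only story...</body></html>')
detector = NoindexDetector()
detector.feed(page)
print(detector.noindex)  # True -> a compliant crawler skips this page
```

The same directive can also be sent as an X-Robots-Tag HTTP header, which covers non-HTML resources like PDFs.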

  15. Peter 82

    Am I the only one

    Who wants Google to apply this law to all websites hosted in Belgium for the next 24 hours. Flemish sites are allowed, other countries websites are allowed but anything "homegrown" disappears.

    Wait for all of the complaints that come in and use that as evidence in the next court case.

    1. Anonymous Coward
      Anonymous Coward


      1) The robots.txt file has been around since the earliest search engines I can remember. It wasn't a Google invention - it is exactly the way it has always been. A search engine can completely ignore it - there is no law against that - but Google, like most others, doesn't. It is a couple of lines that permanently stays on your site and is simpler to write than the easiest web page. If you had to opt in to have your site indexed you wouldn't have ANY useful internet today. The search engines could just be filled with spam, marketing and corporations, and some of the really useful stuff could easily be missed.

      2) What are Google stealing? Maybe everyone is seeing a different Google News to me. All I see is at most a sentence of a story and then a link to the newspaper's site. From there the paper could sell the story to whoever; I just don't understand it.

      3) If Google can see it for free, anyone can see it for free. They have a dumb bot - it's not a hacking tool.

  16. Gordon861

    Search Engines

    What the search engines - as a group, or just Google - should do now is make sure that the Belgian newspapers that brought this case don't appear in any searches at all. Otherwise the next stage will be wanting Google to pay the paper every time it shows up in a search.

  17. NinjasFTW

    cake and eat it too?

    They don't want to use robots.txt because they still want people to be able to find their way to their website when doing keyword searches etc.

    So they basically want to pick and choose how Google provides its service.

    99% of the time, if you see a story you want to read you still click on the link and end up at the paper's website anyway.

    I bet if Google simply removed their sites from all of its indexes they would be complaining soon enough.

  18. MinionZero

    Old newspapers, outdated business model

    This sounds like the newspapers shooting themselves in the foot, as they won't be found by viewers who might have gone on to look at other news pages on these newspapers' websites as well. I've often followed links to news articles, then looked at more of the site's content, and some sites I've even bookmarked because I found them interesting. (It's how I found The Register!) If I was unable to find these newspapers, I would move on to others that I could find, often without ever knowing the hidden ones existed; then the hidden newspapers go out of business, as they have no readers, no community around their site and so no advert revenue, no merchandising revenue etc.

    This sounds like a typical controlling move by dying companies whose entirely flawed, outdated business model rests on continuing to control the distribution of data. It's obvious to everyone that the Internet is the distribution of data. Therefore any attempt to limit the spread of data limits their ability to find new customers, while at the same time their existing customers are being lured away by new media companies that are attracting new viewers.

    New media companies will still work around this ruling by giving away all their news stories, which will continue to undermine the old, dying, closed news distributors. These old companies don't get it. Their business model no longer works. They cannot control distribution, so they need to find another way to earn a living, but they refuse to see it.

    It's the same as the music industry. They seek to control distribution. Yet new bands are appearing that freely give their music away, which allows people to find them and get into their music, and these bands are starting to earn a living from concerts and other merchandising. It's only the controlling old music distributors and bands who are going out of business.

    The control freaks are being made redundant by their refusal to see that control of distribution isn't the answer it once was for business. Meanwhile the new media companies will profit from the old companies going out of business, as even more viewers will come their way.

    1. Anonymous Coward
      Anonymous Coward


      In your brave new world of Nu Media, where amateurs (presumably, because otherwise you'd need to be independently rich) give away their personally written news for free, who does the investigative journalism? You know, the sitting in a records office for a month going through stuff that you think just might lead you onto something? Who pays for that?

      Or do you mean sites like the Reg, which are paid for by ad revenue? Ad revenue which is being taken by Google by showing parts (or all) of the articles so no one has to go to the news site itself?

    2. GennadyVanin

      These are all wrong arguments

      The argument that online newspapers will not find their readers if they are not indexed by Google is wrong:

      There are so many news feeds on the internet that being 2000th in the SERPs means nonexistence for a small newspaper anyway.

      Mostly I come to online newspapers:

      - through email notifications for news I subscribed to earlier;

      - by searching on Twitter;

      - by following interesting tweets and the feeds of people I found interesting and followed earlier.

      And what has the copyright-infringing, all-stealing Google model thrown on us given?

      Before coming here I was dumped by 3 other newspapers and blogs as a spammer.

      Mostly I find only spam rubbish in Google searches.

  19. nowster

    Old hat

    Shetland Times versus Shetland News. 1996

  20. Anonymous Coward

    research/more details needed?

    If I remember the details correctly, the newspapers _want_ Google to index them, the newspapers _want_ to appear in the search results, the newspapers _want_ to appear on Google News.

    But they want Google to _pay_ them in order to show those results. Which Google is not happy with; in fact, Google removed them from its index at some point (as the article pointed out). But then the newspapers complained that they were being punished.

    Basically, the newspapers want things to remain as they are.... but they want google to pay them for the right to index them and show the results in the search engine.

  21. Hayden Clark Silver badge

    Caching paid content

    Isn't the issue that Google were indexing (probably OK - it drives subscriptions) and =caching= content that was behind a paywall?

    The article seems to imply that Google allows articles to be read for free that users would normally need a paid-for subscription to see?

    1. Steve Gill

      google cache

      has a habit of exposing secured or subscription-only pages

    2. Mike Moyle Silver badge

      Re: Caching paid content

      That was certainly how I read it.

      Indexing for search results and pointing to the original source is one thing, caching the whole thing on your own servers and robo-prefacing with "A report published in <newspaper> today said:..." in order to squeeze under the "reporting on/commenting on the news" fair use exemption -- which is what I gathered Google was doing, from the article -- seems to me to be something else entirely.

  22. Gilbert Wham

    "free access to paid-for content"

    Hang on a minute, how is that possible? Surely if it was *paid-for* content, it would be behind a paywall a la Murdoch, and therefore all Google would be doing was driving traffic to their door (assuming you wanted Belgian news written in French, that is)? Or am I missing something? I'll admit I rarely if ever look at Google News.

    Granted, if it's on a freely accessible website then they're going to lose ad revenue if Google reproduces the article verbatim on its own news site. However, if they're linking to the actual site (which they generally do), that's an even larger crock, more replete with horseshit than their 'paid content' claim.

  23. Old Handle

    Linking? Linking?!?

    The caching part is a little suspect, I'll admit. I never quite understood how it was decided that it was okay for search engines to copy and distribute content from other sites unless they explicitly opt out. But linking? Seriously? That pretty much makes the whole web illegal (in Belgium). I thought such basic concepts had already been hammered out in courts some time last millennium.

  24. Ooo-wait-BUT!

    claiming copyright on 'the truth'? WTF

    Google, in their chosen role as search engine provider [other brands of random suggestion are also available], enable otherwise unknown web-based material to be found. That is all a search engine does. The user enters search criteria, and the web page returns its best approximation of where you might locate that for which you search, complete with brief synopsis and link (yes, caching is outside this, but then iExplorer [other brands of trojan-like viral software are available] does that on every page it visits). If Google presented the pages as their own work then one would have a case - a bit like when M$ [other brands of disreputable global extortionists are available] presented Google search results as their own.

    The only claim that could possibly be made against Google _would_ be that of 'plagiarism'; however, since everything is accurately referenced (follow the inevitable link), that objection is completely unfounded, even in an overzealous Belgian [other nations of total c**k suckers are also available] court.

    This post was brought to you by the English language. Each word can be found in "The Dictionary". Am I now to be sued by 'Collins'?

  25. Anonymous Coward


    Being flaming Flemish, I detest that you so readily assume that a Belgian court would say "NON". By majority rights it should say "NEE"! Such uninformed assumptions only serve to encourage the de facto-bankrupt Walloons to hold on to the silly notion of a Belgium -- not because they believe in it, but because they want us Flemish, who still have money, to keep greasing the squeaky wheel. There, I've said it.

  26. GennadyVanin


    Forcing authors to protect themselves with robots.txt and other Google-invented SEO tricks is a subversion of copyright law and authors' rights.

    It is the user (republisher, et al) of content who should ask the author/owner's permission, not vice versa, where the thief insists it can use whatever it likes because there were no warnings, locks, etc.

    This directly promotes spamming web farms that steal content, apply minor programmatic changes with bots, and outrank the original content, and it gives the advantage to blackhat SEO professionals (using bots, sophisticated software, tricks, etc.) over original human content.

    That is, the spammers using software to create technically sophisticated web farms are given the advantage - Google's blessing to steal content. The merits of human writing, who wrote first, and any permissions are completely neglected by Google; it is the technical sophistication of the techniques for republishing stolen content that is being promoted.

    It is the search engine's job to provide results that fairly reflect the quality of the content (not the density of SEO tricks hidden from the user) without being manipulated by tricks unrelated to that quality.

    The authors/owners of content should not have to spend $zillions on a never-ending competition with professional spammers who outrank them using their own stolen content.

    It is Google, by its subversion of copyright law and authors' rights, that created and promoted the army of spammers (and the corresponding spamming web farms) stealing content.

    It is technically not that difficult to detect who republished from whom (without references), but Google is not interested in that. It promotes only SEO and the infringement of authors' rights - where ONLY technical tricks unrelated to the quality of the writing, or to who wrote first, are taken into account.

    1. bygjohn

      Nope, it's idiot companies that didn't bother to check how the web worked

      As previous commenters have already mentioned, robots.txt and other methods of preventing search engines indexing web content existed long before Google.

      The web was invented as an OPEN medium (it was never intended to be commercial - it was for the free and open exchange of academic information), without barriers, and has always been that way unless you put up your own barriers, which isn't hard to do in this case - well-established mechanisms have been there almost from the start.

      What you are saying is the equivalent of the person who wanted to cover the world with leather rather than wear shoes. The web doesn't work how corporate lawyers with mid-last-century mindsets think it should work. Tough. Nor can they change the colour of the sky etc etc.

      Nobody made these companies put their content on the web, but having chosen to do so, if they didn't want their content indexed, they should have used the standard methods of accomplishing that - robots.txt, restricted access etc. You can't say no one should look at your content on an open medium without adding your own restrictions, any more than you can say you want the sky to be green because that's how you think it ought to be.

      But instead these companies want Google and other search engines to publicise their work for them and then (instead of paying the search engines for their work) have the search engines pay them. They know search engines cache content, but still want them to index their stuff and then bitch because it's cached.

      I'm not particularly a fan of Google, and do think they have a cavalier attitude to copyright when it comes to digitising books and trying to snaffle the rights to "orphan works" in particular, but in this case the companies involved just want to have their cake and eat it, and Google has been stealing nothing. In fact it's been doing them a favour, but they are too greedy to face that fact.

