Google To Get Belgian News Hearing

    November 27, 2006
    WebProNews Staff

Two parties to a lawsuit in Belgium against Google over its news indexing have settled with the search advertising giant, but Google will have to wait until 2007 to fight an injunction against its scraping of news content managed by distributor Copiepresse.

Two associations, one representing photographers and the other journalists, have reached accords with Google that will allow Google News to use their content, according to a Reuters report. “We reached an agreement with them that is going to help us make extensive use of their content in new ways,” Google spokeswoman Jessica Powell said in the report.

Google had been judged in violation of copyright and database laws in early September.

The company failed to appear at a hearing that led to that decision.

The Court of First Instance ordered Google to stop indexing content from agencies represented by Copiepresse. Google seemed to take particular exception to being required to post the court’s decision on the home page for a five-day period.

Now Google has argued that it should be able to fight the injunction. The court has agreed, and the two sides will go at it after the holidays.

As with most disagreements in life, this one has money as a component.

Copiepresse objects to the caching of its content, which makes those stories available long after Copiepresse has tucked them behind a subscription wall.

A Copiepresse executive at the hearing noted the organization was interested in talks with Google, and a lawyer for Google likewise said the company was trying to “resume dialogue.”

Since the photographer and journalist groups have already settled, it would not be a surprise if Google and Copiepresse reached a similar accord.

The bigger issue for Google would be the copyright one.

Google has generally come out ahead in disputes over indexing content, and has a track record of directing web users to relevant sites.

Any decision that impaired Google’s ability to freely index content not otherwise restricted by a robots.txt file would be very detrimental to the company.
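For context, the robots.txt mechanism the article refers to lets a publisher opt out of crawling with a plain text file at the site root. A minimal sketch (the `/archive/` path is hypothetical, chosen to mirror Copiepresse's complaint about cached stories behind a subscription wall) might look like:

```text
# Hypothetical robots.txt at the publisher's site root.
# Keep Google's crawler out of a subscriber-only archive:
User-agent: Googlebot
Disallow: /archive/

# Allow all other crawlers to index everything:
User-agent: *
Disallow:
```

A publisher that does not want its stories indexed or cached at all could instead use `Disallow: /` for `User-agent: Googlebot`, which is the opt-out Google's position presumes was available to Copiepresse.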


David Utter is a staff writer for WebProNews covering technology and business.