Google’s Knowledge Graph: Less Traffic To More Sites?
By: Chris Crum - May 16, 2012
Back in March, the Wall Street Journal put out a big article about what apparently went on to become Google’s Knowledge Graph. Google made the formal announcement today.
When the WSJ published that article, we wrote one about how Google is giving users fewer reasons to click through to other sites. The Knowledge Graph would seem to be a major push in that direction: more information directly on Google’s pages.
Much of the Knowledge Graph appears to be powered by Wikipedia. You have to wonder how much traffic Wikipedia will lose as a result, along with the other sources providing Knowledge Graph data. More information on the results page potentially means fewer clicks on the other results. Even if the Wikipedia page were the first result, it’s not necessarily the one the user would have clicked on, and seeing relevant information before clicking may prevent clicks on any of the links on the page.
Certainly, this makes Google more efficient, but what does it mean for other sites? If the Knowledge Graph grows and grows, that impact could be far greater in the future.
Danny Sullivan at Search Engine Land actually discussed this very topic with Google’s Amit Singhal (who announced the product on Google’s blog) at SMX London. Sullivan writes:
Singhal’s response is that publishers shouldn’t worry. He said that most of these types of queries, Google has found, don’t take traffic away from most sites. Part of this seems to be that the boxes encourage more searching, which in turn still eventually takes people to external sites.
Still, some are going to lose out, he admits. But he sees that as something that was going to happen inevitably, anyway, using a “2+2” metaphor. If people are searching for 2+2, why shouldn’t Google give a direct answer to that versus sending searchers to a site? By the way, Google does do math like this already and has for years.
Google is only going to want to improve its Knowledge Graph, which can only mean more data and information on more results pages.
Additionally, Sullivan makes a great point about publishers putting together info that Wikipedia or Freebase (another of Google’s sources) could harvest. Don’t forget that Wikipedia entries come with source links. It doesn’t look like the original publishers who provided those sources will get much out of the Knowledge Graph’s offerings.
Singhal did say at SMX that Google’s Search Plus Your World personalized search feature is improving clickthrough rates for search results. Perhaps there is something to be said for social signals after all.