
Google’s Policy on Nofollow and Reviews


I’m not exactly sure what caused all this secondary fuss about nofollow and reviews lately, but I think it’s time someone pointed out that Google is being extremely hypocritical about the entire thing, using fear, uncertainty and doubt (FUD) to corral web publishers to its way of thinking.

What we first need is a bit of a history lesson. Sherman, set the Wayback Machine to January 18th, 2005. Visiting the Google Blog, we can see a very public, high-profile announcement about the nofollow attribute: “Official Google Blog: Preventing comment spam.”

If you’re a blogger (or a blog reader), you’re painfully familiar with people who try to raise their own websites’ search engine rankings by submitting linked blog comments like “Visit my discount pharmaceuticals site.” This is called comment spam, we don’t like it either, and we’ve been testing a new tag that blocks it. From now on, when Google sees the attribute (rel=”nofollow”) on hyperlinks, those links won’t get any credit when we rank websites in our search results. This isn’t a negative vote for the site where the comment was posted; it’s just a way to make sure that spammers get no benefit from abusing public areas like blog comments, trackbacks, and referrer lists.

If you take notice of the title of that page and read through it, you’ll see plenty of guidance that nofollow is meant to prevent blog spam and should only be used on areas with user-generated content. Let’s repeat that, because it’s important: nofollow was implemented to combat blog spam, plain and simple, end of story, period.
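For anyone who has never looked at the markup, here’s all there is to it; a minimal sketch with a made-up URL, modeled on the comment-spam example from Google’s own announcement:

    <p>Great post! <a href="http://example.com/pills" rel="nofollow">Visit my discount pharmaceuticals site</a></p>

A human reader sees an ordinary link; a search engine spider sees the rel="nofollow" attribute and gives the destination page no ranking credit.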

So when did nofollow get associated with text link advertising? For that we have to go back to the O’Reilly Radar advertising incident of August 24th, 2005, and a comment made by Google’s Matt Cutts on “O’Reilly Radar > Search Engine Spam?”:

As others have noted, if you’re going to sell text links that pass reputation/PageRank, the way to do it is to add rel=nofollow to those links. Tim points out that these links have been sold for over two years. That’s true. I’ve known about these O’Reilly links since at least 9/3/2003, and parts of perl.com, xml.com, etc. have not been trusted in terms of linkage for months and months. Remember that just because a site shows up for a “link:” command on Google does not mean that it passes PageRank, reputation, or anchortext. Google’s view on this is quite close to Phil Ringnalda’s. Selling links muddies the quality of the web and makes it harder for many search engines (not just Google) to return relevant results. The rel=nofollow attribute is the correct answer: any site can sell links, but a search engine will be able to tell that the source site is not vouching for the destination page. Posted by: Matt Cutts at August 24, 2005 09:31 AM

So what exactly happened in those seven months that made Google decide to alter a technical specification that was already accepted? Who knows. Did Google issue a press release about this change like they did when they introduced the attribute? Not as far as I can find. Did Google bother to update the Webmaster Guidelines? Again, the answer is no, but we’ll come back to that a little later.

What happened on August 24th was the start of a grassroots campaign by Google to alter the way the web worked to suit its own agenda. The main tool in this campaign: creating fear, uncertainty and doubt, or FUD, in the minds of webmasters and web publishers across the globe. Over the coming months we’d get perfectly orchestrated displays designed to plant the seed that Google sees and knows all, as webmasters feared hearing the dreaded words “so tell me about your backlinks” (see another version of tell me about your backlinks). The campaign continues to take advantage of Google’s dominance in search; it’s hard to ignore someone when they are responsible for 70% or even 90% of your site’s traffic. The momentum builds, and it gets so bad that Google can get away with saying things like “Google Senses Much” while people assume the fetal position in the corner, whimpering with fear. Excuse me, but exactly when did Google start to know whether I’ve been bad or good? Did they put Santa Claus or the Easter Bunny on the payroll?

Hey Google, if you know so much, how come you still haven’t figured out Craigslist is spamming you with 90 subdomains of duplicate content? If you know so much, how come you get confused by duplicate meta content instead of actual on-page content and list only two pages from a site [threadwatch.org]? If you know so much, why are we still having 302 redirect problems in 2007, when Yahoo solved them years ago? It’s simple: Google doesn’t know as much as they would have you believe. Instead they want to keep you in the dark, obfuscating data by “breaking” the link: command and altering the way the site: command works.

Earlier in this post I mentioned the Google Webmaster Guidelines; let’s revisit that page. Under the general guidelines we have the following statement:

Make pages for users, not for search engines.

Except for SEOs who may have modified their browser settings (a sketch of how that works follows the quote below), no one can see nofollow attributes; the only ones who care about them are search engines and their spiders. So in essence what Google is really saying is:

Make pages for users, not for search engines, except when we arbitrarily change the rules without notice and don’t bother to tell you until after the fact, and don’t update our webmaster guidelines to reflect those changes.
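As an aside, the “modified browser settings” mentioned above usually amount to a user stylesheet (userContent.css in Firefox, for example) that flags nofollowed links. The exact rule varies by toolbar or tweak; this is just an illustrative sketch:

    /* highlight any link carrying rel="nofollow" */
    a[rel~="nofollow"] { border: 1px dashed red; }

Without a tweak like that, a nofollowed link looks pixel-for-pixel identical to a normal one, which is exactly the point: the attribute exists solely for search engines.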

OK, maybe I’m having a little fun at Google’s expense, but they have a fairly well-established history of this kind of hypocritical, contradictory approach. My particular favorite comes from Google’s page on SEO, where we find this quote:

No one can guarantee a #1 ranking on Google.

Beware of SEOs that claim to guarantee rankings, allege a “special relationship” with Google, or advertise a “priority submit” to Google.

Later, on the exact same page, we find this:

For your own safety, you should insist on a full and unconditional money-back guarantee.

First they tell me no one can guarantee anything, and then they tell me I should make sure I get a guarantee. Excuse me if I wrinkle my brow like Mr. Spock and inform you that’s highly illogical.

Let’s turn our attention to the review issue. In December of 2006 we have this from the Ramblings About SEO post “Weighing in on Link Buying (again!)”:

Matt Cutts says (December 20th, 2006 at 1:01 pm): I think you put this pretty well, Eric. Search engines want links to be real: editorial votes based on quality and merit. With Yahoo, you’re paying for the reviewing service; Yahoo rejects plenty of submissions.

However, just over a month later, we get this from Matt Cutts’ blog (Matt Cutts: Gadgets, Google, and SEO) in the post “What did I miss last week?”:

Yet another “pay-for-blogging” (PFB) business launched, this time by Text Link Brokers. It should be clear from Google’s stance on paid text links, but if you are blogging and being paid by services like Pay Per Post, ReviewMe, or SponsoredReviews, links in those paid-for posts should be made in a way that doesn’t affect search engines. The rel=”nofollow” attribute is one way, but there are numerous other ways to do paid links that won’t affect search engines, e.g. doing an internal redirect through a url that is forbidden from crawling by robots.txt.
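To unpack the robots.txt technique Cutts describes: the paid link points at an internal URL, the server redirects that URL to the sponsor, and robots.txt forbids spiders from crawling it, so no PageRank flows. A minimal sketch, with a made-up domain and paths:

    # robots.txt on the publisher's site
    User-agent: *
    Disallow: /out/

    <!-- the paid link in the sponsored post; /out/sponsor 302s to the advertiser -->
    <a href="http://www.example-publisher.com/out/sponsor">Sponsor's widget</a>

Readers click through normally, but because Google never fetches /out/sponsor, it never sees the redirect target, and the link passes nothing.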

Did we get another policy change that nobody told us about, or is it just Yahoo who’s allowed to do paid reviews? Confused? So am I.

Here’s the kicker: want to know what’s really wrong with Google’s stance on reviews? By making a big stink about it, they are going to drive the practice back underground. The way reviews work now, they are disclosed in a form humans can read, understand, evaluate, and make their own decisions about. If you can’t figure out sentences like “this has been a sponsored post” or “the following has been a paid review,” you’re probably the kind of person who believes headlines like “search engineers practice cannibalism.” What’s really happening is another FUD campaign, this time designed to stamp out paid reviews in an inconsistent, arbitrary and undocumented manner. Some people are allowed to do it, some aren’t, and we don’t know why; rather than take a risk, people will simply decide not to participate for lack of a clear answer. What happens then? The market goes underground, the disclaimer requirement fades away, and no one, especially the people viewing a page, knows what editorial forces might be at work behind the scenes influencing that page. Way to go, Google, and good job putting users’ needs ahead of your own!


Michael Gray is an SEO specialist and publishes a search engine industry blog at www.Wolf-Howl.com. He has over 10 years of experience in website development and internet marketing, helping both small and large companies increase their search engine visibility, traffic, and sales. Michael is a current member of Internet Marketing of New York (IM-NY.org) and a guest speaker on Webmaster Radio. He is also an editor for the popular search engine news website Threadwatch.org.
