
A Loophole For Paid Links

Or: Link-Laundering 102


It seems it was only a matter of time before the cleverer element of the SEO world developed a workaround for Google’s penalizing of paid links. The workaround involves a pretty creative "dynamic" linking strategy, and it’s playing a little bit dirty.


No longer the province of tax accountants, lawyers, and politicians, the elaborate loophole has come to SEO: Andy Beard has proposed a way to get around Google’s paid-link vigilance via robots.txt and paid reviews.*

Beard’s explanation is complicated, lengthy, and loaded with historical context, so visit Beard’s blog for further clarification, complete with nifty diagrams. What we provide here is an overview and basic introduction, not necessarily an endorsement.

Beard’s proposal (or, as he describes it, a red flag in the face of the charging bull) involves strategic use of robots.txt to keep Google’s crawlers away from paid reviews. This is intended to take the sting out of the penalty: Google can’t penalize what it isn’t supposed to crawl in the first place.
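Mechanically, the block is just a Disallow rule. Here is a minimal, hypothetical sketch in Python, using the standard library’s robots.txt parser, of what that rule looks like and how a compliant crawler reads it; the domain and paths are invented for illustration, not taken from Beard’s post.

```python
from urllib import robotparser

# Hypothetical robots.txt for a blog hiding a paid review from crawlers.
# The /reviews/acme-widget-review/ path and example.com domain are invented.
robots_txt = """\
User-agent: *
Disallow: /reviews/acme-widget-review/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(robots_txt)

# A compliant crawler (Googlebot included) is told not to fetch the paid
# review, so in theory it never sees the paid link sitting on that page.
print(rp.can_fetch("Googlebot", "https://example.com/reviews/acme-widget-review/"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/reviews/"))                     # True
```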

In addition to the paid review that is blocked from crawlers, the author creates a follow-up review at another domain that is not paid and links back to the original review, with link juice in tow. According to Beard, a client would pay for the original domain link, but not the follow-up on a separate domain (though I imagine the price just got higher, huh?).

The link on the paid review is not a nofollow link, meaning it will still pass PageRank, since Google shouldn’t know or care about it if the page can’t be crawled; the link on the follow-up review is also not nofollowed because, technically, it’s not a paid link.

In theory, the original, blocked review will still pass a reduced amount of PageRank because Google still credits "dangling" pages, or pages it can’t see, with PageRank if there are backlinks pointing to them. The link juice it passes, however, is reduced, as is the link juice coming from backlinks to it. What happens next is a matter of determination and scale.

With enough backlinks (according to my understanding), especially authority backlinks, the decrease in link-juice can be overcome, thereby raising the blocked page’s PageRank eventually, which is then passed on to its intended paid review/link recipient.
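To see why the juice is diluted, here is a purely illustrative PageRank power iteration in Python on a toy three-page graph (my own sketch, not Beard’s math and not Google’s actual formula). A page whose outlinks are hidden from the crawler behaves like a dangling node: it still accumulates rank from its backlinks, but in the common textbook treatment its rank is redistributed evenly rather than flowing to the hidden paid link, which is why any benefit to the advertiser is diffuse and slow to build.

```python
# Toy PageRank iteration: a hub links to a review whose own outbound (paid)
# link is hidden from the crawler, so the review acts as a dangling node.
# All names and numbers are illustrative only.
DAMPING = 0.85
PAGES = ["hub", "blocked_review", "advertiser"]

# Outlinks as the crawler sees them; the blocked review's paid link to the
# advertiser is invisible, so its outlink list is empty (it "dangles").
links = {
    "hub": ["blocked_review"],
    "blocked_review": [],
    "advertiser": [],
}

rank = {p: 1.0 / len(PAGES) for p in PAGES}
for _ in range(50):
    new_rank = {p: (1 - DAMPING) / len(PAGES) for p in PAGES}
    for page, outlinks in links.items():
        if outlinks:
            for target in outlinks:
                new_rank[target] += DAMPING * rank[page] / len(outlinks)
        else:
            # Common textbook treatment: a dangling page's rank is spread
            # evenly over all pages instead of following its hidden links.
            for target in PAGES:
                new_rank[target] += DAMPING * rank[page] / len(PAGES)
    rank = new_rank

# The blocked review accumulates rank from the hub, but the advertiser gets
# only the diffuse redistribution, nothing directly from the hidden link.
print({p: round(r, 3) for p, r in rank.items()})
```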

Phew! So, it’s kind of like link-laundering.

Your first objection is probably that Google’s pretty vigilant about link-spam, too, and bursts of low-quality links over a short period of time will raise the spam alarms, thus either earning penalties anyway or negating the collective power of those links.

Quite right, which is why Andy has a plan for that too. This is where it gets a bit harder, since it involves a real commitment to getting that paid link some good juice to pass along. But it probably should be a part of your overall web-marketing campaign already anyway.

Beard proposes getting authority links via:

Social bookmarking: A short description, a title, and a link from BloggingZoom, Digg, or another social site is all that is needed to carry a decent, relevant amount of link juice to the target.

Targeted RSS syndication: Syndicate the article and make sure it links back. Send it to "hub pages" on content sites that accept syndicated articles via RSS (because Google won’t be looking inside RSS feeds, either). Aggregators like Technorati, which will index a snippet and a link, also make use of RSS feeds. (A minimal sketch of a syndicated item with a backlink follows this list.)

Authorized and unauthorized article syndication: Beard syndicates his articles to other publications with high PageRank. Link back to an un-crawled page from there and you’ve given it some much-needed power. What he calls "unauthorized syndication" we usually call "scraping." On the bright side, publishers can make the most of scrapers by not making a fuss, and instead requiring a link.

Targeting Universal Search: Use images, video/audio descriptions, etc., in unpaid content (which is also syndicated, I assume, to sites intended for that type of format) to point back to paid content.
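As a concrete (and entirely hypothetical) illustration of the RSS step above, here is a short Python sketch that builds a single RSS item whose description carries a link back to the blocked review. The titles, domains, and paths are invented, and a real feed would wrap this item in a full rss/channel envelope.

```python
# Minimal sketch of one RSS <item> carrying a backlink to the original,
# crawler-blocked review. All URLs and titles here are hypothetical.
import xml.etree.ElementTree as ET

item = ET.Element("item")
ET.SubElement(item, "title").text = "Acme Widget Review, Revisited"
ET.SubElement(item, "link").text = "https://second-domain.example/acme-followup/"
ET.SubElement(item, "description").text = (
    'Our follow-up to the <a href="https://first-domain.example/'
    'reviews/acme-widget-review/">original Acme Widget review</a>.'
)

# ElementTree escapes the embedded HTML, which is how RSS descriptions
# conventionally carry markup; feed readers un-escape it when rendering.
print(ET.tostring(item, encoding="unicode"))
```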

If Google doesn’t find a way to penalize it, this could be a viable (if involved) strategy. But it is also more akin to traditional web marketing: taking advantage of the channels you have to promote.** It’s doubtful that less legitimate paid linkers will take the time and effort to promote this way, but you have to admire Beard’s never-say-die attitude.

 

*This all hinges, of course, on whether it will work and for how long, and on how much you rely on Google as a search-traffic generator. The hard truth is that Google is the de facto search engine on the Net, so making el Goog happy, whether or not you agree with el Goog’s decrees, is an important part of the game. And nobody likes an unhappy el Goog.

**Google’s penalties seem also to be forcing webmasters to do (nearly) legitimate content and marketing work, which is an interesting side-development.  
 

  • http://www.quenet.org/ SEO Canada

    The only loophole in this loophole is that if Google follows Disallow as it should then the page would actually be removed completely from the index.  Not to mention your robots.txt makes for a good read to see where your paid reviews are…

  • http://liquidwealthonline.com Perfect Wealth Formula

    Well, we’ll see what kind of positive effect this may have for users on the net as the coming weeks and months unfold.  I would expect google to catch this eventually and create some crazy code to put a stop to it.  How I disdain google!

  • http://andybeard.eu Andy Beard – Niche Marketing

    Hi Jason

     

    It is a little complicated, but there isn’t anything really sinister about it, and it is something that actually happens naturally with good content.

    I have always advocated high editorial standards with paid review content. It has to stand on its own, and get high editorial praise.

    The blocked reviews don’t pass any juice, though they can rank because they are dangling pages that Google just can’t index, just like a terrible flash site in many ways.

    As a dangling page they also might not accumulate juice in the same way as a page that is part of the iteration process, but that is something hard to quantify.

    A lot of it boils down to a very distinct editorial process, such as when WPN selects articles for syndication. As an example, I did highlight one of my reviews that was a featured article on WPN sister site SearchNewz.

    That is a very prominent example, but syndication happens on all blogs whether you like it or not, in the form of splogs and scrapers.

    The same happens with social media and social networks which allow you to resyndicate your feed.

    Linking through to such profiles is a natural occurrence.

    As with all naturally occurring phenomena on the web, there are ways to encourage it, and sharing blog content is effectively the same as article marketing, only you have the ability to also provide graphical elements.

    This certainly doesn’t rely on a single occurance of an article on my domain ranking, I always have the alternative of a duplicate content page ranking instead (and I do nofollow all outgoing links on those pages with my nofollow those dupes plugin)

    Ultimately good content will find a way to be indexed, and if it also contains useful links, those links are likely to remain.

    With my syndicated content I allow modifications, thus if someone chose to add nofollow to any links to an advertiser that would be their choice in an editorial capacity – it is pure editorial, I wouldn’t suggest paying someone to syndicate my content.

    There are a few extra possibilities I haven’t yet discussed, but I need to get those more formalized before writing about them.

    p.s. for some reason the comment system was rejecting "Andy Beard"

    • Jason Lee Miller

      Hi Andy. I didn’t mean to imply this was anything “sinister.” That’s a strong word; I meant only that it may be seen as a way to manipulate the system. It’s hard to put a moral label on it, if there could be one, so I marked it as gray hat.

      So I’m confused about the dangling pages. I was going off this page at your site:

      http://andybeard.eu/2007/11/seo-linking-gotchas-even-the-pros-make.html

      where Matt Cutts first talks about NoIndex pages:

      Matt Cutts: A NoIndex page can accumulate PageRank, because the links are still followed outwards from a NoIndex page.

      Eric Enge: So, it can accumulate and pass PageRank.

      Matt Cutts: Right, and it will still accumulate PageRank, but it won’t be showing in our Index. So, I wouldn’t make a NoIndex page that itself is a dead end. You can make a NoIndex page that has links to lots of other pages.

      A little later you point to the definition of dangling links:

      “Dangling links are simply links that point to any page with no outgoing links. They affect the model because it is not clear where their weight should be distributed, and there are a large number of them. Often these dangling links are simply pages that we have not downloaded yet…”

      • http://andybeard.eu Andy Beard – Niche Marketing

        If you use nofollow on links, then the page is indexed, and at least theoretically the links are not meant to pass any juice.

        If you use robots.txt to block a page from being indexed, it is a bit like a black hole.

        Scientists know black holes exist and can even point in their direction, identifying their existence. They could probably give them a name. They wouldn’t know what happens to an object that passes into a black hole.

        The page can still appear in search results based on whatever offsite linking factors used to rank pages.

        As an example, I have some of my affiliate link redirects on my domain only blocked by robots.txt, and those links are indexed

        http://www.google.com/search?q=andybeard.eu%2FRecommends%2FSEO_Book.html

        I should really stick nofollow on all of those links to not waste juice. Just saying that is controversial in SEO circles as there are those that think you shouldn’t use nofollow for anything, and that you can’t leak juice.

        Just think of how many "secret" download locations you can find in Google that are supposedly blocked by robots.txt

         

         

        If you use meta robots noindex, then it is a bit like a time portal.

        Whilst people might point to it, it won’t appear in a search index. It can however still pass juice as it will still be spidered.

        In human terms I suppose this is like a dark secret room or tunnel. You can feel your way around inside it, and even find the exit, but you can’t see it, and you have been told not to tell anyone about it.

         

        A page that has, in addition, meta noindex,nofollow won’t appear in search results, but will still soak up juice, which is handled by the random-surfer dampening factor and passed on to a random page on the interweb.

        This for a human is a dark secret cave with no exit

         

        If people link to a page that is blocked by robots.txt, the juice is not being used effectively and can’t flow to other pages. However long term it is possible to change the permalink, and redirect the juice to other pages that provide more of an overview for the topic.

        This is actually a very natural occurrence, because whilst you might write about a company once as a paid review, or even a free article, you are more likely to write an update if situations change. This is especially true if you review a service and they make changes based upon that feedback. As an example, I wrote a review of Volusion and they have since added support for Aweber.

        If I subsequently write a 3rd post, then the ideal landing page for both search results and referrals might be a specially created page that then links to all 3 documents, as I described in this post.

        http://andybeard.eu/2007/11/optimizing-html-links.html

        Some might look at playing around with 301 redirects as a little greyhat, but the search engines would see the same as a user, and a user would have a better experience.

        It is actually timely to point out how this is now reflected in my own search results.

        http://www.google.com/search?q=wordpress+seo&start=10

        From Europe I am seeing 11th or 12th most of the time, from the USA I have been told 9th at times.

        All that has happened so far is no loss of ranking, and the snippet has disappeared.

        Soon I will probably get a double listing as the version on Searchnewz will also rank

         

  • Bored

    I normally delete all the spam I get from webpronews, but I actually read some of this. I think you people give entirely too much power to google.

    The only way google is finding out about paid links is from people advertising them and then the snitches go and tell on them.

    Besides, when the hell did google create the rules for the internet? When they start caring about my bottom line, I might give a crap about all their idiotic ‘rules’.

  • http://www.firelightwebstudio.com Guest

    Why would I bother with such a convoluted and time consuming way to try to get around paid links? There are so many ways that you can legitimately pay someone to help you get backlinks that have way more power, for far less of a time commitment.

    It is just silly to devise a complex workaround when a straight shot exists in more than one avenue already.

  • http://www.coleman-cartoons.com Guest

    I see nothing wrong with this and give credit to the person who came up with it.  First of all, Google’s AdWords program is just another form of paid links anyway.  If a webmaster can find a way to get more traffic to his site by paying a little money out in advertising, why not?  Who died and made Google boss anyway?

    • Ohio

      I am in agreement with many here; Yahoo and Google appear to be playing with web site owners in the paid link areas. Getting around something that will certainly be countered at the SE’s next meeting adds to the Web Pro News ink but, when it comes down to it, who really cares? Like newspapers long ago, the SE’s make their money off ads, not subscriptions.

      When both SE’s can accept my search word – Oxford University – and not display footwear links, then that will be something for WPN to write about.

       

    • http://www.thebibleistheotherside.org Michael

      I agree, Google’s AdWords is another form of paid links; however, does it improve your PR ranking if you have Google’s AdWords versus some website that doesn’t? I believe buying a PR ranking is wrong, but I don’t find placing an ad on another site wrong. The PR system should be fair for all, not just the ones who have a huge budget. If a company or website, even Christian in nature, decides to buy a "feature link" in a directory for the purpose of attracting more traffic but it wouldn’t improve a PR ranking, I find no problem with that. A code or something could ID it as a legit paid link, Google’s bot could ID it as such, and then it wouldn’t be calculated into the PR formula. Thus, no punishment such as reduced ranking, nor helping a page rank improve, and yet attracting more traffic using paid links.

      This, I think, would work better than just being for paid links or just being against paid links. But Google might be doing a Microsoft with their AdWords, which of course benefits them. Competition would diminish profits for Google; thus, the loophole.

  • http://www.healthandwellnessarticles.com/ Pedro

    I’d like to preface this by thanking the author for shedding some light on this subject.  The concept seems to make sense.  However, there is one thing I still don’t understand about this entire situation.

    How does Google know the difference between a link that was paid for privately between webmasters and a link that was posted free-of-charge?  I understand that it’s fairly easy to spot TLA links, but how can they tell if there are no scripts involved or code that gives it away (assuming they’re not using gmail to set up and g-checkouts to facilitate the transaction)?