Google Panda’s Beauty Pageant For Geeks
Editor’s Note: The author of this post, Morris Rosenthal, runs Foner Books – a site which was heavily impacted by Google’s Panda update. WebProNews took a closer look at his story here. It’s an interesting look at how a legitimate author with seemingly high-quality content, but perhaps not the most modern site design, took a big hit from the update. He offered his thoughts about it in a post earlier this year. Last month, he had another interesting Panda-related story, in which one of his three sites, the one he deemed the lowest quality of the three, actually went up in search visibility.
Ever since Google explained that their February Panda release was judging websites on whether you would be comfortable taking medical advice from the site or handing over your credit card, I’ve wondered about the criteria an algorithm could employ for this evaluation. So I wrote the article “Why Panda is the New Coke” for WebProNews, in which I stated that it must come down to a beauty contest, one that an algorithm could understand.
In recent weeks, a document reported by SEO professionals to be the Google raters’ guidelines was leaked on the web. Google has employed human website raters for the last five years, hiring work-at-home moms and dads through agencies like LeapForce. So how do the human rating guidelines tie into the Panda site-wide penalty algorithm, and are Panda penalties truly automated, or are they really based on stored-up ratings from human evaluations?
The Google raters document primarily deals with recognizing blackhat SEO and spam sites, but there’s one category where the raters (or the Panda algorithm) are asked to make a judgment call based on what I’d call technical aesthetics. In the definition of thin affiliate sites, they start by stating that sites with affiliate links to stores like Amazon are probably thin affiliates. It’s the most confusing chapter in their guidelines as they pile on caveats as to when a site with affiliate links may be kosher and when it’s not. To make the final decision, they run a beauty contest, with the following components for recognizing a true “merchant”:
- Does the site have an active users forum?
- Does the site include an ever-present shopping cart?
- Can you log in to the site, or establish a wish list?
- Does the site provide a physical address and a shipping calculator?
- Is there a way to track packages on the site?
While they go on to say that a site need not have ALL of these things to be legitimate, the damage is already done. My sites don’t have any of these features. Would human raters or the Panda algorithm, presented with whole chapters or entire books for free reading, realize that these pages aren’t coming from Amazon and aren’t legally available anywhere else on the web? For a fast-moving human rater or an infant artificial intelligence algorithm following these guidelines, links to Amazon without lots of technical fixings are a sure sign of a turkey.
After Google tripled the visitor count to my IFITJAMS site over pre-Panda levels last month, I’ve become a bit obsessed with figuring out why my lowest-effort, lowest-authority, least-linked site by far has been rated higher quality than my eleven-year-old publishing company website and the fifteen-year-old website I started with a fellow author. The only reason I’ve been able to find for Google’s Panda to apply a penalty that lowered Google traffic to DAILEYINT and FONERBOOKS by a factor of five (an 80% drop) while boosting traffic to IFITJAMS by over 300% is that the older sites linked to Amazon Associates and eJunkie for buying our books.
The following two images are from above the fold (on a large screen) of the most popular pages on IFITJAMS and FONERBOOKS. Other than the book-buying links and the very recent addition of social buttons to the FONERBOOKS page, they are identical in form and approach to the subject matter. The FONERBOOKS page has drawn 966 links from 369 domains, and its parent page for computer troubleshooting has a couple thousand incoming links. The IFITJAMS page has only 154 links from 54 domains, but today it gets three or four times as many visitors from Google as the FONERBOOKS page.
Prior to Panda, the FONERBOOKS page got five times as many visitors from Google as the IFITJAMS page. That’s a 15X difference from Panda iterations and it has nothing to do with quality of the information or how people actually searching for information over the years have rewarded the pages with organic links. The Google raters, by the way, whether humans or bears, don’t search for information to do their rating. They are assigned a URL and a search phrase it ranked for and told to rate in accordance with the intent of the query. I can imagine how well that works for phrases that the humans or the bears never would have dreamed of using themselves.
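The ratio arithmetic above can be sketched with illustrative numbers. The absolute visitor counts below are hypothetical; only the ratios (5:1 before Panda, roughly 1:3 after, a factor-of-five drop for FONERBOOKS and a tripling for IFITJAMS) come from the post:

```python
# Hypothetical visitor counts illustrating the traffic-ratio swing
# described above; only the ratios are taken from the post.

pre_foner = 5000      # pre-Panda: FONERBOOKS drew 5x the IFITJAMS traffic
pre_ifitjams = 1000

post_foner = 1000     # post-Panda: FONERBOOKS fell by a factor of five (80%)
post_ifitjams = 3000  # while IFITJAMS tripled over pre-Panda levels

pre_ratio = pre_foner / pre_ifitjams      # 5.0  (FONERBOOKS ahead 5:1)
post_ratio = post_foner / post_ifitjams   # ~0.33 (IFITJAMS ahead ~3:1)

swing = pre_ratio / post_ratio            # 15.0 -- the "15X difference"
print(f"relative swing: {swing:.0f}X")
```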
Both of the older sites that got killed by Panda are publishing companies, and we gave up direct mail-order sales for books in favor of sending customers to Amazon years ago. The main reason is to benefit from Amazon’s virtuous circle: the more books you sell on Amazon, the more they sell for you. But there’s also a great advantage for mom-n-pop companies in having somebody else handle the shipping and tracking, not to mention the credit card handling.
Google spokespeople are fond of talking about their “over 500 ranking signals,” but anybody who has been publishing online for longer than Google has existed can tell when a site-wide penalty is being applied. I’m not ashamed to admit that my IFITJAMS website is inferior by any and all measures to my FONERBOOKS and the shared DAILEYINT websites, yet as I detailed in a blog post last month, the serious sites are the ones that got Pandalized first.
There was never a question about the quality of content for these websites, it’s always come down to Google looking at some technical aesthetic and calling it “quality” because it suits their myth. Since Panda, I’ve routinely found and reported scraper sites ranking above the Pandalized websites for stolen text. In a chicken-and-egg scenario, the Panda algorithm may be unable to release my websites from the penalty box because Google now thinks I’m the one with the duplicate content. Or perhaps Panda 1.0 was driven by misguided human raters all along. Maybe they never revisit their original judgments, particularly after a site no longer ranks for whatever search phrase brought it into the rating queue to start with, as I’ve yet to hear of any substantial Panda 1.0 recoveries.
Before my IFITJAMS site recovered from the second Panda iteration, I had given up on Internet publishing and was looking for ways to move on. Now I’m so motivated to troubleshoot Pandalization that I’m thinking of running a contest and offering thousands of dollars in prize money for help. The problem is, with Panda updates of any sort only happening once a month or so, how could I associate a particular change with recovery, if that ever happens? For now, I’m planning to wait for New Year’s, and if neither site has recovered, I’ll pull the Amazon links from one of them and tell book buyers they’ll just have to go to Amazon and search on the title.
A note saying you can buy it on Amazon, without a link; won’t that make me look like a technical guru.