
Ranking Google Ranking Factors By Importance

Written by Chris Crum
    Rand Fishkin and SEOmoz polled 132 SEO experts and analyzed data from over 10,000 Google search results in an attempt to rank the importance of ranking signals. It’s not confirmed fact, obviously. Google won’t provide such information, but I suppose the next best thing is the collective opinion of a large group of people who make their livings getting sites to rank in search engines, and Fishkin has put together an impressive presentation.

    Do you think Google is ranking search results effectively? Comment here.

    You can view the entire presentation here, but I’ve pulled out a few key slides that basically sum up the findings.

    The factors are broken down into the following subsets, each ranked against other related factors: overall algorithmic factors, page-specific link signals, domain-wide link signals, on-page signals, domain name match signals, social signals, and the most positively and negatively correlated metrics overall.

    The results find that page-level link metrics are the top algorithmic factors (22%), followed by domain-level link authority features (21%). The ordering is similar to SEOmoz’s 2009 poll, but the numbers have shifted dramatically: in 2009, page-level link metrics accounted for 43%, suggesting experts are now less certain of how much weight they carry.

    [Slide: Search ranking factors]

    Page-specific link signals are defined as metrics based on links that point specifically to the ranking page. This is how the results panned out there:

    [Slide: Page-specific link factors]

    According to Fishkin, the main takeaways here are that SEOs believe the power of links has declined, that link diversity matters more than raw quantity, and that exact-match anchor text appears slightly less well-correlated than partial-match anchor text in external links.

    Domain-wide link signals are defined as metrics based on links that point anywhere on the ranking domain. Here is what the poll looked like in this department:

    [Slide: Domain-level link factors]

    The report compares followed vs. nofollowed links to the domain and page, finding that nofollow links may indeed help with rankings:

    [Slide: Nofollow vs. followed links]

    On-page signals are defined as metrics based on keyword usage and features of the ranking document. Here’s what the poll looked like on these:

    [Slide: On-page factors]

    Fishkin determines that while it’s tough to differentiate with on-page optimization, longer documents tend to rank better (possibly as a result of Panda), long titles and URLs are still likely bad for SEO, and using keywords earlier in tags and docs “seems wise”.

    Here is how the domain name extensions in search results shook out:

    [Slide: Domain extensions]

    Here are the poll results on social-media-based ranking factors (which Google has seemingly been putting more emphasis on of late):

    [Slide: Social factors]

    Fishkin suggests that Facebook may be more influential than Twitter, or that it might simply be that Facebook data is more robust and available for URLs in SERPs. He also determines that Google Buzz is probably not in use directly, as so many users simply have their tweet streams go to Buzz (making the data correlation lower). He also notes that there is a lot more to learn about how Google uses social.

    Since February 2010, Andy Beard has been testing whether links posted in Google Buzz pass PageRank or help content get indexed. He now claims to have evidence that Buzz is used for indexing.

    Danny Sullivan asked Google’s Matt Cutts about the SEOmoz ranking factors survey in a Q&A session at SMX Advanced this week – specifically about the correlation between Facebook shares and Google rankings. Cutts is quoted as saying, “This is a good example of why correlation doesn’t equal causality because Google doesn’t get Facebook shares. We’re blocked by that data. We can see fan pages, but we can’t see Facebook shares.”

    The SEOmoz presentation itself has a lot more info about the methodology used and how the correlation worked out.
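    For readers unfamiliar with how correlation studies like this are generally run, here is a minimal illustrative sketch. It is not SEOmoz’s actual code, and the data is invented; it simply assumes a rank correlation (such as Spearman’s) is computed between a single metric and result position for a query, which is one common approach in studies of this kind.

    ```python
    # Illustrative sketch only -- not SEOmoz's actual methodology or data.
    # It computes a rank correlation between a hypothetical metric and
    # search result position for the top 10 results of a single query.
    from scipy.stats import spearmanr

    positions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # 1 = top result
    linking_domains = [480, 350, 410, 120, 90, 200, 60, 45, 30, 25]  # made-up metric

    # A negative coefficient means the metric tends to be larger for
    # higher-ranked (lower-numbered) positions.
    coefficient, p_value = spearmanr(positions, linking_domains)
    print(f"Spearman correlation: {coefficient:.2f} (p = {p_value:.3f})")

    # As Cutts' comment above underlines, a strong correlation like this
    # says nothing about causation -- the metric and the rankings could
    # both be driven by something else entirely.
    ```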

    All of the things covered in the presentation should be taken into consideration, particularly for sites that have experienced significant drops in rankings (because of things like the Panda update or other algorithm tweaks). We recently discussed with Dani Horowitz of DaniWeb a number of other things sites can do that may help rankings in the post-Panda Google search index. DaniWeb had been hit by Panda, but has seen a steady uptick in traffic since making some site adjustments, raising the possibility of Panda recovery.

    Barry Schwartz at Search Engine Roundtable polled his readers about Panda recovery, and 4% said they had fully recovered, while more indicated that they had recovered partially. Still, the overwhelming majority had not recovered, indicating that Google probably did its job right for the most part (though some sites that didn’t deserve to get hit were still hit). In that same Q&A session, Cutts said, “The general rule is to push stuff out and then find additional signals to help differentiate on the spectrum. We haven’t done any pushes that would directly pull things back. We have recomputed data that might have impacted some sites. There’s one change that might affect sites and pull things back.”

    A new adjustment to the Panda update has been approved at Google, but has not rolled out yet, he says. This adjustment will be aimed at keeping scraped content from ranking over original content.

    Home Page Content

    There have also been other interesting bits of search-related information coming out of Google this week. Cutts posted a Webmaster Central video talking about the amount of content you should have on your homepage.

    “You can have too much,” said Cutts. “So I wouldn’t have a homepage that has 20MB. You know, that takes a long time to download, and users who are on a dial-up or a modem, a slow connection, they’ll get angry at you.”

    “But in general, if you have more content on a home page, there’s more text for Googlebot to find, so rather than just pictures, for example, if you have pictures plus captions – a little bit of textual information can really go a long way,” he continued.

    “If you look at my blog, I’ve had anywhere from 5 to 10 posts on my main page at any given time, so I tend to veer towards a little more content when possible,” he added.

    Who You Are May Count More

    Who you are appears to be becoming more important in Google. Google announced that it’s supporting authorship markup, which it will use in search results. The company is experimenting with using the data to help people find content from authors in results, and says it will continue to look at ways it could help the search engine highlight authors and rank results. More on this here.

    Search Queries Data from Webmaster Tools Comes to Google Analytics

    Google also launched a limited pilot for search engine optimization reports in Google Analytics, tying Webmaster Tools data to Google Analytics after much demand. The reports will use Search Queries data from Webmaster Tools, which includes:

  • Queries: The total number of search queries that returned pages from your site over the given period. (These numbers can be rounded, and may not be exact.)
  • Query: A list of the top search queries that returned pages from your site.
  • Impressions: The number of times pages from your site were viewed in search results, and the percentage increase/decrease in the daily average impressions compared to the previous period. (The number of days per period defaults to 30, but you can change it at any time.)
  • Clicks: The number of times your site’s listing was clicked in search results for a particular query, and the percentage increase/decrease in the average daily clicks compared to the previous period.
  • CTR (clickthrough rate): The percentage of impressions that resulted in a click to your site, and the increase/decrease in the daily average CTR compared to the previous period.
  • Avg. position: The average position of your site on the search results page for that query, and the change compared to the previous period. Green indicates that your site’s average position is improving. To calculate average position, we take into account the ranking of your site for a particular query (for example, if a query returns your site as the #1 and #2 result, then the average position would be 1.5).
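    To make these definitions concrete, here is a minimal sketch (not Google’s code) that computes CTR and average position from hypothetical per-query data shaped like the fields above; the queries and numbers are invented.

    ```python
    # Hypothetical Search Queries rows: (query, impressions, clicks,
    # positions at which the site appeared for that query).
    rows = [
        ("blue widgets", 1200, 84, [1, 2]),   # site held positions 1 and 2
        ("widget reviews", 900, 27, [4]),
        ("cheap widgets", 450, 9, [7]),
    ]

    for query, impressions, clicks, positions in rows:
        ctr = clicks / impressions * 100                # CTR = clicks / impressions
        avg_position = sum(positions) / len(positions)  # e.g. (1 + 2) / 2 = 1.5
        print(f"{query}: {impressions} impressions, {clicks} clicks, "
              f"CTR {ctr:.1f}%, avg. position {avg_position:.1f}")
    ```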
    This week, we also ran a very interesting interview between Eric Enge and Bill Slawski addressing Google search patents and how they might relate to the Google Panda update.

    Back to the SEOmoz data. Do you think the results reflect Google’s actual algorithm well? Tell us what you think.
