SES – How Search Algorithms Work (Probably)
The heart of search engine optimization is, of course, securing a high ranking on any given search engine. Those rankings are determined by closely guarded algorithmic formulas, which leaves search engine marketers to piece together clues from search engine patent applications, trial and error, and the general consensus of those who follow the field.
Our own Michael McDonald is attending the Search Engine Strategies Conference in San Jose and will be sending us reports as the event unfolds. The following is taken from one of the first available seminars devoted to unraveling the secrets of the search algorithms.
Note: Discuss this topic in WebProWorld.
Rand Fishkin of SEOmoz.org walked listeners through various factors influencing search engine algorithms, based on historical analysis and information deduced from Google's patent application.
According to Fishkin, algorithmic calculations begin with the spider's first crawl of a site, discovered through a hyperlink. Registration data appears to play a role, including the date of domain purchase and the length of time for which a domain is registered. Domains registered for more than a year are thought less likely to be spam or "throwaway" domains.
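To make the idea concrete, here is a rough sketch of how a registration-length signal might be scored. The function, thresholds, and score values are invented for illustration; they are not from Google's patent or Fishkin's talk.

```python
from datetime import date

def registration_trust_signal(registered: date, expires: date) -> float:
    """Hypothetical score: longer registration windows look less like
    throwaway spam domains. All thresholds are illustrative guesses."""
    years_paid_for = (expires - registered).days / 365.0
    if years_paid_for >= 5:
        return 1.0   # long commitment: strong "legitimate" signal
    if years_paid_for >= 1:
        return 0.5   # typical registration: neutral
    return 0.1       # under a year: possible throwaway domain

print(registration_trust_signal(date(2005, 8, 1), date(2010, 8, 1)))  # prints 1.0
```

A real engine would presumably blend a signal like this with many others rather than use it as a hard filter.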
Content changes seem to carry particular weight: when webpage content is updated, the page becomes more relevant. What is not important to the spiders are cosmetic changes to the site, such as background, color schemes, et cetera.
With regard to links, those with staying power across revisions of a webpage appear to be considered more valuable and relevant. In other words, if a website changes its style or content but certain outbound links remain in place, those links continue to aid that page's relevance and ranking.
Combining what we know about the value of regularly updated content and the power of stable links, we can underscore the importance of site selection when link building. An SEO pro will try to determine how often a page that links to his site is updated and how stable the link's presence on that page is.
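One way to picture the combined heuristic is a toy weighting of a link's age against how often the hosting page is refreshed: a link that survives many content updates reads as a stronger endorsement. This sketch and its weights are hypothetical, not a description of Google's actual formula.

```python
def link_value(link_age_days: int, page_updates_per_year: int) -> float:
    """Illustrative weighting: a link that persists (stability) on a page
    that is frequently refreshed (freshness) earns a higher value.
    The caps and weights here are made up for demonstration."""
    stability = min(link_age_days / 365.0, 3.0)        # cap credit at ~3 years
    freshness = 1.0 + min(page_updates_per_year, 12) / 12.0
    return stability * freshness
```

Under this toy model, a year-old link on a monthly-updated page scores twice as high as the same link on a page that never changes.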
For every website, Google also keeps tabs on the rate of linking. If, in this temporal analysis, the average rate of linking over a given period has been "x," but link acquisition suddenly balloons, then page rank may also see an increase.
However, spikes in popularity can be a red flag for spamming techniques, so a more qualitative analysis of those links is needed. Here are some of the factors that go into link analysis to protect against link spamming:
Freshness: How often is a link appearing or disappearing?
Trustworthiness: How trusted is the source of the links (i.e., is it from a .gov or a .edu domain)?
Speed, Number, Topic: How fast are links to a website appearing? Why are there 5,000 new links to a website in one day? Is this the result of spam, or of some topical phenomenon like the 2004 tsunami? Tsunami-related links would be deemed more natural than thousands of new "Work from Home" links.
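The factors above can be sketched as a simple burst detector: flag a link spike when today's new links far exceed the historical average and few of them come from trusted sources. The thresholds and the trusted-fraction input are invented for illustration.

```python
def link_spike_flag(daily_new_links: list[int], trusted_fraction: float) -> bool:
    """Flag a suspicious burst: the latest day's new links dwarf the
    historical average, and almost none come from trusted sources
    (.gov/.edu-style domains). Thresholds are illustrative guesses."""
    history, today = daily_new_links[:-1], daily_new_links[-1]
    avg = sum(history) / len(history)
    burst = today > 10 * max(avg, 1.0)   # 10x the historical rate
    return burst and trusted_fraction < 0.05

# A 5,000-link day with almost no trusted sources looks like spam;
# the same burst with many trusted sources looks like a topical event.
print(link_spike_flag([3, 4, 2, 3, 5000], 0.01))  # prints True
print(link_spike_flag([3, 4, 2, 3, 5000], 0.30))  # prints False
```

The second call mirrors the tsunami example: a genuine news event produces the same burst, but with links from sources a spammer could not easily fake.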
Domain ranking history seems to be another point of examination. Jumps in page rank are apt to be scrutinized more heavily. Seasonality or “burstiness” also may play a role in this, similar to the topical phenomenon described above.
Conversely, a large reduction in traffic is also taken into account. Simply put, a large loss of traffic equals a loss of credibility and a loss of relevance.
The advertisers on your page (their number and the quality of their ads) also seem to affect relevance, reflecting the popularity of your site.
Finally, the number of times a page is selected from the SERPs, the amount of time users spend on a page, the time between return trips to Google, and measured use of the Back button may all play a role in determining a page's rank and relevance.
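Those usage signals can be imagined as a blended score, something like the toy formula below. The weights, the one-minute dwell cap, and the function itself are made up to illustrate the idea; nothing here comes from Google's patent.

```python
def behavior_signal(ctr: float, dwell_seconds: float, back_rate: float) -> float:
    """Toy blend of the usage signals mentioned above; weights are invented.
    High click-through and long dwell time help a page; quick Back-button
    returns to the results page hurt it."""
    dwell = min(dwell_seconds / 60.0, 1.0)   # cap dwell credit at one minute
    return 0.4 * ctr + 0.4 * dwell - 0.2 * back_rate
```

A page that gets clicked often, holds visitors for a minute, and rarely sends them straight back to Google would score well; a page users bounce off immediately would not.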