Using Robots.txt To Prevent Search Indexing
Sometimes there are parts of your website you don’t want search engines to access, for any number of reasons: sensitive private data, articles that require a subscription, whatever.
Keeping Your Site Out Of Search Engines
Google recently posted a tutorial about how to use the robots.txt file to block search engines from indexing specific parts of the site.
Though we’ve talked about robots.txt at length, it’s often good to remember the newbies (hi, newbies!). So it was good timing that this tutorial was posted, even if product manager Dan Crow apologetically butchered a certain brilliant paragraph by Douglas Adams in the process.
He redeems himself by providing the following easy-to-decode example of how to implement one of these necessary files:
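(The example itself, reconstructed here from the explanation that follows rather than copied from Google's post, is just two lines:)

```
User-agent: Googlebot
Disallow: /logs
```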
The User-Agent line specifies that the next section is a set of instructions just for the Googlebot. All the major search engines read and obey the instructions you put in robots.txt, and you can specify different rules for different search engines if you want to. The Disallow line tells Googlebot not to access files in the logs sub-directory of your site. The contents of the pages you put into the logs directory will not show up in Google search results.
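If you want to double-check how a crawler reads such a file, Python's standard-library `urllib.robotparser` applies the same matching rules. A quick sketch — `example.com`, the paths, and the non-Google user-agent name are placeholders:

```python
from urllib.robotparser import RobotFileParser

# The two-line robots.txt described above (an illustration, not Google's file)
rules = """\
User-agent: Googlebot
Disallow: /logs
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot may crawl the homepage...
print(parser.can_fetch("Googlebot", "https://example.com/"))              # True
# ...but nothing under the logs sub-directory
print(parser.can_fetch("Googlebot", "https://example.com/logs/old.html")) # False
# Other crawlers get no instructions from this file, so they are unrestricted
print(parser.can_fetch("Bingbot", "https://example.com/logs/old.html"))   # True
```

That last line is why you can give different search engines different rules: each `User-agent` section only binds the crawlers it names.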
Preventing access to a file
If you have a news article on your site that is only accessible to registered users, you’ll want it excluded from Google’s results. To do this, simply add a META tag to the HTML file, so it starts something like:
<meta name="googlebot" content="noindex">
This stops Google from indexing this file. META tags are particularly useful if you have permission to edit the individual files but not the site-wide robots.txt. They also allow you to specify complex access-control policies on a page-by-page basis.
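In context, the top of such a registered-users-only page might look like this (a sketch; the title is a placeholder):

```
<!DOCTYPE html>
<html>
<head>
  <!-- Keep this article out of Google's index -->
  <meta name="googlebot" content="noindex">
  <title>Subscribers-Only Article</title>
</head>
<body>
  ...
</body>
</html>
```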
If you understand that, you’re far, far ahead of Belgian newspapers and their courts.