
What Happens If You Block The Google Crawler


Although it’s a move few would consider, what would happen to a site if the webmaster decided to block the Google crawler from indexing it by saying so in the site’s robots.txt file?
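For reference, such a block presumably looks something like the following robots.txt rule, which tells Googlebot to stay away from the entire site (the thread doesn’t show the exact file, so this is only an illustrative sketch):

User-agent: Googlebot
Disallow: /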

This very subject was brought up on the WebmasterWorld forums by a poster who did just that, disallowing the Google crawler from accessing his site via the robots.txt file. Now the poster wonders what will happen to the site. Will it stay in the index and simply not be updated, or will it be removed altogether? The responses shed plenty of light:

Your page will be completely removed from the index. A detailed description is given here.

The responses delve deeper the further down you go:

I would expect that all information is removed from the index (even URL-only entries). The time until all pages are removed might depend on the crawling frequency.

Of course, as the thread continued, many wanted to know why the initial poster had decided on this course of action. Was the Googlebot using too much bandwidth, or was Google not sending enough visitors? Whatever the reason, if you have any desire to stay in Google’s index, don’t block its crawler.

Chris Richardson is a search engine writer and editor for WebProNews. Visit WebProNews for the latest search news.
