Google has released a new Webmaster Help video in response to a question from a user who has been having trouble getting Google to fetch their robots.txt file. Here’s what the user said:
“I’m getting errors from Google Webmaster Tools about the Googlebot crawler being unable to fetch my robots.txt 50% of the time (but I can fetch it with 100% success rate from various other hosts). (On a plain old nginx server and an mit.edu host.)”
Google’s Matt Cutts begins by indicating that he’s not saying this is the case here, but…
“Some people try to cloak, and they end up making a mistake, and they end up reverse-cloaking. So when a regular browser visits, they serve the content, and when Google comes and visits, they will serve empty or completely zero-length content. So every so often, we see that – where in trying to cloak, people actually make a mistake and shoot themselves in the foot, and don’t show any content at all to Google.”
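One rough way to spot the reverse-cloaking mistake Cutts describes is to fetch the same URL twice, once with a normal browser User-Agent and once with Googlebot's published User-Agent string, and compare the response bodies. The sketch below is illustrative, not an official Google check: the helper names are our own, and sending Googlebot's header from your own machine only approximates what real Googlebot sees (sites that verify crawlers by IP or reverse DNS may respond differently).

```python
# Illustrative sketch: compare what a site serves to a browser User-Agent
# versus Googlebot's User-Agent, to catch accidental "reverse cloaking"
# (full content for browsers, empty body for Googlebot).
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch_with_ua(url, user_agent, timeout=5):
    """Fetch `url` with the given User-Agent header and return the raw body."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

def is_reverse_cloaked(browser_body, bot_body):
    """True if a browser got content while the Googlebot request got an empty body."""
    return len(browser_body) > 0 and len(bot_body) == 0

# Usage (substitute your own URL):
#   browser_body = fetch_with_ua("https://example.com/", BROWSER_UA)
#   bot_body = fetch_with_ua("https://example.com/", GOOGLEBOT_UA)
#   print(is_reverse_cloaked(browser_body, bot_body))
```

Keeping the comparison logic in its own small function makes it easy to test without network access, and to swap in stricter checks (status codes, content diffs) than a simple empty-body test.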
“But, one thing that you might not know, and most people don’t know (we just confirmed it ourselves), is you can use the free Fetch as Googlebot feature in Google Webmaster Tools on robots.txt,” he adds. “So, if you’re having failures 50% of the time, then give that a try, and see whether you can fetch it. Maybe you’re load balancing between two servers, and one server has some strange configuration, for example.”
Something to think about if this is happening to you (and hopefully you’re not actually trying to cloak). More on Fetch as Google is available in Google’s Webmaster Tools help documentation.
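Cutts’s load-balancing theory can also be tested from your own machine: fetch robots.txt many times in a row and record the success rate. A rate that hovers near 50% across repeated attempts is consistent with one misconfigured server behind a two-way load balancer. Here is a minimal sketch using only Python’s standard library; the domain is a placeholder to replace with your own.

```python
# Minimal sketch: repeatedly fetch a site's robots.txt and report the
# fraction of attempts that return a non-empty 2xx response. A rate near
# 0.5 over many attempts suggests one bad server behind a load balancer.
import urllib.request
import urllib.error

def robots_success_rate(base_url, attempts=10, timeout=5):
    """Fetch base_url + '/robots.txt' `attempts` times; return the success fraction."""
    ok = 0
    for _ in range(attempts):
        try:
            url = base_url.rstrip("/") + "/robots.txt"
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if 200 <= resp.status < 300 and resp.read():
                    ok += 1
        except (urllib.error.URLError, OSError):
            pass  # count timeouts, DNS errors, and HTTP errors as failures
    return ok / attempts

# Usage (substitute your own domain):
#   print(robots_success_rate("https://example.com", attempts=20))
```

Note that a clean run from one location does not rule out the problem: if the balancer pins you to the healthy server by IP, you may see 100% success while Googlebot, coming from different addresses, keeps hitting the broken one. That is exactly why the Fetch as Googlebot check Cutts mentions is worth running as well.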