How long will Google take to read my robots.txt after updating?
-
I updated www.egrecia.es/robots.txt two weeks ago and the duplicate title and content issues on the website still haven't cleared. The Google SERPs no longer show those URLs, but neither the SEOmoz crawl report nor Google Webmaster Tools recognizes the change.
How long will it take?
-
What I mean is, here are the relevant entries from the website's access log:
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:21:53:00 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:22:05:33 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:23:01:31 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:23:44:15 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:23:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:00:16:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:00:46:02 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:00:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:01:24:08 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:01:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:01:51:17 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:02:32:28 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:02:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:02:56:28 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:03:40:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:03:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:04:01:29 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.88.227 - - [22/May/2012:04:38:59 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:04:43:06 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:04:51:02 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
-
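The cadence visible in the excerpt above can be confirmed programmatically by pulling the timestamps out of the log lines and computing the gap between successive robots.txt fetches per crawler IP. A minimal sketch, assuming the standard Apache combined log format shown in the excerpt:

```python
import re
from datetime import datetime

# Two sample entries in Apache combined log format (taken from the excerpt above).
LOG_LINES = [
    '66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]

# Capture the client IP and the bracketed timestamp of robots.txt requests.
PATTERN = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "GET /robots\.txt')

def robots_fetch_intervals(lines):
    """Return a dict mapping IP -> list of gaps (in minutes) between robots.txt fetches."""
    fetches = {}
    for line in lines:
        m = PATTERN.match(line)
        if not m:
            continue
        ip, ts = m.groups()
        when = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")
        fetches.setdefault(ip, []).append(when)
    return {
        ip: [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
        for ip, times in fetches.items()
    }

print(robots_fetch_intervals(LOG_LINES))  # {'66.249.73.219': [60.0]}
```

Run against the full excerpt, this shows each Googlebot IP returning at roughly one-hour intervals, which matches the behavior described in the answer below.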
Thanks Alan. So to see this in the log, do you check the cached version of the URL?
-
Hello Christian.
It depends on many things.
In my logs today I can see four distinct Googlebot IPs, and each one has fetched robots.txt at roughly hourly intervals.
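To run the same check on your own server, it is enough to filter the raw access log for Googlebot requests to /robots.txt. A sketch, assuming a combined-format log; the log path in the comment is a placeholder, not a real path from this thread:

```python
# Sketch: scan access-log lines for Googlebot's robots.txt fetches.

def googlebot_robots_hits(lines):
    """Return only the entries where a Googlebot user agent requested /robots.txt."""
    return [line for line in lines
            if '"GET /robots.txt' in line and "Googlebot" in line]

sample = [
    '66.249.73.206 - - [22/May/2012:04:43:06 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.1 - - [22/May/2012:04:44:00 -0700] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
]
print(len(googlebot_robots_hits(sample)))  # 1

# On a live server you would feed it the real file, e.g.:
# with open("/var/log/apache2/access.log") as f:
#     for hit in googlebot_robots_hits(f):
#         print(hit)
```

Keep in mind that the user-agent string can be spoofed; Google recommends a reverse DNS lookup on the requesting IP to confirm a hit really came from Googlebot.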