Robots.txt unblock
-
I'm currently having trouble with what appears to be a cached version of robots.txt. Errors in my Google sitemap account say I'm denying Googlebot access to the entire site. I uploaded a clean, allow-everything robots.txt yesterday, but I still receive the same error.
I've tried "Fetch as Googlebot" on the index and other pages, but the error persists. Here is the latest:
| Denied by robots.txt |
| 11/9/11 10:56 AM |
As I said, there has been no blocking in the robots.txt for 24 hours.
HELP!
-
OK. I tried "Configuration > Crawler Access" just now, but it still shows Nov 8th.
Then I took the risk and clicked "Fetch as Googlebot". Ta-da, I'm okay!
I was also able to resubmit the sitemap XML.
-
You are showing the status from yesterday's date, Nov 8th. It seems Google has not attempted to crawl your site today. The next time it does attempt to crawl your site it will notice the changed robots.txt file and can then crawl normally.
I've read that it can take 180 days(!) to unblock it. Is that right?
No. Google crawls much more frequently. Your site will likely be crawled again within the next day or so. Try checking again tomorrow.
-
Thank you, Ryan.
It all started with human error. The webmaster copied a staging server to the live server, including a "Disallow: /" directive.
The robots.txt he uploaded yesterday is the original, so it should be okay.
Configuration > Crawler Access says:
Downloaded: Nov 8, 2011 | Status: 200 (Success) | Home page access: Googlebot is blocked from thesite
I've read that it can take 180 days(!) to unblock it. Is that right?
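For reference, a staging robots.txt that blocks everything, versus a clean live version that allows everything, looks something like this (a sketch only; the actual files aren't shown in this thread):

```text
# Staging copy (blocks all crawlers -- this is what went live by mistake)
User-agent: *
Disallow: /

# Clean live copy (allows everything)
User-agent: *
Disallow:
```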
-
We cannot view your link as it is to your private Google WMT account.
Log into your Google WMT account, then choose Site Configuration > Crawler Access. The screen will show how many hours or days ago your robots.txt file was last updated.
Also, visit your site's robots.txt file to ensure it appears accurate. Most sites share their robots.txt file at www.mysite.com/robots.txt
Generally speaking, you want the cleanest robots.txt file possible. Robots.txt should be the absolute last method for blocking crawler access, used only when no other method can be implemented. Many site owners do more harm than good with their settings.
If more assistance is needed please share the URL to your robots.txt file along with the URL to a page which is being inappropriately blocked.
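One quick way to sanity-check a robots.txt file is Python's built-in `urllib.robotparser`, which applies the same allow/disallow logic locally. The rules and domain below are placeholders; paste in your own file's contents and the URLs Google reported as blocked:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt contents -- replace with your live file's text.
robots_txt = """\
User-agent: *
Disallow: /staging/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Ask whether Googlebot may fetch specific URLs under these rules.
print(rp.can_fetch("Googlebot", "https://www.example.com/"))           # allowed
print(rp.can_fetch("Googlebot", "https://www.example.com/staging/x"))  # blocked
```

If the parser blocks a page you expect to be open, the live file (not a cached copy) is the problem.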
Related Questions
-
Robots blocked by pages webmasters tools
A mistake was made in the software. How can I solve the problem quickly? Help me.
Intermediate & Advanced SEO | mihoreis
-
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
Intermediate & Advanced SEO | Gabriele_Layoutweb
-
Have a Robots.txt Issue
I have a robots.txt error that is causing me loads of headaches and is making my website fall off the SE grid. Moz and other sites say that I have blocked all crawlers from finding it. Could it be as simple as my having created a new website and forgotten to re-create a robots.txt file for the new site, so crawlers were still looking for the old one? I just created a new one. Google Search Console still shows severe health issues found in the property and says that robots.txt is blocking important pages. Does this take time to refresh? Is there something I'm missing that someone here in the Moz community could help me with?
Intermediate & Advanced SEO | primemediaconsultants
-
Robots.txt File Not Appearing, but Seems to Be Working?
Hi Mozzers, I am conducting a site audit for a client, and I am confused by what they are doing with their robots.txt file. GWT shows that there is a file and that it is blocking about 12K URLs (image attached). It also shows that the file was downloaded successfully 10 hours ago. However, when I go to the robots.txt URL, the page is blank. Could they be doing something advanced to block URLs while hiding the file from users? It appears to correctly be blocking log-ins, but I would like to know for sure that it is working correctly. Any advice on this would be most appreciated. Thanks! Jared
Intermediate & Advanced SEO | J-Banz
-
Should comments and feeds be disallowed in robots.txt?
Hi, my robots file is currently set up as listed below. From an SEO point of view, is it good to disallow feeds, RSS, and comments? I feel allowing comments would be a good thing because they're new content that may rank in the search engines, as the comments left on my blog often refer to questions or companies folks are searching for more information on. And the comments are added regularly. What's your take? I'm also concerned about /page being blocked; not sure how that benefits my blog from an SEO point of view. Look forward to your feedback. Thanks. Eddy

User-agent: Googlebot
Crawl-delay: 10
Allow: /*

User-agent: *
Crawl-delay: 10
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
Disallow: /rss/
Disallow: /comments/feed/
Disallow: /page/
Disallow: /date/
Disallow: /comments/

# Allow Everything
Allow: /*
Intermediate & Advanced SEO | workathomecareers
-
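If the goal is to keep comments and paginated archives crawlable, a slimmer version of a file like the one above might look something like this (a sketch only, not a recommendation for any specific site):

```text
User-agent: *
Crawl-delay: 10
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
Disallow: /rss/
```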
How to Disallow Tag Pages with Robots.txt
Hi, I have a site I'm dealing with that has tag pages, for instance http://www.domain.com/news/?tag=choice. How can I exclude these tag pages (about 20+ are being crawled and indexed by the search engines) with robots.txt? They're also sometimes created dynamically, so I want something that automatically excludes tag pages from being crawled and indexed. Any suggestions? Cheers, Mark
Intermediate & Advanced SEO | monster99
-
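Most major crawlers, including Googlebot, support wildcard patterns in robots.txt, so query-string tag pages like the one above can typically be excluded with pattern rules (a sketch; adjust the paths to match your own URL structure):

```text
User-agent: *
# Block any URL whose query string contains the tag parameter
Disallow: /*?tag=
Disallow: /*&tag=
```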
Robots
I have just noticed this in my code: <meta name="robots" content="noindex"> And have noticed some of my keywords have dropped. Could this be the reason?
Intermediate & Advanced SEO | Paul78
-
Should I robots block this directory?
There are about 43k pages indexed in this directory, and while they're helpful to end users, I don't see them being a great source of unique content for search engines. Would you robots block or meta noindex,nofollow these pages in the /blissindex/ directory? e.g. http://www.careerbliss.com/blissindex/petsmart-index-980481/ http://www.careerbliss.com/blissindex/att-index-1043730/ http://www.careerbliss.com/blissindex/facebook-index-996632/
Intermediate & Advanced SEO | CareerBliss
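For comparison, the meta-tag option mentioned in the question above is a single line placed in the head of each page in the directory (a sketch of the noindex,nofollow variant the asker describes):

```text
<meta name="robots" content="noindex, nofollow">
```

Unlike a robots.txt block, this lets crawlers fetch the pages but tells them not to index or follow links from them.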