Our crawler was not able to access the robots.txt file on your site
-
Hello Mozzers!
I've received an error message saying the site can't be crawled because Moz is unable to access the robots.txt file. I've spoken to the webmaster and he can't understand why Moz can't access it:
https://www.thefurnshop.co.uk/robots.txt
and Google isn't flagging anything up to us.
Does anyone know how to solve this problem?
Thanks
-
@LoganRay This was our issue. We didn't know Moz tries to retrieve the HTTP robots.txt first. Our HTTP-to-HTTPS redirect was failing only for static files, so the HTTP path to the robots.txt was broken. We hadn't noticed because the HSTS policy was forcing the browser to redirect regardless.
-
Wanted to jump back in on this topic as I've just confirmed my initial suspicion.
I just added a new client to our Moz account and hit the exact same issue: the crawler was unable to access the robots.txt file. It's a secure site, but it was configured in Moz without the HTTPS. When I go to the robots.txt file without https://www, it redirects the same way yours does: the / between the TLD and the page path gets removed.
Reconfigure your site in Moz with the full https:// address and it should begin to work.
-
There are two parts of your robots.txt that could be causing this, and it all depends on how each bot reads the wildcard patterns in the file:
First, your Disallow: /? rule may be read differently by different bots. Try replacing it with Disallow: /*? — "disallow all paths starting with "/", followed by zero or more characters ("*"), and one "?" character" — so that nothing with a query string gets crawled (which is what I believe you were going for).
Second, you have an open (empty) Disallow followed by User-agent: rogerbot. It shouldn't be read as belonging to the rogerbot block, but once again, it depends on how each bot reads the commands. To fix this, change the Disallow following your Googlebot-Image line to Disallow: /
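The "it depends on how each bot reads it" point is easy to demonstrate. Here's a small sketch using Python's standard-library robots.txt parser, with hypothetical rules modelled on the ones above: urllib.robotparser does plain prefix matching and does not implement the "*" wildcard that Googlebot honours, so the two rule styles behave very differently in it:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, modelled on the ones discussed above.
rules = [
    "User-agent: *",
    "Disallow: /*?",        # meant to block query strings; wildcard-aware bots only
    "Disallow: /checkout",  # plain prefix rule; every parser agrees on this one
]

rp = RobotFileParser()
rp.parse(rules)

# Prefix rule: unambiguous across parsers.
print(rp.can_fetch("rogerbot", "https://example.com/checkout/step1"))  # False

# Wildcard rule: the stdlib parser treats "/*?" literally, so query-string
# URLs stay allowed here, while a wildcard-aware parser would block them.
print(rp.can_fetch("rogerbot", "https://example.com/page?x=1"))        # True
```

Two parsers, same file, opposite answers for the wildcard rule — which is exactly why ambiguous lines like the empty Disallow are worth tidying up.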
-
Hi there,
There's something odd going on when I try to access your robots.txt file without the www. The www gets added back on, but when it does, the slash between the TLD and the page path gets deleted, so the redirect target ends up as https://www.thefurnshop.co.ukrobots.txt. I'm guessing your domain in Moz is configured without the www, which means RogerBot is getting redirected to this slash-less version of the file.
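If you want to confirm the broken redirect programmatically, here's a small sketch (the `looks_fused` helper is a hypothetical name) that flags a Location value whose file name has been fused onto the domain:

```python
# Spot the "missing slash" redirect bug: when the slash is dropped, the whole
# string parses as a host name and the URL path comes back empty.
from urllib.parse import urlparse

def looks_fused(location: str, filename: str = "robots.txt") -> bool:
    """True when a redirect target has lost the slash between host and path,
    so the file name ends up fused onto the end of the domain."""
    parsed = urlparse(location)
    return parsed.path == "" and parsed.netloc.endswith(filename)

print(looks_fused("https://www.thefurnshop.co.ukrobots.txt"))   # True
print(looks_fused("https://www.thefurnshop.co.uk/robots.txt"))  # False
```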