Moz "Crawl Diagnostics" doesn't respect robots.txt
-
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like:
- Duplicate content
- Overly dynamic URLs
- Duplicate Page Titles
The duplicate content and URLs it's found are all blocked in robots.txt, so why am I seeing these errors?
Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which the Moz bot ignored):

Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/

Many thanks for any info on this issue.
-
Hi Si, has this issue been resolved?
-
Hey Si,
Thanks for writing in. We don't seem to be having an overarching issue with our crawler ignoring robots.txt files, so I did some research in Google Webmaster Tools, and it looks like most crawlers require an asterisk in the disallow directive to recognize that all pages of a dynamic URL are being disallowed. The "Pattern Matching" section of this resource, http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449, has more information about setting up robots.txt with the correct disallow directives to block those pages.
If you add the asterisk to the disallow directive and you are still seeing these pages crawled, it would help if you sent an email with your campaign information to our support desk at [email protected] so our engineers can look into this more directly.
I hope this helps.
Chiaryn
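To make the pattern-matching point above concrete, here is a minimal sketch in Python (not Moz's actual crawler code; the rule and path values are hypothetical) of how a wildcard-aware crawler interprets a Disallow value: "*" matches any run of characters, "$" anchors the end of the path, and the rule is matched from the start of the path.

```python
import re

def robots_rule_matches(rule: str, path: str) -> bool:
    """Check whether a robots.txt Disallow value matches a URL path.

    Implements the common wildcard extension: '*' matches any run of
    characters, '$' anchors the end of the path, and the rule is
    matched against the beginning of the path.
    """
    regex = ""
    for ch in rule:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

# Without a wildcard, "/?mode=" only matches URLs that literally start
# with "/?mode=" -- not deeper paths carrying the same parameter.
print(robots_rule_matches("/?mode=", "/?mode=grid"))          # True
print(robots_rule_matches("/?mode=", "/category?mode=grid"))  # False
print(robots_rule_matches("/*?mode=", "/category?mode=grid")) # True
```

This is why "Disallow: /*?mode=" blocks the parameter everywhere, while "Disallow: /?mode=" only blocks it on the site root.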
-
If you have an "index,(no)follow" meta tag on those pages, I think they will be crawled even though you have them blocked in robots.txt. So adding "noindex" on those pages might make it work as you want.
-
Is the / actually in the URL at that spot, or is your link like http://www.example.com/abcd?p=147?
If you give an example of a full URL that includes one of your blocked dynamic URLs, we can take a better look. If your robots.txt is set up correctly, it shouldn't find that stuff, but give us more info if you're able.
Related Questions
-
Moz Pro OnDemand Crawl fails on WordPress site
Hello, I just can't seem to understand why OnDemand Crawl fails on further attempts, crawling only 4 pages out of 68. I am using WordPress with the Divi theme on a LiteSpeed server. Robots.txt allows rogerbot; I just can't seem to find the issue.
Moz Bar | ChrisSanClaire
-
I added a privacy policy link to my footer and now Moz is showing thousands of 4xx errors
My website didn't have a privacy policy, so I added one and put the link in the footer menu. When I did this, Moz came back telling me that there are a lot of new errors on the site. Is this a bad thing? Do I need to address it?
Moz Bar | elisa17591
-
Site crawl warning - concatenated URLs from WordPress
I could use some help on how to fix this. I asked at the walkthrough but was told it was a WordPress issue, but so far I can't find anything to point me in the right direction. There are no errors in the files on the server side, and I have asked my hosting company too. I am hoping someone here may be able to shed some light on it. One of my websites is giving 404 errors on links that are formed as below, and there are over 12.7K of them! Example: <mydomainurl>/www.instagram.com/www.instagram.com/<instagram-username>. The link that relates to my website is valid and working, but I don't understand the rest. I am totally stumped on how to move forward with this. Any advice, suggestions, or tips on how to fix these errors and stop these types of links getting generated? Thanks.
Moz Bar | emercarr
-
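A likely cause for URL patterns like the one above (an assumption, not a confirmed diagnosis of this site): an href written without a scheme, e.g. href="www.instagram.com/someuser", is treated as a relative path and resolved against the page it appears on. A quick Python sketch of that resolution, using hypothetical hostnames:

```python
from urllib.parse import urljoin

# An href missing "https://" is a relative reference, so it is resolved
# against the current page, producing the broken concatenated URL.
page = "https://mydomain.example/some-page/"
bad_href = "www.instagram.com/someuser"       # missing scheme (hypothetical)
good_href = "https://www.instagram.com/someuser"

print(urljoin(page, bad_href))   # https://mydomain.example/some-page/www.instagram.com/someuser
print(urljoin(page, good_href))  # https://www.instagram.com/someuser
```

If this is the cause, searching the theme or plugin templates for social links that lack "https://" would be the place to start.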
How do I cancel a crawl request?
I was farting around and exploring the Crawl Test tool, and accidentally sent out a crawl for a competitor's site (I wanted to see if the tool would decline to crawl without verification). I do NOT want to actually crawl that site, nor do I want the competitor to see that we requested it (for obvious reasons) - how do I cancel it?
Moz Bar | mkbeesto
-
Weird 404 in Crawl Diagnostics
I'm getting a lot of 404 errors (196 to be precise), but their pattern is weird.
The page that the crawler is trying to find is (e.g.):
http://www.oorbo.com/item/asufa-israeli-design-shop/www.oorbo.com
The linking page is http://www.oorbo.com/item/asufa-israeli-design-shop, meaning it adds the root URL, /www.oorbo.com, to the end of the link. This happens in all 196 cases: it tries to find a page http://www.oorbo.com/some-page/www.oorbo.com from a referrer page http://www.oorbo.com/some-page. Obviously these pages do not exist, and it's getting a 404. I've looked into the pages themselves and dug into their code, and it doesn't seem that the bad link is anywhere on the page. Did anyone come across this kind of issue? Can anyone point me to a solution?
Moz Bar | oorbo
-
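Since the bad segment reportedly doesn't appear in the page source, one way to narrow it down is to extract every href from a page and flag scheme-less ones that a crawler would resolve relative to the current URL. A hedged sketch using only the Python standard library (the sample HTML is made up):

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def suspicious_hrefs(html: str):
    """Return hrefs with no scheme and no leading '/', which crawlers
    resolve relative to the current page (e.g. 'www.oorbo.com')."""
    collector = HrefCollector()
    collector.feed(html)
    return [h for h in collector.hrefs
            if not h.startswith(("http://", "https://", "/", "#", "mailto:"))]

sample = '<a href="/item/ok">ok</a><a href="www.oorbo.com">bad</a>'
print(suspicious_hrefs(sample))  # ['www.oorbo.com']
```

Note this also flags ordinary relative links like "page.html", so treat the output as candidates to inspect, not definitive errors.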
Moz Crawler URL parameters & duplicate content
Hi all, this is my first post on Moz Q&A 🙂 Questions: Does the Moz Crawler take into account rel="canonical" for search results pages with sorting/filtering URL parameters? How much time does it take for an issue to disappear from the issues list after it's been corrected? Does it come up in the next weekly report? I'm asking because the crawler is reporting 50k+ pages crawled, when in reality this number should be closer to 1,000. All pages with query parameters have the correct canonical tag pointing to the root URL, so I'm wondering whether I need to noindex the other pages for the crawler to report correct data:
Original (canonical URL): DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas
Filter active URL: DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas&booking_date=&booking_days=1&booking_persons=1&priceFilter%5B%5D=0%2C500&includedPriceFilter%5B%5D=drinks-soft
Also, if noindex is the only solution, will it impact the ranking of the pages involved? Note: Google and Bing are semi-successful in reporting indexed page count, each reporting around 2.5k result pages when using the site:DOMAIN.com query. The rel canonical tag was missing for a short period about 4 weeks ago, but since fixing the issue these pages still haven't been deindexed. I appreciate any suggestions regarding the Moz Crawler and the Google/Bing index count!
Moz Bar | Vukan_Simic
-
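Not an answer to the reporting question, but a quick way to double-check that the canonical tag is actually served on the parameterized URLs is to fetch a few of them and parse out the rel="canonical" href. A standard-library sketch (the sample HTML and URL are placeholders modeled on the question):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Find the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            d = dict(attrs)
            if d.get("rel") == "canonical":
                self.canonical = d.get("href")

def get_canonical(html: str):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

page = '<html><head><link rel="canonical" href="https://DOMAIN.example/charters/search"></head></html>'
print(get_canonical(page))  # https://DOMAIN.example/charters/search
```

Running this against a handful of filtered URLs confirms whether each one points back to the intended canonical before reaching for noindex.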
I got a 404 in the Crawl Test Tool Report
Hi, yesterday I ran a crawl on http://www.everlastinggarden.nl and I got a 404. Does anybody know why this happens?
# ----------------------------------------
Crawl Test Tool Report | Moz, http://pro.seomoz.org/tools/crawl-test
www.everlastinggarden.nl
Report created: 15 Jul 18:34
# ----------------------------------------
URL,Time Crawled,Title Tag,Meta Description,HTTP Status Code,Referrer,Link Count,Content-Type Header,4XX (Client Error),5XX (Server Error),Title Missing or Empty,Duplicate Page Content,URLs with Duplicate Page Content (up to 5),Duplicate Page Title,URLs with Duplicate Title Tags (up to 5),Long URL,Overly-Dynamic URL,301 (Permanent Redirect),302 (Temporary Redirect),301/302 Target,Meta Refresh,Meta Refresh Target,Title Element Too Short,Title Element Too Long,Too Many On-Page Links,Missing Meta Description Tag,Search Engine blocked by robots.txt,Meta-robots Nofollow,Blocked by X-robots,X-Robots-Tag Header,Blocked by meta-robots,Meta Robots Tag,Rel Canonical,Rel-Canonical Target,Blocking All User Agents,Blocking Google,Blocking Yahoo,Blocking Bing,Internal Links,Linking Root Domains,External Links,Page Authority
http://www.everlastinggarden.nl,2014,404 : Received 404 (Not Found) error response for page.,Error attempting to request page
Best regards, Jos
Moz Bar | IMforYou