Can I exclude pages from my Crawl Diagnostics?
-
Right now my Crawl Diagnostics data is being skewed because it includes the on-site search from my website. Is there a way to exclude certain pages, like search, from the errors and warnings in Crawl Diagnostics? My search pages are coming up as:
- Long URL
- Title Element Too Long
- Missing Meta Description
- Blocked by meta-robots (which is how I want it)
- Rel Canonical
Here is what the crawl diagnostic thinks my page URL looks like:
website.com/search/gutter%2525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252Bcleaning/
Thank you,
Jonathan
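The repeating %25 blocks in that URL are the signature of recursive percent-encoding: each pass re-encodes the "%" of the previous pass as "%25". A minimal sketch of the effect, assuming the original search term was something like "gutter cleaning" (the exact term is only a guess from the URL):

```python
from urllib.parse import quote

term = "gutter cleaning"  # hypothetical original search term
once = quote(term)        # 'gutter%20cleaning'
twice = quote(once)       # 'gutter%2520cleaning' -- the '%' became '%25'
thrice = quote(twice)     # 'gutter%252520cleaning'
print(once, twice, thrice)
```

Each additional pass inserts another "25" into every escape sequence, which is how a short search term balloons into a URL like the one above, typically because a search form keeps re-encoding its own already-encoded output.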
-
Thanks! That was easy.
-
You can do so in robots.txt. SEOmoz's bot is called rogerbot.
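For example, assuming the search pages all live under a /search/ path (as the URL above suggests), a robots.txt block along these lines would keep rogerbot out of them; adjust the path to match your site:

```
User-agent: rogerbot
Disallow: /search/
```

This only affects rogerbot; search engines will keep following whatever rules you already have in place for them.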
Related Questions
-
403 error but page is fine??
Hi, on my report I'm getting a 4xx error. When I look into it, it says there is a critical 403 error on this page: https://gaspipes.co.uk/contact-us/. I can get to the page and see it fine, but I have no idea why it's showing a 403 error or how to fix it. This is the only page the error is coming up on. Is there anything I can check or do to get this resolved? Thanks
Moz Pro | JU-Mark
Duplicate title on Crawl
OK, this should hopefully be a simple one. I'm not sure if this is a Moz crawl issue or a redirect issue. Moz is reporting a duplicate title for www.site.co.uk, site.co.uk, and www.site.co.uk/home.aspx. Is this a canonical change or a Moz setting I need to get this number lower?
Moz Pro | smartcow
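The usual fix for this pattern is a site-wide 301 redirect so that only one hostname and one homepage URL resolve. A hedged sketch for Apache mod_rewrite (the .aspx extension suggests the site actually runs IIS, where the URL Rewrite module does the same job; `site.co.uk` stands in for the real domain):

```apache
RewriteEngine On
# Send the bare domain to the www version
RewriteCond %{HTTP_HOST} ^site\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.site.co.uk/$1 [R=301,L]
# Collapse /home.aspx onto the root URL
RewriteRule ^home\.aspx$ http://www.site.co.uk/ [R=301,L]
```

Once both redirects are in place, the three URLs consolidate to one, and the duplicate title warning should clear on the next crawl.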
Functionality of SEOmoz crawl page reports
I am trying to find a way to ask SEOmoz staff to answer this question, because I think it is a functionality question, so I checked the SEOmoz Pro resources. I have also had no responses to it in the forum, so here it is again. Thanks very much for your consideration! Is it possible to configure the SEOmoz rogerbot error-finding bot (which produces the Crawl Diagnostics reports) to obey the instructions in individual page headers and in the http://client.com/robots.txt file? For example, there is a page at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2007 that has <meta name="robots" content="noindex"> in the header. This page is the themed Quote of the Day page and is intentionally duplicated at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2004 and at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, but they all have <meta name="robots" content="noindex"> in them, so Google should not see them as duplicates, right? Google does not in Webmaster Tools. So it should not be counted three times? But it seems to be. How do we generate a report of the actual pages shown in the report as duplicates so we can check? We do not believe Google sees it as a duplicate page, but Roger appears to. Similarly, one can use http://truthbook.com/contemplative_prayer/ ; there the http://truthbook.com/robots.txt also tells Google to stay clear. Yet we are showing thousands of duplicate page content errors while Google Webmaster Tools, configured as described, shows only a few hundred. Anyone? Jim
Moz Pro | jimmyzig
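One point worth noting: noindex keeps a page out of the index, but a crawler can still fetch it and compare its content against other pages, which is why duplicate warnings can appear anyway. A rel=canonical pointing all the date variants at one preferred URL consolidates them. A sketch of the head markup, with the 2007 URL as a hypothetical choice of canonical target:

```html
<meta name="robots" content="noindex">
<link rel="canonical" href="http://truthbook.com/quotes/index.cfm?month=5&amp;day=14&amp;year=2007">
```

With the canonical in place, the duplicates are reported as intentional copies of one page rather than three competing pages.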
Why is my crawl STILL in progress?
I'm a bit new here, but we've had a few crawls done already. They are always finished by Wednesday night. Our website is not large (by any means), but the crawl still says it's in progress now 3 days later. What's the deal here?!?
Moz Pro | Kibin
If i have just started a campaign, and the crawl is happening - can i log off and shut down my PC
If I have just started a campaign and the crawl is happening, can I log off and shut down my PC, or will this cause the crawl to stop? My campaign was showing as "crawl in process". I then shut down and logged onto my SEOmoz account on another PC. When I did, the crawl didn't seem to be in process, even though the total time was still about 30 minutes from the crawl start. Sorry if this is a stupid question, I'm just a bit new to this.
Moz Pro | duff101
I have a Rel Canonical "notice" in my Crawl Diagnostics report. I'm presuming that means that the spider has detected a rel canonical tag and it is working as opposed to warning about an issue, is this correct?
I know this seems like a really dumb question, but the site I'm working on is a BigCommerce one and I've been concerned about canonicalisation issues prior to receiving this report (I'm an SEOmoz Pro newbie too!), and I just want to be clear I am reading this notice correctly. I presume this means the site crawl has detected the rel canonical tag on these pages and it is working correctly. Is this correct? Any input is much appreciated. Thanks
Moz Pro | seanpearse
Fixing the Too Many On-Page Links
In our campaign I see that it reported that some of our pages have too many on-page links, but I think most of the links seen by MozBot are related to our images. There are a lot of images on our site, and at the same time we support 11 languages, which adds additional links. One of the pages that has a lot of links is www.florahospitality.com/dining.aspx. What can you suggest to fix this? Thanks.
Moz Pro | shebinhassan
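Note that crawlers count anchor elements with an href, so plain images do not add to the link total; images only count when they are wrapped in a link. A quick local sanity check is easy to sketch; the HTML string here is hypothetical, not taken from the page above:

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count <a> tags that carry an href, the way a link report would."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

# Hypothetical fragment: two real links, one image, one bare anchor
html = '<a href="/en/"></a><img src="logo.png"><a href="/ar/"></a><a></a>'
counter = LinkCounter()
counter.feed(html)
print(counter.count)  # 2: the image and the bare <a> do not count
```

Running something like this against the page source shows whether the total really comes from language-switcher and navigation anchors rather than from the images themselves.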
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file: Disallow: /catalog/product_compare/ Yet Roger is crawling these pages, producing 1,357 errors. Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled: example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/ Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | MeltButterySpread
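One common cause worth checking: a crawler obeys only the most specific user-agent group that matches it, so if the robots.txt contains a `User-agent: rogerbot` group anywhere, rogerbot ignores the `User-agent: *` group entirely and the Disallow must be repeated. A sketch, assuming the rule currently lives only under the wildcard group:

```
User-agent: *
Disallow: /catalog/product_compare/

# If a rogerbot-specific group exists, rogerbot ignores the * group,
# so the same rule has to appear here as well
User-agent: rogerbot
Disallow: /catalog/product_compare/
```

If there is no rogerbot-specific group, the next things to verify are that the file is served at the site root and that the Disallow line has no typo in the path.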