Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
-
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g., "http://site.com/blog/tag/some-product") to "noindex, follow" in order to keep the overall quality of your indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent them?
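For context, the "noindex, follow" setting I'm referring to is typically implemented with a meta robots tag in the head of each tag and category page, along these lines (a generic example, not markup from my actual site):

<meta name="robots" content="noindex, follow">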
-
In the situation outlined by the OP, these pages are noindexed, so there's no value in cluttering up crawl reports with them. Block rogerbot from non-critical parts of your site; if you do want to be alerted to issues on those pages, then don't.
-
Thanks. I'm not concerned about the crawl depth of the search engine bots; nothing in your fix would affect that. I'm curious about the decrease in the Moz crawler's depth on the site, since we use that crawl to spot issues.
One of the clients I implemented the fix on went from 4.6K crawled pages to 3.4K, and the fix was expected to remove roughly 1.2K pages.
The other client went from 5K to 3.7K, and the fix was expected to remove roughly 1.3K pages.
TL;DR: Good news, everybody! The robots.txt fix didn't reduce the Moz crawler's depth beyond the pages it was meant to exclude.
-
I agree. Unfortunately, Moz doesn't have an internal disallow feature that lets you tell them where rogerbot can and can't go. I haven't come across any issues with this approach; crawl depth by search engine bots will not be affected, since the user-agent is specified.
-
Thanks for the solution! We have been coming across a similar issue with some of our sites, and although I'm not a big fan of this type of workaround, I don't see any other options, and we want to focus on the real issues. You don't want to ignore the rule, in case other pages that should be indexed are marked noindex by mistake.
Logan, are you still getting the same crawl depth after making this type of fix? Have any other issues arisen from this approach?
Let us know!
-
Hi Nichole,
You're correct in noindexing these pages; they serve little to no value from an SEO perspective. Moz is always going to alert you to noindex tags when it finds them, since that tag is such a critical issue if it shows up in unexpected places. If you want to remove these issues from your crawl report, add the following directives to your robots.txt file. This will prevent Moz from crawling these URLs and therefore reporting on them:
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/

Edit: do not prevent all user-agents from crawling these URLs, as that will prevent search engines from seeing your noindex tag; they can't obey what they aren't permitted to see. If you want, once all tag and category pages have been removed from the index, you can update your robots.txt to remove the rogerbot directives and add the disallows for tag and category to the * user-agent.
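To illustrate that later stage, here's a sketch of what the updated robots.txt might look like once the tag and category pages have dropped out of the index (paths assume the same /tag/ and /category/ URL structure as above):

User-agent: *
Disallow: /tag/
Disallow: /category/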