Blocked by Meta Robots.
-
Hi,
I get this warning in my reporting:
- Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots.
What does that mean, and how do I solve it? I'm using WordPress as my website engine.
Also, about rel=canonical: on which page should I put this tag, the original page or the copy page?
Thanks for all of your answers; it would mean a lot.
-
There are WordPress plugins you can use to modify your robots.txt; WordPress makes it difficult to edit the file directly.
http://yoast.com/example-robots-txt-wordpress/
Also, check whether it is an important page for your blog. Google is just being proactive on your behalf; it might be a page that is irrelevant to your overall plan.
-
Actually, it would not be the meta robots noindex causing this. The meta tag does not prevent Google from crawling the page it is on; if it did, Google would not be able to crawl the page and therefore could not read the tag :-). The meta robots tag tells Google to remove the page from the index, and it is very effective for that application.
That said, the GWT warning is probably related to your robots.txt file, located at
http://www.yourdomain.ext/robots.txt
Put that in your browser and see if any of your files or pages are Disallowed in that file. If that is the case, Google will not be able to spider the page in the first place, let alone read its meta tags. Do some searching on Google for how robots.txt works; Moz obviously has a guide:
http://moz.com/learn/seo/robotstxt
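For reference, a rule in robots.txt that would trigger this kind of blocking looks something like the following (the path here is just a made-up example, not something from your site):

```text
User-agent: *
Disallow: /blocked-page/
```

Any URL whose path starts with /blocked-page/ would then be off-limits to compliant crawlers, and they would never see that page's meta tags at all.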
Here is a video on how to use WordPress and robots.txt. It may or may not relate to your config, but it will show a plugin that you can use to make adjustments:
http://www.youtube.com/watch?v=JY9A5OqHTvw
Once you understand how the file works, you can figure out what you need to update. Get with your IT person or whoever administers your site.
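If you want to check programmatically whether a given URL is blocked by a robots.txt rule, Python's standard library includes a parser for exactly this. A quick sketch; the domain and paths below are placeholders, not your actual site:

```python
from urllib.robotparser import RobotFileParser

# Parse robots.txt rules directly from a list of lines. In practice you
# could call set_url() and read() to fetch the live file from
# http://www.yourdomain.ext/robots.txt instead.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /blocked-page/",
])

# Googlebot falls under the wildcard group here, so this page is blocked.
print(rp.can_fetch("Googlebot", "http://www.yourdomain.ext/blocked-page/"))  # False
print(rp.can_fetch("Googlebot", "http://www.yourdomain.ext/about/"))         # True
```

This is handy for spot-checking a handful of URLs before you go hunting through plugin settings.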
-
It means there is a meta tag on the page that is blocking it from being indexed. Look in the head section of the page for a tag like <meta name="robots" content="noindex">. Remove this and you should be good to go. Check your WordPress settings; sometimes these tags are assigned to pages by default (for example, Settings > Reading > "Discourage search engines from indexing this site"). You could also install an SEO plugin to help manage the meta robots tags.
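To illustrate both points raised in this thread: the meta robots tag is what to look for (and remove if you want the page indexed), and rel=canonical belongs in the head of the duplicate/copy page, pointing at the original. A hypothetical example; the URL is a placeholder:

```html
<head>
  <!-- This tag keeps the page out of the index; remove it if the page should rank. -->
  <meta name="robots" content="noindex, nofollow">

  <!-- On a duplicate/copy page, point the canonical at the original URL. -->
  <link rel="canonical" href="http://www.yourdomain.ext/original-page/">
</head>
```

You would never want both of these on the same page in practice; they are shown together here only so you can see where each one goes.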