1,200 pages nofollowed and blocked by robots on my site. Is that normal?
-
Hi,
I've got a bunch of notices saying almost 1,200 pages are nofollowed and blocked by robots.
They appear to be comments and other miscellaneous pages, not the actual domain or static content pages.
It still seems a little odd. The site is www.jobshadow.com. Any idea why I'd have all these notices?
Thanks!
-
Ok cool, yeah, I think it's mainly comment pages, and nofollowing them is standard in WordPress.
The "blocked by robots" notice threw me off a little, though. Obviously you'd want your comments to show up in the search engines as well.
Thanks to both of y'all for the help!
-
Hi,
Most of these notices relate to your comment system, not to the robots.txt file.
If "nofollow" is active, it tells search engine robots not to follow those links.
If that's the only issue and it was intentional, the notices can be ignored.
The example below shows that all of your comment links are set to "nofollow".
Example page (comment): http://www.jobshadow.com/interview-with-a-neurosurgeon/#comment-4168
The HTML source:
<span class="comment_time"><a href="#comment-4268" title="Permalink to this comment" rel="nofollow">April 29, 2012 at 4:19 pm</a></span>
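If you want to audit this yourself rather than read view-source by hand, a minimal sketch in Python (standard library only; the class name and sample markup are illustrative, not anything from WordPress itself) can list which anchors on a page carry rel="nofollow":

```python
from html.parser import HTMLParser

class NofollowCollector(HTMLParser):
    """Collects the href of every <a> tag whose rel attribute contains 'nofollow'."""
    def __init__(self):
        super().__init__()
        self.nofollow_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel can hold multiple space-separated tokens, e.g. "external nofollow"
        rel_tokens = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel_tokens:
            self.nofollow_links.append(attrs.get("href"))

# Sample markup similar to the comment snippet above
html = ('<span class="comment_time">'
        '<a href="#comment-4268" title="Permalink to this comment" '
        'rel="nofollow">April 29, 2012 at 4:19 pm</a></span>')

parser = NofollowCollector()
parser.feed(html)
print(parser.nofollow_links)  # ['#comment-4268']
```

Feed it the full page source and the length of `nofollow_links` tells you how many of those ~1,200 notices come from comment permalinks alone.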
-
Any examples of pages that are blocked? I don't see any pages being specifically blocked in your robots.txt: http://www.jobshadow.com/robots.txt
As for nofollow, it's perfectly normal to nofollow thousands of links if you get a lot of comments.
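It's worth keeping the two mechanisms distinct: a nofollow is an attribute on an individual link, while robots.txt blocking is a site-wide crawl directive. A page would only be reported as blocked if the robots.txt contained a matching rule, something like this hypothetical example (the directory name here is made up for illustration; nothing like it appears in jobshadow.com's actual robots.txt):

```
User-agent: *
Disallow: /some-private-directory/
```

Since no such Disallow rule covers your content, the "blocked" part of the notice most likely refers to the nofollow attributes on the comment links rather than to robots.txt.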