Website blocked by Robots.txt in OSE
-
When viewing my client's website in OSE under the Top Pages tab, it shows that ALL pages are blocked by robots.txt. This is extremely concerning because Google Webmaster Tools shows that all pages are indexed and OK - no crawl errors, no messages, nothing. I did a "site:website.com" search in Google and all of the pages of the website came back.
Any thoughts? Where is OSE picking up this signal? I can't find a robots meta tag blocking crawlers in the code or anything.
-
Have you looked at your robots.txt file to see if you are blocking specific bots? Visit yoursite.com/robots.txt and check whether you have something like this:

User-agent: [example]
Disallow: /

You may also have a separate rule that explicitly allows Googlebot to crawl the site:

User-agent: googlebot
Allow: /
-
Thanks for responding - I did, and I noticed that we are blocking a bunch of other spiders, including the one that crawls for OSE. So that explains why it cannot retrieve the data.
Again, thanks.
-
No worries - glad to help!
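For anyone hitting this later: you can test which bots a given robots.txt blocks with Python's standard urllib.robotparser, without guessing at the matching rules by eye. The rules and URL below are illustrative (rogerbot is the user-agent Moz's crawler identifies as; substitute your own site's actual robots.txt contents):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks Moz's crawler but allows Googlebot
rules = """\
User-agent: rogerbot
Disallow: /

User-agent: googlebot
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# rogerbot matches the Disallow: / group, so every URL is blocked
print(parser.can_fetch("rogerbot", "https://example.com/page"))   # False
# googlebot matches its own Allow: / group
print(parser.can_fetch("googlebot", "https://example.com/page"))  # True
```

Note that with no `User-agent: *` group, any bot not named in the file defaults to allowed, which is why Google could index everything while OSE saw the whole site as blocked.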