How to fix an issue with robots.txt?
-
I am receiving the following error message through Webmaster Tools:
http://www.sourcemarketingdirect.com/: Googlebot can't access your site
Oct 26, 2012
Over the last 24 hours, Googlebot encountered 35 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
The site has dropped out of Google search.
-
Hi Stacey
What plugins do you have running - any caching plugins such as the W3 Total Cache plugin?
Are you able to access your server's error logs to see if you can find anything there?
-
Thanks for your answer.
I have received this message from Google:
**http://www.sourcemarketingdirect.com/** using the Meta tag method (less than a minute ago). Your site's home page returns a status of 500 (Internal server error) instead of 200 (OK).
It looks like the permalink structure has changed but I'm not sure how.
-
I've seen several people ask this very same question over the last week in different forums. I am wondering if the major outages from Hurricane Sandy have affected several hosts or DNS providers.
Your robots.txt looks fine to me.
I'm guessing that you will completely recover once Google has a chance to fully crawl the site again.
-
Just a quick check: have you got WordPress set to be visible to search engines in the admin area? If not, it will be set to disallow Googlebot from crawling the site.
It is under Admin > Options > Privacy - select the appropriate option; when the site is blocked, the setting is noindex, nofollow.
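For reference, when WordPress is set to block search engines, it typically adds a robots meta tag like this to every page (exact output varies by WordPress version):

```
<meta name='robots' content='noindex,nofollow' />
```

and the virtual robots.txt it generates in that state is effectively:

```
User-agent: *
Disallow: /
```

So it's worth ruling this setting out before digging into the server.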
-
Thanks Matt.
There is no robots.txt as far as I can see. Is there a plugin I can use for WordPress?
The site was down for 2 days last month while the original host transferred the site over to me.
Right now a site: search says there are 13 pages indexed.
Just concerned that this site has always ranked number 1 for a company name search, and now it is not in the first 10 pages in Google.
-
Have you made sure your robots.txt is loading in your browser by adding /robots.txt after your domain, the same as a normal page, and can you see its contents? Has your site been down in this period? Did you change the contents of the file just before this issue started? Are you sure Googlebot hasn't come back since that date - what does your analytics say? Do a site: search for your domain to see if it is in Google's index.
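If there genuinely is no robots.txt file on the server, a minimal one you could drop in the site root (a sketch, assuming a standard WordPress install with nothing else that needs hiding) would be something like:

```
User-agent: *
Disallow: /wp-admin/
```

That said, Google reported a 500 error rather than a bad robots.txt, so the server error logs mentioned earlier are still the first place to look.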
Related Questions
-
How to fix non-crawlable pages affected by CSS modals?
I stumbled across something new when doing a site audit in SEMrush today: modals. The case: several pages could not be crawled because of (modal:) in the URL. What I know: "A modal is a dialog box/popup window that is displayed on top of the current page", based on CSS and JS. What I don't know: how to prevent crawlers from finding them.
Web Design | Dan-Louis
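If the modal URLs share a recognisable pattern (they all seem to contain (modal: according to the audit), one option is a wildcard disallow in robots.txt - a sketch, assuming Googlebot-style wildcard support and that the pattern really is consistent:

```
User-agent: *
Disallow: /*(modal:
```

A rel=canonical from each modal variant back to its parent page is the usual alternative if those URLs are picking up links.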
How can I fix New 4XX Issue on Site Crawl?
Hi all, My recent site crawl shows 27 4xx issues on this website http://www.rrbusinessconsultants.com/ All of them are for 'posts' on this WordPress website. Here is an example of the issue: http://www.rrbusinessconsultants.com/rr-business-consultants-on-the-rise-of-glassdoor-and-how-companies-are-coping/void(null) The blog page seems to be creating links ending in void(null), which are defaulting to 404 pages. I cannot see the links on the site, so cannot see how to remove them. Can anyone provide any insight into how to correct this issue? Many thanks in advance.
Web Design | skehoe
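A hedged guess at the mechanism: links ending in void(null) usually come from an anchor whose href is literally void(null) - for example a share or print button missing its javascript: prefix - which crawlers then resolve relative to the post URL. Roughly (openShareWindow is just a placeholder name):

```
<!-- this kind of markup yields /post-slug/void(null) when crawled -->
<a href="void(null)" onclick="openShareWindow()">Share</a>

<!-- crawl-safe alternative -->
<a href="#" onclick="openShareWindow(); return false;">Share</a>
```

Searching the theme and plugin files for void(null) should confirm whether that's the source.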
Block parent folder in robots.txt, but not children
Example: I want to block this URL (which shows up in Webmaster Tools as an error): http://www.siteurl.com/news/events-calendar/usa But not this: http://www.siteurl.com/news/events-calendar/usa/event-name
Web Design | Zuken
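Googlebot supports the $ end-of-URL anchor, so a rule like the one below blocks the exact /usa page while leaving /usa/event-name crawlable - a sketch; $ isn't part of the original robots.txt standard, so other crawlers may ignore it:

```
User-agent: *
Disallow: /news/events-calendar/usa$
```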
Bing Indexation and handling of X-ROBOTS tag or AngularJS
Hi MozCommunity, I have been tearing my hair out trying to figure out why Bing won't index a test site we're running. We're in the midst of upgrading one of our sites from archaic technology and infrastructure to a fully responsive version.
This new site is a fully AngularJS driven site. There are currently over 2 million pages, and as we're developing the new site in the backend, we would like to test out the tech with Google and Bing. We're looking at a pre-render option to be able to create static HTML snapshots of the pages that we care about the most, which will be available in the sitemap.xml.gz. However, we established 3 completely static HTML control pages: a page with no robots metatag, one with the robots NOINDEX metatag in the head section, and a third page with a dynamic header (X-Robots-Tag) carrying the NOINDEX directive as well. We expected the one without the meta tag to at least get indexed along with the homepage of the test site. In addition to those 3 control pages, we had 3 more pages: an internal search results page with the dynamic NOINDEX header, a listing page with no such header, and the homepage with no such header. With Google, the correct indexation occurred, with only 3 pages being indexed: the homepage, the listing page and the control page without the metatag. However, with Bing, there's nothing. No page indexed at all. Not even the flat static HTML page without any robots directive. I have a valid sitemap.xml file and a robots.txt open to all engines across all pages, yet nothing. I used the Fetch as Bingbot tool, the SEO Analyzer tool and the Preview Page tool within Bing Webmaster Tools, and they all show a preview of the requested pages, including the ones with the dynamic header asking it not to index those pages. I'm stumped. I don't know what to do next to understand whether Bing can accurately process dynamic headers or AngularJS content. Upon checking Bing Webmaster Tools, there's definitely been crawl activity, since it marked the XML sitemap as successful and put a 4 next to the number of crawled pages. Still no result when running a site: command, though. Google responded perfectly and understood exactly which pages to index and crawl. Has anyone else used dynamic headers or AngularJS who might be able to chime in, perhaps after running similar tests? Thanks in advance for your assistance.
Web Design | AU-SEO
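For anyone comparing notes, the two NOINDEX variants described in that test look roughly like this. The meta tag version sits in the rendered page source:

```
<meta name="robots" content="noindex">
```

while the dynamic version is sent as an HTTP response header instead:

```
X-Robots-Tag: noindex
```

Both should be equivalent for an engine that fetches and parses the page, which is what makes the Bing result so odd.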
Duplicate Title Issues using # anchor tags
Our homepage navigation uses anchor tags (?TabNumb=1#, ?TabNumb=1#, etc.) rather than directly linking to different pages, to decrease load time (and simplify the build process, I would imagine). These anchor links are showing up as duplicate titles in Moz. I am pretty sure that if I were to use noindex or rel tags, that could have a negative effect on my search results. Any way to tackle this outside of a complete redesign of the structure? http://www.dedoose.com/about-us/?TabNum=2# as an example
Web Design | sbnjl
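One common alternative to noindex here is a rel=canonical on the tab URLs pointing back at the clean page, so the duplicate titles consolidate without anything being blocked - a sketch, assuming each ?TabNum variant renders the same core content as its parent:

```
<link rel="canonical" href="http://www.dedoose.com/about-us/">
```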
Robots.txt being blocked
I think there is an issue with this website I'm working on. Here is the URL: http://brownieairservice.com/ In Google Webmaster Tools I am seeing this in the robots.txt Tester:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
Also, when I look at "blocked resources" in Webmaster Tools, this is showing as blocked: http://brownieairservice.com/wp-content/plugins/contact-form-7/includes/js/jquery.form.min.js?ver=3.51.0-2014.06.20 It looks like the form plugin is causing the issue, but I don't understand this. There are no site errors or URL errors, so I don't understand what this crawl delay means or how to fix it. Any input would be greatly appreciated. Thank you.
Web Design | SOM24
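A common way to clear the blocked-resource warning while keeping the plugins directory disallowed is to add explicit Allow rules for the asset types Google needs to render the page - a sketch; Googlebot treats the longest matching rule as the winner, but verify it in the robots.txt Tester before relying on it:

```
User-agent: *
Crawl-delay: 1
Allow: /wp-content/plugins/*.js
Allow: /wp-content/plugins/*.css
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
```

Note that Googlebot ignores the Crawl-delay line anyway, which is why the tester flags it; it isn't what's blocking the file.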
Redirects Not Working / Issue with Duplicate Page Titles
Hi all, We are being penalised in Webmaster Tools and Crawl Diagnostics for duplicate page titles and I'm not sure how to fix it. We recently switched from HTTP to HTTPS, but when we first switched over, we accidentally set a permanent redirect from HTTPS to HTTP for a week or so(!). We now have a permanent redirect going the other way, HTTP to HTTPS, and we also have canonical tags in place pointing to HTTPS. Unfortunately, it seems that because of this short time with the permanent redirect the wrong way round, Google is confused and sees our http and https sites as duplicate content. Is there any way to get Google to recognise this new (correct) permanent redirect and completely forget the old (incorrect) one? Any ideas welcome!
Web Design | HireSpace
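For reference, assuming an Apache host, a .htaccess rule like the one below is the usual way to force the single HTTP to HTTPS 301; once that is the only redirect in play and the canonical tags all point at the https URLs, Google normally resolves the duplication on its own after it recrawls:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```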
Has Anyone Had Issues With ASP.NET 4.0 URL Routing?
I'm seeing some odd results in my SEOMOZ results with a new site I just released that is using the ASP.NET 4.0 URL routing. I am seeing thousands(!) of duplicate results, for instance, because the crawl has uncovered something like this: http://www.mysite.com/
http://www.mysite.com/default.aspx (so far, so good, though I wish it wouldn't show both)
http://www.mysite.com/default.aspx/about/ (what the heck -?)
http://www.mysite.com/default.aspx/about/about/ (WTF!?)
http://www.mysite.com/default.aspx/about/about/products/ (and on and on ad infinitum) I'm also seeing problems pop up in my sitemap because extensionless URLs have an odd "eurl.axd/abunchofnumbersgohere" appended to the end of every address, which is breaking links. Sigh. Buyer beware. I've found articles that discuss the "eurl.axd" issue here and there (this one seems very good), but nothing about the weird crawl issue I outlined above. Any advice?
Web Design | TroyCarlson