.htaccess problem causing 605 Error?
-
I'm working on a site; it's just a few HTML pages, and I've added a WordPress blog. I've just noticed that Moz is giving me the following error with reference to http://website.com (Webmaster Tools is set to show the www subdomain, so there it appears OK):
Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
Here's the code from my .htaccess; is this causing the problem?
RewriteEngine on
Options +FollowSymLinks

# Redirect requests for /index.html or /index.php to the bare directory URL
RewriteCond %{THE_REQUEST} ^.*/index.html
RewriteRule ^(.*)index.html$ http://www.website.com/$1 [R=301,L]
RewriteCond %{THE_REQUEST} ^.*/index.php
RewriteRule ^(.*)index.php$ http://www.website.com/$1 [R=301,L]

# Redirect the bare domain to the www subdomain
RewriteCond %{HTTP_HOST} ^website.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Thanks for any advice you can offer!
-
Hey Matt Antonio!
I just wanted to clarify that this error isn't specific to the robots.txt file; it can also indicate that we are being blocked by an X-Robots-Tag HTTP header or a meta robots tag. Usually this error does indicate an actual issue with the site we are crawling rather than with our crawler.
The other Q&A post you mentioned is definitely an exception to that rule, but that issue was resolved in August 2014 and has not occurred again.
I hope that clears things up a bit. We are always happy to look into the specific issue causing the crawl error with a site, so I do agree that contacting the help team for these types of issues is often a good idea.
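If you'd like to rule out the header and meta tag yourself, here's a minimal Python sketch (the URL is just a placeholder for whichever page is reporting the error) that fetches a page and prints the X-Robots-Tag response header plus any meta robots tag it finds:

import re
import requests

# Placeholder URL: substitute the page that is reporting the 605 error.
url = "http://www.website.com/"
response = requests.get(url, timeout=10)

# X-Robots-Tag is sent as an HTTP response header, so it never shows up in the HTML source.
print("X-Robots-Tag header:", response.headers.get("X-Robots-Tag", "not set"))

# A meta robots tag lives in the HTML head; a simple regex is enough for a quick spot check.
meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', response.text, re.IGNORECASE)
print("Meta robots tag:", meta.group(0) if meta else "not found")

If either of those comes back with something like "noindex" or a rule aimed at our crawler, that alone could explain the 605 even with a clean robots.txt.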
Thanks,
Chiaryn
-
Hi Stevie-G,
I just took a look at your campaign and I am actually getting a 300 HTTP response for your robots.txt file in the browser and in a cURL request from our crawler: http://www.screencast.com/t/UjiIU0MD
The only responses we can accept from the robots.txt file as allowing access to your site are 200 and 404. (301s are also okay if the target URL resolves as a 200 or 404.) Any other HTTP response is treated as denying access to your site, so we aren't able to crawl the site because of the 300 response code we receive for the robots.txt file.
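If you want to see the raw response code for yourself before or after making changes, a quick sketch like this (Python, with the hostname as a placeholder) shows the status without following redirects, which is essentially what our crawler sees first:

import requests

# Placeholder hostname: substitute your own domain.
robots_url = "http://www.website.com/robots.txt"

# allow_redirects=False reports the first response (e.g. a 300 or 301) rather than
# the status of whatever URL a redirect might eventually land on.
response = requests.get(robots_url, allow_redirects=False, timeout=10)
print("robots.txt status:", response.status_code)

if response.status_code in (200, 404):
    print("Treated as allowing access.")
elif response.status_code == 301:
    print("Redirect; okay only if the target resolves as 200 or 404:",
          response.headers.get("Location"))
else:
    print("Any other response (such as a 300) is treated as denying access.")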
I hope this helps!
Chiaryn
Help Team Sensei
-
This has happened before, but in that case they seemed to be blocking Roger:
I'm not sure if that's the current issue, but if your actual /robots.txt file isn't blocking rogerbot, I can't imagine why you'd pull a 605, short of a technical Moz issue. You may want to contact support and direct them here to see if it's a similar issue.
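In the meantime, if you want to double-check whether the robots.txt file itself is blocking rogerbot, Python's standard-library parser can tell you (again, the hostname below is just a placeholder):

from urllib.robotparser import RobotFileParser

# Placeholder hostname: substitute your own domain.
parser = RobotFileParser()
parser.set_url("http://www.website.com/robots.txt")
parser.read()  # fetches and parses the robots.txt file

# Compare what rogerbot is allowed to do with the default rules for all crawlers.
for agent in ("rogerbot", "*"):
    allowed = parser.can_fetch(agent, "http://www.website.com/")
    print(agent, "is", "allowed" if allowed else "blocked")

If rogerbot comes back blocked while "*" doesn't, a crawler-specific disallow rule is the likely culprit.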
Related Questions
-
How do I fix this fatal error message?
I am trying to remove some robots.txt code I put in my root domain a while back because I didn't know what I was doing. Every time I enter my domain (domain.com/robots.txt) I get a fatal error message. How do I fix this fatal error message?
Technical SEO | | icebergsal0 -
Schema Markup Errors - Priority or Not?
Greetings All... I've been digging through the Search Console on a few of my sites and I've been noticing quite a few structured data errors. Most of the errors are related to hcard, hentry, and hatom. Most of them are missing author and entry-title, while the other is missing fn. I recently saw an article on SEL about Google's focus on spammy mark-up. The sites I use are built and managed by vendors, so I would have to impress upon them the impact of these errors and have them prioritize, then fix them. My question is whether or not this should be prioritized. Should I have them correct these errors sooner rather than later, or can I take a phased approach? I haven't noticed any loss in traffic or anything like that; I'm more focused on what negative impact a "phased approach" could have. Any thoughts?
Technical SEO | | AfroSEO0 -
Is My Boilerplate Product Description Causing Duplicate Content Issues?
I have an e-commerce store with 20,000+ one-of-a-kind products. We only have one of each product, and once a product is sold we will never restock it. So I really have no intention to have these product pages showing up in SERPs. Each product has a boilerplate description that the product's unique attributes (style, color, size) are plugged into. But a few sentences of the description are exactly the same across all products. Google Webmaster Tools doesn't report any duplicate content. My Moz Crawl Report shows 29 of these products as having duplicate content. But a Google search using the site operator and some text from the boilerplate description turns up 16,400 product pages from my site. Could this duplicate content be hurting my SERPs for other pages on the site that I am trying to rank? As I said, I'm not concerned about ranking for these product pages. Should I make them "rel=canonical" to their respective product categories? Or use "noindex, follow" on every product? Or should I not worry about it?
Technical SEO | | znagle0 -
Determining the Cause of a Penalty
I received a link removal request from a site who said that they were penalized. I confirmed that they were #1 for the competitive keyword phrase that is also their domain name and now they are #10. Here are some things I noticed about the site: Over 2,500 linking domains. Dozens of high quality linking domains like Huffington Post and Mashable. Some off-topic guest post links, e.g. on an SEO site. Guest post anchor text was usually their site name, which is an exact-match domain. Lots of top 100 resource pages that received good organic links. Infographics with links using their domain name as the anchor text. Relatively few spammy links according to Open Site Explorer. Overall their site's links were engineered but using tactics that most would consider "white hat." I don't think they violated any Google Webmaster Guidelines. Why were they penalized? What do you think?
Technical SEO | | ProjectLabs0 -
Massive Increase in 404 Errors in GWT
Last June, we transitioned our site to the Magento platform. When we did so, we naturally got an increase in 404 errors for URLs that were not redirected (for a variety of reasons: we hadn't carried the product for years, Google no longer got the same string when it did a "search" on the site, etc.). We knew these would be there and were completely fine with them. We also got many 404s due to the way Magento had implemented their site map (putting in products that were not visible to customers, including all the different file paths to get to a product even though we use a flat structure, etc.). These were frustrating but we did custom work on the site map and let Google resolve those many, many 404s on its own. Sure enough, a few months went by and GWT started to clear out the 404s. All the poor, nonexistent links from the site map and missing links from the old site - they started disappearing from the crawl notices and we slowly went from some 20k 404s to 4k 404s. Still a lot, but we were getting there. Then, in the last 2 weeks, all of those links started showing up again in GWT and reporting as 404s. Now we have 38k 404s (way more than ever reported). I confirmed that these bad links are not showing up in our site map or anything and I'm really not sure how Google found these again. I know, in general, these 404s don't hurt our site. But it just seems so odd. Is there any chance Google bots just randomly crawled a big ol' list of outdated links it hadn't tried for a while? And does anyone have any advice for clearing them out?
Technical SEO | | Marketing.SCG0 -
How to use internal tracking without causing duplicate content issues
Hi, We've been testing internal tracking for 4 weeks on a couple of pages using the basic string ?internalcampaign=X, but these pages have started appearing in the search results. We don't currently have the facility to add canonical tags to correct this. Does anyone have any other solutions to this problem other than deleting the internal tracking or adding filters on the server? Thanks!
Technical SEO | | NSJ780 -
URL rewriting causing problems
Hi, I am having problems with my URL rewriting to create SEO-friendly / user-friendly URLs. I hope you follow me as I try to explain what is happening... Since the creation of my rewrite rule I have been getting lots of errors in my SEOmoz report and Google WMT reports due to duplicate content, titles, descriptions, etc. For example, for a product detail page, instead of a URL parameter it creates a user-friendly URL of mydomain.com/games-playstation-vita-psp/B0054QAS. However, in the Google index there is also the following friendly URL, which is the same page and which I would like to remove: domain.com/games-playstation-vita/B0054QAS. The key to the rewrite on the above URLs is the /B0054QAS appended at the end; this tells the script which product to load. The details preceding it could in effect be rubbish, i.e. domain.com/a-load-of-rubbish/B0054QAS would still bring back the same page as above. What is the best way of resolving the duplicate URLs that are currently in the Google index and causing problems? The same issue is causing quite a serious 5XX error on one of the generated URLs, http://www.mydomain.com/retailersname/1; if I click on the link it does work, it takes you to the retailer's site, but again it is the number appended at the end that is the key; the retailersname part is just there for user-friendly search reasons. How can I block this or remove it from the results? Hope you are still with me and can shed some light on these issues. Many thanks
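One approach worth considering is to have the product script compare the requested path with the product's canonical path and issue a 301 when they differ, so every "rubbish" variant consolidates onto a single URL. A rough Python sketch of the idea (get_canonical_slug is a made-up placeholder for however the script looks up a product's preferred slug):

# Rough sketch: redirect any non-canonical slug to the product's preferred path.
def canonical_redirect(request_path, product_code, get_canonical_slug):
    """Return the 301 target if the requested path is not canonical, else None."""
    canonical_path = "/{}/{}".format(get_canonical_slug(product_code), product_code)
    if request_path != canonical_path:
        return canonical_path  # send a 301 redirect to this path
    return None  # already canonical; serve the page normally

# Example: the old variant consolidates onto the preferred slug.
target = canonical_redirect("/games-playstation-vita/B0054QAS", "B0054QAS",
                            lambda code: "games-playstation-vita-psp")
print(target)  # /games-playstation-vita-psp/B0054QAS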
Technical SEO | | ocelot0 -
How is my competition causing bad crawl errors and links on my site
We have a competitor with whom we are in a legal dispute at the moment, and they are using underhand tactics to cause bad links and crawl errors on our site, and I do not know how they are doing it or how to stop it. The crawl errors we are getting show the site having two URLs joined together, for example www.testsite.com/www.testsite.com, and other errors are pages that we do not even have, pages that are spelt wrong, or pages that have a dot after the page name. We have been told by a number of people in our field that this has also happened to them, and I would like to know how they are doing it so we can have this stopped. Since they have been doing this our traffic has gone down by half.
Technical SEO | | ClaireH-1848860