Search engine blocked by robots.txt: crawl error reported by Moz & GWT
-
Hello everyone,
For my site I am getting Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag. Google Webmaster Tools is also unable to fetch my site. My site is tajsigma.com.
Can any expert help, please?
Thanks
-
When was your last crawl date in Google Webmaster Tools/Search Console? It may be that your site was crawled while there was a temporary problem with the robots.txt and hasn't been re-crawled since.
-
Yes, exactly.
That is what I am worried about too. Can you please help identify the problem with my site?
Thanks
-
That's very strange. The robots.txt looks fine, but here's what I see when I search for your site on Google.
-
Headers look fine, and as you correctly said, your robots.txt and meta robots tags are also OK.
I have also noted that a site:www.tajsigma.com search in Google returns pages for your site, so again that shows it is being crawled and indexed.
To all intents and purposes it looks OK to me. Someone else may be able to shed more light on the issue if they have experienced this error to this degree.
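One thing worth double-checking even when robots.txt and the meta tags look fine: the 605 error also covers the X-Robots-Tag HTTP header, which never appears in the page source (you can view the raw headers with `curl -I http://www.tajsigma.com/`). A minimal sketch of interpreting the header value; the `blocks_indexing` helper is made up for illustration, but `noindex` and `none` are the standard directives that keep a page out of the index:

```python
def blocks_indexing(x_robots_tag):
    """Return True if an X-Robots-Tag header value would keep a page out of the index."""
    if not x_robots_tag:
        return False  # no header at all means no restriction
    # Directives are comma-separated and case-insensitive, e.g. "noindex, nofollow"
    directives = {d.strip().lower() for d in x_robots_tag.split(",")}
    return bool(directives & {"noindex", "none"})

print(blocks_indexing(None))                 # False: header absent, page is indexable
print(blocks_indexing("noindex, nofollow"))  # True: this alone would trigger a 605-style block
```

If this returns True for any template on the site, the header is usually being added by the web server or CMS config rather than the page itself.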
-
www.tajsigma.com is the domain returning the 605 robots error code.
-
Just to check: is that the live robots.txt, or just a test in GSC? Can you send a link to your site, maybe in a PM?
-
Yes, it is also OK there; see the attached screenshot.
-
Have you tried testing your robots.txt file in Google Search Console?
If you are allowing everything, I would suggest simply removing your robots.txt altogether so crawlers default to crawling everything.
-
Thanks, Tim Holmes, for your quick reply.
But my robots.txt file is:

```
User-agent: *
Allow: /
```

I have also added the meta tag to all pages, and even after that the page is not getting fetched or crawled by GWT.
Thanks
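It is worth confirming what that meta tag on the live pages actually says; if it contains `noindex` rather than `index, follow`, that alone would explain the 605. A quick stdlib-only sketch for pulling the robots meta tag out of a page's HTML (the sample markup below is hypothetical; paste in the real page source to check the live site):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Record the content attribute of <meta name="robots" ...>, if present."""
    def __init__(self):
        super().__init__()
        self.content = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.content = a.get("content", "")

page = '<html><head><meta name="robots" content="noindex, nofollow"></head><body></body></html>'
finder = RobotsMetaFinder()
finder.feed(page)
print(finder.content)  # noindex, nofollow -- this value would block indexing
```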
-
Hello Falguni,
I believe the error is telling you pretty much everything you need to know: your robots.txt file or robots meta tag would appear to be blocking your site from being crawled.
Have you checked the robots.txt file in your root? Type in http://www.yourdomain.com/robots.txt
To ensure your site is being crawled and robots have complete access, the following should be in place:

```
User-agent: *
Disallow:
```

To exclude all robots from the entire server (which would produce exactly this error), the file would instead read:

```
User-agent: *
Disallow: /
```

If it is a meta tag causing the issue, you will require `<meta name="robots" content="index, follow">`, or have it removed to default to the same behaviour, as opposed to combinations such as noindex or nofollow, which could result in some areas not being indexed or crawled.
Hope that helps,
Tim
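The single `/` is the entire difference between the two robots.txt examples above, and it is easy to verify with Python's standard-library robots.txt parser (a sketch; the `allowed` helper and example URLs are placeholders, not part of the thread):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt, url, agent="Googlebot"):
    """Parse a robots.txt body and report whether the agent may fetch the URL."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

open_rules = "User-agent: *\nDisallow:"      # empty Disallow: everything is crawlable
closed_rules = "User-agent: *\nDisallow: /"  # "Disallow: /" blocks the whole site

print(allowed(open_rules, "http://www.example.com/page"))    # True
print(allowed(closed_rules, "http://www.example.com/page"))  # False
```

Running the live contents of a site's robots.txt through a parser like this removes any guesswork about what crawlers are actually being told.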