Robots.txt Blocking - Best Practices
-
Hi All,
We have a web provider who's not willing to remove the wildcard rule blocking all agents from crawling our client's site (User-agent: * followed by Disallow: /). They have other lines allowing certain bots to crawl the site, but we're wondering if they're missing out on organic traffic by having this main blocking rule. It's also a pain because we're unable to set up Moz Pro, potentially because of this first rule.
We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files?
Thanks!
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
Crawl-delay: 5

User-agent: Yahoo-slurp
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
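As a quick sanity check on who that file actually blocks, Python's built-in robotparser can replay the rules. This is just a sketch against a trimmed copy of the file; example.com is a stand-in for the client's domain:

from urllib.robotparser import RobotFileParser

# Trimmed copy of the rules above (the live file has more groups).
robots_txt = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:

User-agent: rogerbot
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Bots with their own group get the empty Disallow, i.e. full access...
print(rp.can_fetch("Googlebot", "https://example.com/"))      # True
print(rp.can_fetch("rogerbot", "https://example.com/"))       # True
# ...while anything without its own group falls through to the wildcard block.
print(rp.can_fetch("SomeOtherBot", "https://example.com/"))   # False

So under standard group-precedence rules the named bots can still crawl; whether a given tool resolves the groups the same way depends on its parser, which may be why Moz Pro trips on that first rule.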
-
Thanks for taking the time to respond in depth, GreenStone. We appreciate the advice and have passed your response along to the web hosting company (along with a frustrated email) explaining they're not adhering to anyone's best practices. Hopefully this will convince them!
-
Thanks, Dmitrii, for your response! In our research we've seen similar recommendations, and it helps to have more evidence to back them up. Hopefully these guys will give in a bit!
-
Completely agree; I really wouldn't want to host my stuff with a company that can't figure out what the best practices really are ;-). This lays out very well why you shouldn't want to set up your robots.txt the way it is right now.
-
In general, I definitely wouldn't recommend the way the web provider is handling this.
- Disallowing everything and then carving out exceptions should never be the norm. Best practice is generally the reverse: allow everyone to crawl, and add exceptions for specific crawlers as needed.
- It makes a lot more sense to give crawlers full access, add crawl delays for the non-Google crawlers, and disallow only those specific paths (see the sketch below):
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
- The "Crawl-delay: 5" under Googlebot does you no good, as Google does not obey that directive. Googlebot's crawl rate can only be adjusted through Search Console.
- You can test what is visible to Googlebot in Search Console's robots.txt Tester, in order to verify what it can access.
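To make that concrete, here's a rough sketch of what the file could look like instead (same path list as above; the 5-second delay is just carried over from the current file). Keep in mind that a crawler follows only the single most specific User-agent group that matches it, so the disallow list has to be repeated in each group rather than inherited:

# Googlebot: full access apart from the excluded paths.
# No Crawl-delay here - Google ignores that directive anyway.
User-agent: Googlebot
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp

# Everyone else: the same exclusions, plus a delay for bots that honor it.
User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp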
-
Here is another video from Matt - https://www.youtube.com/watch?v=I2giR-WKUfY
Lots of good points there too.
-
Hi.
Super weird web provider - that's for sure.

User-agent: *
Disallow: /

Every bot without its own User-agent group will be blocked off! How in the world are they ranking?

Watch that video; there are good ideas in it about controlling bots and crawlers, and you can treat those as best practices. And yes, what they have now is ridiculous.
https://mza.seotoolninja.com/community/q/should-we-use-google-s-crawl-delay-setting
Here is a Q&A about crawl delays. As far as I know, Google ignores the crawl-delay directive anyway, and there's not much good about it in any case.
Hope this helps.