Robots.txt | any SEO advantage to having one vs not having one?
-
Neither of my sites has a robots.txt file. I guess I have never been bothered by any particular bot enough to exclude it.
Is there any SEO advantage to having one anyway?
-
It's good practice, especially if you run a CMS that can generate accessible URLs that cause duplicate content problems, create "junk" pages, etc. For example: http://www.asos.com/robots.txt
Google dislikes search results pages being indexed, so you can block those off, e.g. http://moz.com/robots.txt
You can disallow the archive.org bot if you don't want old versions of your site appearing in the Wayback Machine, and as others have said you can point to your XML sitemap.
It's not a bad resource to have at your disposal for site hygiene / maintenance reasons, but it's not an absolute necessity either.
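To illustrate the rules described above, here's a minimal sketch of such a file. The paths, the sitemap URL, and the `ia_archiver` user-agent (the token conventionally honoured by the Wayback Machine) are placeholders; substitute your own URLs and check which agent your target crawler actually uses:

```
# Keep internal search results pages out of the index
User-agent: *
Disallow: /search/

# Ask archive.org's crawler not to archive the site
User-agent: ia_archiver
Disallow: /

# Point crawlers at the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt only requests that compliant crawlers skip those URLs; it doesn't remove pages that are already indexed.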
-
There are actually a couple of good reasons, but in short, it's best practice, so it won't hurt to add one. It won't take more than a couple of minutes.
-
Just good practice. One SEO advantage would be to include a reference to your sitemap within the robots.txt file.
Aside from that, if you want all of your pages crawled and don't have a sitemap (although you should), there's no need for a robots.txt file.
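If you do add one in that situation, a sketch of the most permissive possible file would look like this (the sitemap URL is a placeholder):

```
# An empty Disallow value permits crawling of everything
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

The empty `Disallow:` line explicitly allows all crawling, while the `Sitemap:` line gives you the one concrete SEO benefit mentioned above.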