Robots.txt question
-
Hello,
What do the following directives mean?
User-agent: *
Allow: /
Do they mean that we are blocking all spiders? Is Allow supported in robots.txt?
Thanks
-
It's a good idea to have an XML sitemap and to make sure the search engines know where it is. It's part of the protocol that they will look in the robots.txt file for the location of your sitemap.
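If you want to double-check that a crawler-style client can actually discover the sitemap reference in a live robots.txt, here's a minimal sketch using Python's built-in urllib.robotparser (the domain is just a placeholder, not a real site from this thread):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# site_maps() (Python 3.8+) returns any "Sitemap:" URLs listed in the file,
# or None if the file doesn't declare one
print(rp.site_maps())
```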
-
I was assuming that by including / after Allow we were blocking the spiders, and I also thought that Allow was not supported by search engines.
Thanks for the clarification. Would a better approach be
User-agent: *
Allow:
right?
The best one, of course, is
User-agent: *
Disallow:
-
That's not really necessary unless there are URLs or directories you're disallowing after the Allow in your robots.txt. Allow is a directive supported by the major search engines, but search engines assume they're allowed to crawl everything they find unless you specifically disallow it in your robots.txt.
The following is universally accepted by bots and essentially means the same thing as what I think you're trying to say, allowing bots to crawl everything:
User-agent: *
Disallow:
There's a sample use of the Allow directive on the Wikipedia robots.txt page.
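If it helps to see that default-allow behaviour in action, here's a quick sketch using Python's standard urllib.robotparser to confirm that the empty Disallow above permits every URL (the URLs are placeholders for illustration):

```python
from urllib import robotparser

# The robots.txt recommended above: an empty Disallow means "allow everything"
robots_txt = """User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "https://www.example.com/"))                  # True
print(rp.can_fetch("Googlebot", "https://www.example.com/any/page"))  # True
```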
-
There's more information about robots.txt from SEOmoz at http://www.seomoz.org/learn-seo/robotstxt
SEOmoz and the robots.txt site suggest the following for allowing robots to see everything and listing your sitemap:
User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml
-
Any particular reason for doing so?
-
That robots.txt should be fine.
But you should also add your XML sitemap to the robots.txt file, for example:
User-agent: *
Allow: /
Sitemap: http://www.website.com/sitemap.xml
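As a sanity check before deploying, here's a small sketch with Python's standard urllib.robotparser confirming that this combined file both keeps everything crawlable and exposes the sitemap reference (the domain is the example one from the snippet above, not a real site):

```python
from urllib import robotparser

robots_txt = """User-agent: *
Allow: /
Sitemap: http://www.website.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Allow: / keeps every path crawlable for every user agent
print(rp.can_fetch("*", "http://www.website.com/some/deep/page.html"))  # True

# The Sitemap line is picked up as well (site_maps() requires Python 3.8+)
print(rp.site_maps())  # ['http://www.website.com/sitemap.xml']
```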