How to write a robots.txt file to point to your sitemap
-
Good afternoon from still wet & humid Wetherby, UK...
I want to write a robots.txt file that instructs bots to index everything and gives a specific location for the sitemap. The sitemap URL is: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx
Is this correct:
User-agent: *
Disallow:
SITEMAP: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx
Any insight welcome!
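As a quick sanity check, Python's standard-library robots.txt parser can confirm that a file like this allows crawling of everything and exposes the sitemap (a minimal sketch; `site_maps()` requires Python 3.8+, and the example page URL is made up for illustration):

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow:
SITEMAP: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# An empty Disallow value means every URL is crawlable.
print(rp.can_fetch("*", "http://business.leedscityregion.gov.uk/any/page"))  # True

# The parser matches field names case-insensitively, so SITEMAP: works too.
print(rp.site_maps())
```

Note that the parser reads the sitemap directive even though it is written as SITEMAP rather than Sitemap, which supports the point that the capitalization is unlikely to matter.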
-
Thank you so much for all your replies
[CASE CLOSED] -
Ryan's answer is correct. I just wanted to jump in to say that I know from first-hand experience that Google and Bing are both able to read the sitemap file even if it has a different extension and even if you can't name it sitemap.xml.
-
Yes, your example is correct.
A great page for learning about robots.txt is: http://en.wikipedia.org/wiki/Robots_exclusion_standard#Sitemap
I will note that the official syntax for declaring your sitemap location capitalizes only the first letter (i.e. Sitemap, not SITEMAP), but I am almost certain the difference does not matter in practice.
A few other suggestions which are best practices but do not have to be followed:
-
use all lowercase letters in URLs
-
name the sitemap file "sitemap" not "GoogleSiteMap"
-
submit XML sitemaps when possible. Again, I am almost certain Google can read other formats, so if Google is all you care about then it's fine, but otherwise I would suggest sticking to .xml files.
example: business.leedscityregion.gov.uk/cmspages/sitemap.xml
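For reference, a minimal XML sitemap following the sitemaps.org protocol looks like the skeleton below (the URL and date are placeholders, not taken from the actual site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://business.leedscityregion.gov.uk/</loc>
    <lastmod>2012-07-01</lastmod>
  </url>
</urlset>
```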
Some other helpful links:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=183668