2 sitemaps on my robots.txt?
-
Hi,
I thought that I could only link one sitemap from my site's robots.txt, but... I may be wrong.
So, I need to confirm if this kind of implementation is right or wrong:
robots.txt for Magento Community and Enterprise
...
Sitemap: http://www.mysite.es/media/sitemap/es.xml
Sitemap: http://www.mysite.pt/media/sitemap/pt.xml
Thanks in advance,
-
We recently changed our protocol to https
We have in our robots.txt our new https sitemap link
Our agency is recommending we add another sitemap entry in our robots.txt file pointing to our insecure (http) sitemap while Google is reindexing our secure protocol. They recommend this as a way for all search engines to pick up on the 301 redirects and swap out unsecured results in the index more efficiently.
Do you agree with this?
I am in the camp that we should only have our https sitemap and Google will figure it out. Having two sitemaps in our robots.txt, one to our old http and one to our new https, is redundant and may be viewed as duplicate content, not as a positive that helps search engines see the 301s and reindex the secure links.
What's your thought? Let me know if I need to explain more.
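To make it concrete, here is roughly what the two options would look like in our robots.txt (example.com is just a placeholder for our domain):
# Option A - https sitemap only (my preference)
Sitemap: https://www.example.com/sitemap.xml
# Option B - the agency's recommendation, keep both while the migration is reindexed
Sitemap: https://www.example.com/sitemap.xml
Sitemap: http://www.example.com/sitemap.xml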
-
Well, if both sitemaps are for the same site then it's OK. But it's much better to also implement hreflang, as explained here: https://support.google.com/webmasters/answer/2620865?hl=en
I'm not sure whether Magento can do this out of the box, but you can always hire a third-party developer to build a plugin/module for it.
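For reference, hreflang can be declared inside the sitemaps themselves. A minimal sketch of what the es.xml sitemap could contain (the homepage URL is used only as a placeholder), with pt.xml carrying the mirror-image entry for the Portuguese URL:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.mysite.es/</loc>
    <!-- every URL lists all of its language alternates, including itself -->
    <xhtml:link rel="alternate" hreflang="es" href="http://www.mysite.es/" />
    <xhtml:link rel="alternate" hreflang="pt" href="http://www.mysite.pt/" />
  </url>
</urlset>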
-
OK, just one detail: these domains are for a multilingual site.
I mean, both have roughly the same content: one in Spanish and the other in Portuguese.
Thanks a lot.
-
You can also have multiple sitemaps, even hosted on third-party sites. Look at the Moz robots.txt:
Sitemap: https://mza.bundledseo.com/blog-sitemap.xml
Sitemap: https://mza.bundledseo.com/ugc-sitemap.xml
Sitemap: https://mza.bundledseo.com/profiles-sitemap.xml
Sitemap: http://d2eeipcrcdle6.cloudfront.net/past-videos.xml
Sitemap: http://app.wistia.com/sitemaps/36357.xml
Also the Google.com robots.txt:
Sitemap: http://www.gstatic.com/culturalinstitute/sitemaps/www_google_com_culturalinstitute/sitemap-index.xml
Sitemap: http://www.gstatic.com/dictionary/static/sitemaps/sitemap_index.xml
Sitemap: http://www.gstatic.com/earth/gallery/sitemaps/sitemap.xml
Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.gstatic.com/trends/websites/sitemaps/sitemapindex.xml
Sitemap: https://www.google.com/sitemap.xml
Also the Bing.com robots.txt:
Sitemap: http://cn.bing.com/dict/sitemap-index.xml
Sitemap: http://www.bing.com/offers/sitemap.xml
So using multiple sitemaps is OK, and they can also be hosted on a third-party server.
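If you prefer to keep a single line in robots.txt, you can also point it at a sitemap index file that lists the individual sitemaps. A minimal sketch, using the Moz sitemaps above purely as an example:
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://mza.bundledseo.com/blog-sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://mza.bundledseo.com/ugc-sitemap.xml</loc>
  </sitemap>
</sitemapindex>
Then robots.txt only needs one line, e.g. Sitemap: https://mza.bundledseo.com/sitemap_index.xml (the index file name here is hypothetical).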
-
Hello,
Yes, multiple sitemaps are okay, and sometimes even advised!
You can read Google's official response here: "...it's fine for multiple Sitemaps to live in the same directory (as many as you want!)..."
And you can see a case study here on Moz showing how multiple sitemaps have helped traffic.
Hope this helps,
Don
Related Questions
-
Blocking in Robots.txt and the re-indexing - DA effects?
I have two good high-level DA sites that target the US (.com) and UK (.co.uk). The .com ranks well but is dormant from a commercial aspect; the .co.uk is the commercial focus and gets great traffic. The issue is that the .com ranks for brand in the UK, and I want the .co.uk to rank for brand in the UK. I can't 301 the .com as it will be used again in the near future. I want to block the .com in robots.txt with a view to unblocking it again when I need it. I don't think the DA would be affected, as the links stay and the site's live (just not indexed), so when I unblock it should be fine. HOWEVER, my query is that things like organic CTR data that Google records and other factors won't contribute to its value. Has anyone ever blocked and unblocked, and what were the effects please? All answers greatly received - cheers GB
Technical SEO | Bush_JSM
-
Blocking pages from Moz and Alexa robots
Hello, We want to block all pages in this directory from Moz and Alexa robots - /slabinventory/search/ Here is an example page - https://www.msisurfaces.com/slabinventory/search/granite/giallo-fiesta/los-angeles-slabs/msi/ Let me know if this is a valid disallow for what I'm trying to do.
User-agent: ia_archiver
Disallow: /slabinventory/search/*
User-agent: rogerbot
Disallow: /slabinventory/search/*
Thanks.
Technical SEO | Pushm
-
Would a Search Engine treat a sitemap hosted in the cloud in the same way as if it was simply on /sitemap.htm?
Mainly to allow updates without the need for publishing - would Google interpret it any differently? Thanks
Technical SEO | RichCMF
-
How do I set up sitemaps for an international website?
I am adding translated versions of my site to subdomains, for example es.example.com. Should I add each subdomain to Google Webmaster Tools? Will each need its own sitemap?
Technical SEO | EcommerceSite
-
Robots.txt | any SEO advantage to having one vs not having one?
Neither of my sites has a robots.txt file. I guess I have never been bothered by any particular bot enough to exclude it. Is there any SEO advantage to having one anyway?
Technical SEO | GregB123
-
Sitemap as Referrer in Crawl Error Report
I have just downloaded the SEOmoz crawl error report, and I have a number of pages listed which all show FALSE. The only common denominator is the referrer - the sitemap. I can't find anything wrong; should I be worried that this is appearing in the error report?
Technical SEO | ChristinaRadisic
-
Should search pages be disallowed in robots.txt?
The SEOmoz crawler picks up "search" pages on a site as having duplicate page titles, which of course they do. Does that mean I should put a "Disallow: /search" directive in my robots.txt? When I put the URLs into Google, they aren't coming up in any SERPs, so I would assume everything's OK. I try to abide by the SEOmoz crawl errors as much as possible, that's why I'm asking. Any thoughts would be helpful. Thanks!
Technical SEO | MichaelWeisbaum
-
Robots.txt for subdomain
Hi there Mozzers! I have a subdomain with duplicate content and I'd like to remove these pages from the mighty Google index. The problem is: the website is built in Drupal and this subdomain does not have its own robots.txt. So I want to ask you how to disallow and noindex this subdomain. Is it possible to add this to the root robots.txt:
User-agent: *
Disallow: /subdomain.root.nl/
User-agent: Googlebot
Noindex: /subdomain.root.nl/
Thank you in advance! Partouter
Technical SEO | Partouter