Is a rel="canonical" page bad for a Google XML Sitemap?
-
Back in March 2011 this conversation happened.
Rand: You don't want rel=canonicals.
Duane: Only end state URL. That's the only thing I want in a sitemap.xml. We have a very tight threshold on how clean your sitemap needs to be. When people are learning about how to build sitemaps, it's really critical that they understand that this isn't something that you do once and forget about. This is an ongoing maintenance item, and it has a big impact on how Bing views your website. What we want is end state URLs and we want hyper-clean. We want only a couple of percentage points of error.
Is this the same with Google?
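Duane's "end state URL" rule amounts to following every redirect chain to its final destination before a URL ever goes into the Sitemap. A minimal sketch of that idea, assuming you have already built a redirect map from your server rules or a crawl (the URLs and function name here are hypothetical):

```python
# Resolve each URL to its "end state" by following a redirect map.
# In practice the map would come from your server's redirect rules
# or a crawl; the entries below are made up for illustration.

def resolve_end_state(url, redirect_map, max_hops=10):
    """Follow redirects until a URL no longer redirects (the end state)."""
    seen = set()
    while url in redirect_map:
        if url in seen or len(seen) >= max_hops:
            raise ValueError("redirect loop or chain too long at " + url)
        seen.add(url)
        url = redirect_map[url]
    return url

redirects = {
    "http://example.com/old": "http://example.com/interim",
    "http://example.com/interim": "https://example.com/final",
}

print(resolve_end_state("http://example.com/old", redirects))
# https://example.com/final
```

Only the value this returns, never an intermediate hop, would be a candidate for the Sitemap.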
-
LOL thanks!
-
You're very welcome.
And just try to think about it this way... every best practice you employ for your site is another best practice your competitors have to employ to keep up with you.
-
Yes I understand that. It is just a lot more work for us to do with our site map! Thanks for your advice.
-
To clarify, when I say rel="canonical" pages, I mean pages that are using that link tag to point to another page (i.e., the pages that are NOT the canonical page). These are also the pages that Duane and Rand were talking about.
I am not saying you shouldn't include the pages that the canonical link tag actually points to.
Let's assume you have 3 pages: A, B, and C.
Pages B and C have a rel="canonical" link that points to A.
In this scenario, you would include A in your XML Sitemap (assuming A is a high-quality page that is important to your site), and you would NOT include B and C.
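To make the A/B/C example concrete, here is a sketch with hypothetical URLs: pages B and C each carry a canonical tag pointing at A, and the Sitemap lists only A.

```html
<!-- In the <head> of pages B and C (e.g., /page-b and /page-c): -->
<link rel="canonical" href="https://example.com/page-a" />
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap.xml lists only the canonical page, A -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/page-a</loc>
  </url>
</urlset>
```

B and C are still crawlable and still consolidate their signals into A; they just don't belong in the Sitemap.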
-
I see, but the rel="canonical" pages are good pages. I get the broken links and all that part, but I guess I do not agree with the rel="canonical" part as much, though I can see their standpoint. Do you do a lot with your Sitemap and assign different values to different pages?
-
Yes, it is safe to assume that all search engines want your XML Sitemaps to be as clean and accurate as possible.
XML Sitemaps give you an opportunity to tell search engines about your most important pages, and you want to take advantage of this opportunity.
Think about it another way. Let's pretend your site and Google are both real people. In that hypothetical world, Google's first impression of your site is established through your site's XML Sitemaps. If those Sitemaps are full of broken links, redirecting URLs, and rel="canonical" pages, your site has already made a bad first impression ("If this site can't maintain an up-to-date Sitemap, I'm terrified of what I'll find once I get to the actual pages").
On the other hand, if your XML Sitemaps are full of live links that point to your site's most important pages, Google will have a positive first impression and continue on with the relationship.
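The "clean Sitemap" advice above boils down to a few mechanical checks per URL: live (200), not redirecting, and either its own canonical or declaring no canonical at all. A minimal sketch, assuming you have already crawled each URL's status code and canonical target (the function and data below are hypothetical):

```python
# Decide whether a URL belongs in the XML Sitemap, given crawl data.
# A URL qualifies only if it is live (200), not redirecting, and is
# its own canonical (or declares no canonical at all).

def belongs_in_sitemap(url, status_code, canonical_url=None):
    """Return True for an end-state, canonical, live page."""
    if status_code != 200:          # broken (4xx/5xx) or redirecting (3xx)
        return False
    if canonical_url and canonical_url != url:
        return False                # canonicalized to a different page
    return True

pages = [
    ("https://example.com/page-a", 200, "https://example.com/page-a"),
    ("https://example.com/page-b", 200, "https://example.com/page-a"),
    ("https://example.com/old",    301, None),
    ("https://example.com/gone",   404, None),
]

sitemap_urls = [u for u, s, c in pages if belongs_in_sitemap(u, s, c)]
print(sitemap_urls)  # only page-a survives
```

Running a check like this on a schedule, rather than once at launch, is what keeps the error rate within the "couple of percentage points" threshold Duane describes.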