Using canonical for duplicate content outside of my domain
-
I have two domains for the same company: example.com and example.sg.
Sometimes we have to post the same content or event on both websites, so to protect my websites from a duplicate content penalty I use the canonical tag to point to either the .com or .sg version, depending on the page.
Any idea if this is the right decision?
Thanks
-
Unfortunately, that's a lot more tricky. If you're trying to rank both the .com and .sg version for, let's say, US residents, and those sites have duplicate content, then you do run the risk of Google filtering one of them out. If you use canonical tags or something like that, then one site will be taken out of contention for ranking - in that case, you won't rank for both sites on the same term. The only way to have your cake and eat it too is to make the sites as unique as possible.
Even then, you're potentially going to duplicate effort and cannibalize your own rankings, so it's a risky proposition. In some cases, it may be better to try to promote your social profiles and other pages outside of your site that have some authority. It doesn't have to be your own site ranking, just a site that's generally positive or neutral.
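For reference, cross-domain canonicalization is just the ordinary tag pointing at the other domain. A hypothetical example (the URLs are made up), with the .sg page deferring to the .com version:

```html
<!-- In the <head> of the duplicate page at https://example.sg/events/launch-party -->
<link rel="canonical" href="https://example.com/events/launch-party" />
```

Whichever page carries the tag is the one taken out of contention for ranking, as described above.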
-
Thanks Peter, your answer has enriched the discussion.
I think your suggestion is the proper way to handle different local domain versions of the same company or blog.
My case is a little different: lately I have actually been trying to rank both of them for the sake of reputation management.
It wasn't intended to be like that at the beginning, but now we are trying to take advantage of our other local domains like .sg, .ch and .ae.
-
Do you want the .sg site to only rank regionally in Singapore? You could use rel=alternate hreflang annotations to designate the language/region for the two sites and help Google know more accurately when to display which site. This also acts as a soft canonicalization signal and tells Google that the pages are known duplicates.
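A sketch of what that markup could look like; the URLs and the language-region codes here are assumptions, not taken from your sites. Both pages would carry the same pair of annotations, each pointing at itself and at its counterpart:

```html
<!-- In the <head> of both https://example.com/page and https://example.sg/page -->
<link rel="alternate" hreflang="en-us" href="https://example.com/page" />
<link rel="alternate" hreflang="en-sg" href="https://example.sg/page" />
```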
-
Here's an article where Dr. Pete answers some rel=canonical questions. With regard to rel=canonical passing PageRank, he says:
"This is very difficult to measure, but if you use rel=canonical appropriately, and if Google honors it, then it appears to act similarly to a 301-redirect. We suspect it passes authority/PageRank for links to the non-canonical URL, with some small amount of loss (similar to a 301)."
http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
At the end of the following Matt Cutts video (2:10), he says that there isn't a lot of difference between the PageRank passed via rel=canonical and the PageRank passed by a 301 redirect.
http://www.youtube.com/watch?v=zW5UL3lzBOA
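For comparison, here are the two mechanisms being discussed side by side. This is only a sketch, assuming an Apache server and made-up URLs:

```apache
# A 301 redirect, e.g. in .htaccess: the duplicate URL stops serving content at all
# and sends both visitors and crawlers to the canonical URL.
Redirect 301 "/events/launch-party" "https://example.com/events/launch-party"

# rel=canonical, by contrast, is a hint left in the duplicate page's HTML <head>
# while the page keeps serving content:
#   <link rel="canonical" href="https://example.com/events/launch-party" />
```

The practical difference is that a 301 is enforced by the server, while rel=canonical is a suggestion that Google can choose to ignore, as discussed below.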
When it comes to the content of the page, yes, the two versions of the page should be pretty close to identical. I've seen Google refer to it as "highly similar". Here's what Google says:
"A large portion of the duplicate page’s content should be present on the canonical version. One test is to imagine you don’t understand the language of the content—if you placed the duplicate side-by-side with the canonical, does a very large percentage of the words of the duplicate page appear on the canonical page? If you need to speak the language to understand that the pages are similar; for example, if they’re only topically similar but not extremely close in exact words, the canonical designation might be disregarded by search engines."
See: http://googlewebmastercentral.blogspot.co.uk/2013/04/5-common-mistakes-with-relcanonical.html
So, if your pages are too dissimilar, Google may ignore the rel=canonical "suggestion" and the "wrong" page, or both pages, may appear in Google's index.
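That side-by-side word test can be sketched as a quick script. This is a rough illustration only; the texts are made up and the scoring is my own assumption, not anything Google publishes:

```python
import re

def word_overlap(duplicate_text: str, canonical_text: str) -> float:
    """Fraction of the duplicate page's words that also appear on the
    canonical page -- a crude stand-in for Google's side-by-side test."""
    dup_words = re.findall(r"[a-z0-9']+", duplicate_text.lower())
    canon_words = set(re.findall(r"[a-z0-9']+", canonical_text.lower()))
    if not dup_words:
        return 0.0
    matched = sum(1 for word in dup_words if word in canon_words)
    return matched / len(dup_words)

# Identical copy: every word matches.
print(word_overlap("Big launch event in Singapore",
                   "Big launch event in Singapore"))  # 1.0

# Topically similar but differently worded: no overlap, so the
# canonical designation might be disregarded.
print(word_overlap("Annual product showcase downtown",
                   "Big launch event in Singapore"))  # 0.0
```

A high score suggests the pages are "highly similar" in the sense Google describes; a low score means the canonical hint is at risk of being ignored.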
-
I think this is a useful resource that answers a lot of questions around canonicals.
-
Thanks Doug for your useful response.
I just need to clarify one of your sentences:
"Be aware that the value of any inbound links to that article will be allocated to the canonical version."
Do you mean the canonical link passes PageRank similarly to a 301 redirect?
What if the two pages aren't 100% identical?
-
Check this video out: http://moz.com/blog/handling-duplicate-content-across-large-numbers-of-urls
-
Yes, this sounds absolutely correct.
You can check it's working by searching for some unique content from your article, or by using a query with the article's title:
site:{domain} "title"
If everything is working correctly, you should only see the canonical version of the article in Google's index. (You can also use the inurl: operator to check.)
Be aware that the value of any inbound links to that article will be allocated to the canonical version. (This doesn't apply to social follows/likes though.) So think carefully about the audience for the article before deciding which version is canonical.
It may not apply in your case, but it can be a good idea to think about your readers too. By adding a link in the article to the other site, you can help to cross-promote them. You may find that if some of your visitors find your cross-posted article relevant and useful, they will be more interested in other articles on the source site.
Related Questions
-
Duplicate content, although page has "noindex"
Hello, I had an issue with some pages being listed as duplicate content in my weekly Moz report. I've since discussed it with my web dev team and we decided to stop the pages from being crawled. The web dev team added this coding to the pages <meta name='robots' content='max-image-preview:large, noindex dofollow' />, but the Moz report is still reporting the pages as duplicate content. Note from the developer "So as far as I can see we've added robots to prevent the issue but maybe there is some subtle change that's needed here. You could check in Google Search Console to see how its seeing this content or you could ask Moz why they are still reporting this and see if we've missed something?" Any help much appreciated!
Technical SEO | rj_dale
-
Using a 302 redirect for language variants. How should I use the canonical?
Hi there, I have a question regarding the canonical tag. The current setup is like so... www.site.com 302 redirects to.. www.site.com/de/ I want to add canonical tags on every page to avoid duplicate content but I'm not sure about the homepage. Should the canonical URL be www.site.com or www.site.com/de/ ? I'm concerned that I could be about to hurt my ranking. Thanks,
Mitch
Technical SEO | zuriwolf
-
Duplicate Content Mystery
Hi Moz community! I have an ongoing duplicate mystery going on here and I'm hoping someone here can answer my question. We have an Ecommerce site that has a variety of product pages and category pages. There are Rel canonicals in place, along with parameters in GWT, and there are also URL rewrites. Here are some scenarios, maybe you can give insight as to what’s exactly going on and how to fix it. All the duplicates look to be coming from category pages specifically. For example:
Technical SEO | Ecom-Team-Access
This link re-writes: http://www.incipio.com/cases/tablet-cases/amazon-kindle-cases-sleeves.html?cat=407&color=152&price=20- To: http://www.incipio.com/cases/tablet-cases/amazon-kindle-cases-sleeves.html The rel canonical tag looks like this: <link rel="canonical" href="http://www.incipio.com/cases/tablet-cases/amazon-kindle-cases-sleeves.html" /> The CONTENT is different, but the URLs are the same. It thinks that the product category view is the same as the all products view, even though there is a canonical in there telling it which one is the original. Some of them don't have anything to do with each other. Take a look: Link identified as duplicate: http://www.incipio.com/cases/smartphone-cases/htc-smartphone-cases/htc-windows-phone-8x-cases.html?color=27&price=20- Link this is a duplicate of: http://www.incipio.com/cases/macbook-cases/macbook-pro-13in-cases.html Any idea as to what could be happening here?
-
Duplicate Content Issues
We have some "?src=" tag in some URL's which are treated as duplicate content in the crawl diagnostics errors? For example, xyz.com?src=abc and xyz.com?src=def are considered to be duplicate content url's. My objective is to make my campaign free of these crawl errors. First of all i would like to know why these url's are considered to have duplicate content. And what's the best solution to get rid of this?
Technical SEO | RodrigoVaca
-
Tips and duplicate content
Hello, we have a search site that offers tips to help with search/find. These tips are organized on the site in xml format with commas... of course the search parameters are duplicated in the xml so that we have a number of tips for each search parameter. For example if the parameter is "dining room" we might have 35 pieces of advice - all less than a tweet long. My question - will I be penalized for keyword stuffing - how can I avoid this?
Technical SEO | acraigi
-
Tags and Duplicate Content
Just wondering - for a lot of our sites we use tags as a way of re-grouping articles / news / blogs so all of the info on say 'government grants' can be found on one page. These /tag pages often come up with duplicate content errors, is it a big issue, how can we minimnise that?
Technical SEO | salemtas
-
Duplicate Page Content and Titles
A few weeks ago my error count went up for Duplicate Page Content and Titles. 4 errors in all. A week later the errors were gone... But now they are back. I made changes to the web.config over a month ago but nothing since. SEOmoz is telling me the duplicate content is this: http://www.antiquebanknotes.com/ and http://www.antiquebanknotes.com Thanks for any advice! This is the relevant web.config:
<rewrite>
  <rules>
    <rule name="CanonicalHostNameRule1">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^www.antiquebanknotes.com$" negate="true" />
      </conditions>
      <action type="Redirect" url="http://www.antiquebanknotes.com/{R:1}" />
    </rule>
    <rule name="Default Page" enabled="true" stopProcessing="true">
      <match url="^default.aspx$" />
      <conditions logicalGrouping="MatchAll">
        <add input="{REQUEST_METHOD}" pattern="GET" />
      </conditions>
      <action type="Redirect" url="/" />
    </rule>
  </rules>
</rewrite>
Technical SEO | Banknotes
-
Duplicate Content issue
I have been asked to review an old website to identify opportunities for increasing search engine traffic. Whilst reviewing the site I came across a strange loop. On each page there is a link to a printer friendly version: http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes That page also has a link to a printer friendly version: http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes&printfriendly=yes and so on and so on....... Some of these pages are being included in Google's index. I appreciate that this can't be a good thing; however, I am not 100% sure as to the extent to which it is a bad thing and the priority that should be given to getting it sorted. Just wondering what views people have on the issues this may cause?
Technical SEO | CPLDistribution