Using canonical for duplicate content outside of my domain
-
I have 2 domains for the same company, example.com and example.sg
Sometimes we have to post the same content or event on both websites, so to protect my websites from a duplicate content penalty I use the canonical tag to point to either the .com or the .sg version, depending on the page.
Any idea if this is the right decision?
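For reference, a cross-domain canonical like the one described above looks like this (the URLs here are placeholders, not the real pages):

```html
<!-- In the <head> of the duplicate event page on example.sg,
     pointing at the copy we want search engines to index: -->
<link rel="canonical" href="https://www.example.com/events/annual-conference/" />
```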
Thanks
-
Unfortunately, that's a lot more tricky. If you're trying to rank both the .com and .sg version for, let's say, US residents, and those sites have duplicate content, then you do run the risk of Google filtering one of them out. If you use canonical tags or something like that, then one site will be taken out of contention for ranking - in that case, you won't rank for both sites on the same term. The only way to have your cake and eat it too is to make the sites as unique as possible.
Even then, you're potentially going to duplicate effort and cannibalize your own rankings, so it's a risky proposition. In some cases, it may be better to try to promote your social profiles and other pages outside of your site that have some authority. It doesn't have to be your own site ranking, just a site that's generally positive or neutral.
-
Thanks Peter, your answer has enriched the discussion.
I think your suggestion is the proper approach for different local domain versions of the same company or blog.
My case is a little different: lately I have actually been trying to rank both of them, for the sake of reputation management.
It wasn't intended to be like that in the beginning, but now we are trying to take advantage of our other local domains like .sg, .ch and .ae.
-
Do you want the .sg site to only rank regionally in Singapore? You could use rel=alternate hreflang to designate the language/region for the two sites, and help Google more accurately know when to display which site. This also acts as a soft canonicalization signal and tells Google that the pages are known duplicates:
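A minimal sketch of that markup, assuming the .com page targets English speakers in the US and the .sg page targets English speakers in Singapore (both URLs are placeholders):

```html
<!-- The same pair of annotations goes in the <head> of BOTH pages,
     so each version references itself and its alternate: -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/some-page/" />
<link rel="alternate" hreflang="en-sg" href="https://www.example.sg/some-page/" />
```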
-
Here's an article where Dr. Pete answers some rel=canonical questions. With regard to rel=canonical passing PageRank, he says:
"This is very difficult to measure, but if you use rel=canonical appropriately, and if Google honors it, then it appears to act similarly to a 301-redirect. We suspect it passes authority/PageRank for links to the non-canonical URL, with some small amount of loss (similar to a 301)."
http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
At the end of the following Matt Cutts video (2:10), he says that there isn't a lot of difference between the PageRank passed via rel=canonical and the PageRank passed via a 301 redirect.
http://www.youtube.com/watch?v=zW5UL3lzBOA
When it comes to the content of the page, yes, the two versions of the page should be pretty close to identical. I've seen Google refer to it as "highly similar". Here's what Google says:
"A large portion of the duplicate page’s content should be present on the canonical version. One test is to imagine you don’t understand the language of the content—if you placed the duplicate side-by-side with the canonical, does a very large percentage of the words of the duplicate page appear on the canonical page? If you need to speak the language to understand that the pages are similar; for example, if they’re only topically similar but not extremely close in exact words, the canonical designation might be disregarded by search engines."
See: http://googlewebmastercentral.blogspot.co.uk/2013/04/5-common-mistakes-with-relcanonical.html
So, if your pages are too dissimilar then Google may ignore the rel-canonical "suggestion" and the "wrong page" or both pages may appear in Google's index.
-
I think this is a useful resource that answers a lot of questions around canonical.
-
Thanks Doug for your useful response
I just need to clarify your sentence:
"Be aware that the value of any inbound links to that article will be allocated to the canonical version. "
Do you mean the canonical link passes PageRank similarly to a 301 redirect?
What if the two pages weren't 100% identical?
-
Check this video out: http://moz.com/blog/handling-duplicate-content-across-large-numbers-of-urls
-
Yes, this sounds absolutely correct.
You can check it's working by searching for some unique content from your article, or by using a query with the article's title:
site:{domain} "title"
If everything is working correctly you should only see the canonical version of the article in Google's index. (You can also use the inurl: operator to check.)
Be aware that the value of any inbound links to that article will be allocated to the canonical version. (This doesn't apply to social follows/likes though.) So think carefully about the audience for the article before deciding which version is canonical.
It may not apply in your case, but it can be a good idea to think about your readers too. By adding a link in the article to the other site, you can help to cross-promote them. You may find that if some of your visitors find your cross-posted article relevant and useful, they may be interested in other articles on the source site.