Duplicate Content
-
Hi everyone,
I have a TLD in the UK with a .co.uk and also the same site in Ireland (.ie). The only differences are the prices and different banners maybe.
The .ie site pulls all of its content from the .co.uk domain. Is this classed as content duplication? I've had problems in the past in which Google struggled to index the website. At the moment the site appears completely fine in the UK SERPs, but for Ireland only the title and domain appear, with no extended title or description, because of the confusion I caused Google last time.
Does anybody know a fix for this?
Thanks
-
@Ryan Kent, are you suggesting that it is okay for the site to be a complete duplicate as long as it's set up for a specific country (in this case Ireland)?
yes
-
Thanks for the quick replies
The design of the two websites is practically identical.
If you search for photobox in Google Ireland you can see that the title tag and description are not being picked up, due to some problems we've had in the past.
The .ie site pulls all of the content etc from the co.uk site.
@Ryan Kent, are you suggesting that it is okay for the site to be a complete duplicate as long as it's set up for a specific country (in this case Ireland)?
Thanks
-
Royb,
You should be able to offer 100% duplicate sites as long as each site is set up to target a specific country. You should definitely adjust each site to accommodate monetary units and all other differences between the countries, but you should have no concerns about Google penalizing your site for duplication.
You already have the correct ccTLD which is an exceptionally strong indicator for your market. You can further support this direction by a few additional steps:
-
set your geographic target in Google WMT for each site
-
use the appropriate language meta tag for each country
-
adjust language, monetary units, measurements and all other difference for your target country
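To make the language step concrete, here is a minimal sketch of what the head of a page on the Irish site might contain. The domain names and page path are hypothetical placeholders, and the hreflang link annotations are an extra signal from Google's international targeting guidelines, offered here as an assumption about what would help rather than something required by the steps above:

```html
<!-- Head of a page on the .ie site (sketch; adapt to your own templates) -->
<head>
  <!-- Declare the language/region variant of this page -->
  <meta http-equiv="content-language" content="en-IE">
  <!-- Optional: mark the .co.uk and .ie versions as country-specific
       alternates of each other (hypothetical URLs) -->
  <link rel="alternate" hreflang="en-GB" href="https://www.example.co.uk/page">
  <link rel="alternate" hreflang="en-IE" href="https://www.example.ie/page">
  <title>Example page title with prices in EUR</title>
</head>
```

The .co.uk pages would carry the mirror-image annotations, with `content="en-GB"` and the same pair of alternate links.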
Some supporting information from Google on this topic:
-
-
Hi... you're saying that both sites have the same content (.ie pulls content from .co.uk), which means the only difference is the ccTLD. Question: is the design also the same? Because duplication doesn't concern just content, but also code.
Not being able to see the sites, what I can suggest are just common international SEO tips:
- differentiate the two sites as much as you can: Brits and Irish are different, we all know it... e.g., for an eCommerce site, the first uses pounds, the other euros.
- if you are not just an online entity and the sites represent a physical business, give geotargeted signals to the search engines, such as addresses and phone numbers, for instance in the footer
- for the same reason, differentiate the two About Us and Contact pages
- even though a recent experiment by Hannah Smith of Distilled diminished the apparent value of local links, they would surely help the search engines understand that one site targets Ireland and the other the United Kingdom.
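As a sketch of the footer suggestion above, using schema.org microdata so the address and phone number are machine-readable; the business name, street address, and phone number are hypothetical placeholders:

```html
<!-- Footer on the .ie site (sketch; all details are placeholders) -->
<footer itemscope itemtype="https://schema.org/LocalBusiness">
  <span itemprop="name">Example Shop Ireland</span>,
  <span itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
    <span itemprop="streetAddress">1 Example Street</span>,
    <span itemprop="addressLocality">Dublin</span>,
    <span itemprop="addressCountry">IE</span>
  </span>
  Tel: <span itemprop="telephone">+353 1 000 0000</span>
</footer>
```

The .co.uk footer would carry the UK address and a GB country code, giving each site a distinct, country-specific signal.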
-
We weren't penalised; Google just didn't index the data correctly. There must be a simple way of doing this, as our competitors have sites in the UK and Ireland and it looks to me like they are just pulling the content from their .co.uk domain...
Thanks for the quick response
-
I can't give a definitive answer for UK/Ireland domains, but we had issues with duplicate content across our US and .ca domains. Google did see duplicate content and penalized us, so I would assume the same issue would arise for you.