Duplicate Content
-
Hello,
I'm managing a site that the crawl analyser flags as having duplicate page issues for 3 pages. The site offers 3 size options for the same product, and depending on which size you select, you are directed to the relevant page. These 3 pages are essentially identical apart from a slight difference in copy regarding the size (small, medium, large).
Is this likely to be a big issue for SEO, and what would the Moz community suggest regarding this?
Thank you!
-
Thanks Matt and Erica. I think we'll have to take the band-aid approach at first, but it's definitely something I'll now think about with future updates and developments.
-
Really good point. The best thing you could do would be simply to not have a unique page for each size of the same product.
-
It's most likely to hurt you from a low-quality content perspective, which a lot of retail sites get dinged for. In an ideal world, the best approach is a single product page with size as an option to choose, with information about all sizes on that page. However, Matt's canonical suggestion also works as a band-aid.
-
It's not likely to be a huge issue, in my opinion. The problem with this sort of duplicate content is simply that the search engines won't necessarily know which page to serve in response to a user's query, so people searching for you could end up being served the wrong result (or no result from you). It ties into my all-time favorite piece of SEO advice: Don't make Google think.
Personally, I'd add a canonical link to each of those pages pointing to the one you'd prefer folks to see in the search results.
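For reference, a canonical link is a single tag in the `<head>` of each size variant pointing at the preferred version. A minimal sketch, assuming hypothetical URLs and that the medium page is the one you want indexed:

```html
<!-- On the small and large variant pages (and optionally
     self-referencing on the medium page itself) -->
<link rel="canonical" href="https://www.example.com/product-medium" />
```

Search engines treat this as a strong hint, not a directive, so the variant pages should otherwise remain crawlable (not blocked in robots.txt) for the hint to be seen.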
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate or expect top-level category pages to receive organic traffic; most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X", where others are competing and spending a lot (head). The intent of the site structure/taxonomy is to make it easier for bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers.
So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. With our most popular product categories, there might be hundreds of thousands of products in one category. My top-level hub page for a category looks like www.mysite/categoryA, and the page build shows 50 products and then pagination from 1-1000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make each page reference itself as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category.
I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories, and I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" of them; there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep in pagination within one category versus getting to more top-level categories? Thanks!
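The setup described in the question (self-referencing canonicals plus rel=next/rel=prev hints on paginated category pages) would look roughly like this; the domain and page numbers are placeholders:

```html
<!-- In the <head> of www.mysite.com/categoryA?page=6 -->
<link rel="canonical" href="https://www.mysite.com/categoryA?page=6" />
<link rel="prev" href="https://www.mysite.com/categoryA?page=5" />
<link rel="next" href="https://www.mysite.com/categoryA?page=7" />
```

Note the design trade-off: self-referencing canonicals keep every paginated page individually indexable (good for deep crawling), whereas canonicalizing pages 2+ back to page 1 consolidates signals but asks crawlers to treat the deep pages as duplicates.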
Moz Pro | DrewProZ
-
Duplicate Content in WordPress Taxonomies & Noindex, Follow
Hello Moz Community, We are seeing duplicate content issues in our Moz report for our WordPress site's Tag pages. After a bit of research, it appears one of the best solutions is to set Tag pages to "noindex, follow" within Yoast. That makes sense, but we have a few questions: In doing this, how are we affecting our opportunity to show up in search results? Are there any other repercussions to making this change? What would it take to make the content on these pages be seen as unique?
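For context, setting a tag archive to "noindex, follow" in Yoast makes it render a meta robots tag along these lines (a sketch of the output, not Yoast's exact markup):

```html
<!-- Rendered in the <head> of each tag archive page -->
<meta name="robots" content="noindex, follow" />
```

This tells search engines to drop the tag pages from the index while still following their links, so link equity continues to flow to the posts they list.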
Moz Pro | CoreyHicks
-
Feedback on Content Ideation / "Skyscraper" Spreadsheet Template
Hi All - I've been getting a ton of use out of the Moz API for discovering the popularity of content, which I'm using for content ideation and to implement the Skyscraper concept. I built a spreadsheet template that combines Moz with some other APIs to apply this to new topics of my choosing, and my friends encouraged me to clean it up a bit and share it with the broader community. So, here it is - fire away! I'd love any and all feedback about the spreadsheet - it's still a prototype, so it could stand to pull back more results. For example: would you want to include Domain Authority in the results? Focus more or less on the social sharing elements - or let you choose the thresholds? Would love to know if there are other methodologies for which you'd be interested in seeing spreadsheet templates produced. Cheers! skyscraper-template.png
Moz Pro | paulkarayan
-
What's more valuable: new content or optimizing old content
We are a niche legacy print publication that's been around for close to 20 years. Recently, we combined several old sites into one new responsive site. We have over 7,000 articles, many of which are evergreen and can be repurposed when needed. Most of the old pieces, although published, have not been optimized for SEO. However, as we create new pieces, we optimize them for search and social, and they tend to get more organic traffic. Where we're torn is how much we should balance our limited editorial resources between cleaning up and optimizing our extensive archive to improve our organic reach, vs. pumping out new original pieces each week. I realize that without a lot of data the answers will be varied - I guess I'm looking for a best-practices approach for content publishers. If it helps at all, our main conversion goal is selling subscriptions to our print and digital publications. We know that organic traffic tends to be more engaged than our social referrals. Unfortunately, due to the nature of the magazine fulfilment business, it's tough to know which channels convert better. Thanks!
Moz Pro | RicardoSalcedo
-
Joomla Opencart Duplicate Link Issue
After a few recent catalog changes, my client is having tons of duplicate content issues. The most peculiar is one that I cannot see visibly on the site pages. This page: http://www.CLIENTDOMAIN.com/catalog/lights.html shows several duplicate content issues with URLs such as: http://www.CLIENTDOMAIN.com/catalog/lights/Lamp-DRW.htmlhttp:/www.CLIENTDOMAIN.com - where the domain is basically appended to the end of the first URL. Has anyone seen this? Any fixes, plugins or thoughts? Thanks,
Kristina
Moz Pro | KristinaWitmer
-
Duplicate Content in Blog
Hi, SEOmoz on-page analysis is reporting that our blog has duplicate content when technically it doesn't. Is this something that we need to address because it will actually be hurting our ranking, or is this just an SEOmoz software quirk? There are 100+ examples like this, but here is one. SEOmoz is reporting http://www.invoicestudio.com/Blog/author/InvoiceStudio?page=1 and http://www.invoicestudio.com/Blog/author/InvoiceStudio?page=2 as duplicate content and a duplicate title tag. Thanks Andrew
Moz Pro | Studio33
-
Duplicate page titles are the same URL listed twice
The system says I have two duplicate page titles. The page titles are exactly the same because the two URLs are exactly the same. These same two identical URLs also show up under Duplicate Page Content, because they are the same. We also have a blog, and there are two tag pages showing identical content. I have now blocked the blog in robots.txt, because the blog is only for writers. I suppose I could have just blocked the tag pages.
Moz Pro | loopyal
-
RSS feed showing up as duplicate content
Hi, I've just run an SEOmoz Pro scan for the first time and it is picking up duplicate content errors from the RSS feed. For some reason it seems to be picking up two feeds, for example: http://blog.clove.co.uk/2009/05/13/htc-touch-diamond2-review/feed/ http://blog.clove.co.uk/2009/05/19/htc-touch-diamond2-review-2/feed/ Does anyone know why this is happening and how I can resolve it? Thanks
Moz Pro | pugh