SEOMOZ and non-duplicate duplicate content
-
Hi all,
Looking through the lovely SEOMOZ report, by far its biggest complaint is perceived duplicate content. It's hard to avoid given the nature of eCommerce sites, which ostensibly list products in a consistent framework.
Most advice about duplicate content concerns canonicalisation, but that's not really relevant when you have two different products being perceived as the same.
Thing is, I might have ignored it, but Google ignores about 40% of our sitemap for, I suspect, the same reason. Basically, I don't want us to appear "spammy". We do actually go to a lot of effort to photograph each product and write a little flavour text for it (a work in progress).
I guess my question is: given over 700 products, why would 300-ish of them be considered duplicates and the remaining not?
Here is a URL and one of its "duplicates" according to the SEOMOZ report:
http://www.1010direct.com/DGV-DD1165-970-53/details.aspx
http://www.1010direct.com/TDV-019-GOLD-50/details.aspx
Thanks for any help, people
-
The point I'm trying to get across is this:
"I asked the question of why these pages are considered duplicate, the answer appears to be : because textually they are even if visually they are not."
I don't think that's the complete answer, or even the most important part of it. Certainly, having mostly similar content across pages won't help, but as I've tried to point out, there are other factors in play here. It's not just about the content, but about putting the content into context for the search engines. For them to understand what they're looking at, more than just the content matters.
Michel
-
I think this highlights the fundamental problem with SEO and eCommerce sites.
We are all aware that the ultimate aim for search engines, and therefore ultimately for SEO, is to add value to users. But is "value" the same for an eCommerce site as it is for a blog, or a travel information site, or a site offering information on health and advice?
In my opinion, it is not. If I am looking to make a purchase, I am looking for a site that is responsive, easy to navigate, has good imagery to help me visualise, is secure, doesn't clutter the page with in-your-face promotional info, and of course offers value for money.
Unique content therefore doesn't really factor into it too much. It's hard enough for us, but I can only imagine how difficult it is for a company selling screws or rope: just how much creativity does it take to write unique content for 3.5-inch brass screws versus 2.5-inch steel ones?
The current mantra is to stop worrying about SEO tricks and focus on building a site with value. But this particular issue is an indication that we are still not at that utopia yet.
For example, as pointed out in the posts above, these pages are considered duplicate because, by percentage, the variable information is minimal. On our product pages we put the functionality for filling in your prescription below the product, to make it easier for the customer; but to solve the "percentage unique" issue we would need to move that onto another page. Basically, we would need to reduce value (convenience) in order to appear to add value (uniqueness).
Anyway, there's little point complaining. I asked the question of why these pages are considered duplicate, and the answer appears to be: because textually they are, even if visually they are not.
I could be worrying about nothing: I believe all these pages are indexed (through crawling); it's just that a good proportion of our sitemap is being overlooked, and I am assuming it's perceived duplication, as SEOMOZ suggests. That in turn makes me concerned that Google is marking us down as spammy.
I appreciate all your comments.
Thanks
Paul
-
I do not agree. I see these kinds of pages on e-commerce websites on a daily basis. For webshops that sell only a certain kind of product, almost all product pages will look alike.
In this case, the H1 is different, the page title is different, and the description is different. That is only a small portion of the page, but it's not uncommon, so I would argue that it can't be just that.
I would look into the URLs, and into marking up your data using http://schema.org/Product, possibly making small changes to accommodate the tags - for instance, splitting up brand, colour, etc. so that you can mark them up accordingly.
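To make that concrete, here is a minimal sketch of what schema.org/Product microdata could look like on one of these product pages - every value below is an invented placeholder, not the site's real data:

```html
<!-- Sketch only: names, prices and paths are placeholders. -->
<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">DD1165 970 53 Glasses</h1>
  <meta itemprop="brand" content="Dolce &amp; Gabbana">
  <img itemprop="image" src="/images/dgv-dd1165-970-53.jpg" alt="DD1165 970 53 Glasses">
  <p itemprop="description">Full-rim acetate frame with gold-tone detailing.</p>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <meta itemprop="priceCurrency" content="GBP">
    Price: £<span itemprop="price">89.00</span>
  </div>
</div>
```

Marking up brand, colour and size as separate properties also hands the crawler distinct, machine-readable points of difference between two otherwise similar-looking pages.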
-
Tom has this spot on. Google doesn't only look for direct duplication but also for very similar pages, and these really are, I'm afraid.
You need to find ways to make each page unique in its own right - let Google see that no two pages are the same and that there is a real reason to rank them.
-
I wonder if the details.aspx has something to do with it?
www.1010direct.com/TDV-019-GOLD-50/details.aspx
www.1010direct.com/DGV-DD1165-970-53/details.aspx
Basically, both pages are called details.aspx. Depending on how you look at it, you have two pages named the same (with mostly similar content, though that's not unusual for e-commerce websites) in different subfolders. I'm not sure if there's some kind of difference in the way Moz works, and if that's part of why Moz marks this as duplicate content?
Are you unable to create 'prettier' URLs? Such as:
www.1010direct.com/tim-dilsen-019-gold-50-glasses.aspx
www.1010direct.com/dolce-gabbana-dd1165-970-53-glasses.aspx
With or without the .aspx, of course.
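If the site is ASP.NET 4.0+ Web Forms - an assumption on my part, based on the .aspx extension - one fairly cheap way to get URLs like that is ASP.NET routing, mapping a friendly pattern onto the existing details.aspx. A sketch, with illustrative route and parameter names:

```csharp
// Global.asax.cs - a sketch assuming ASP.NET 4.0+ with System.Web.Routing.
using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Serves /dolce-gabbana-dd1165-970-53-glasses from details.aspx,
        // which can read the slug via Page.RouteData.Values["productSlug"].
        RouteTable.Routes.MapPageRoute(
            "ProductDetails",
            "{productSlug}-glasses",
            "~/details.aspx");
    }
}
```

The old /CODE/details.aspx URLs would then want 301 redirects to their new equivalents so existing links keep their value.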
-
I'm not surprised Moz is flagging those pages as duplicate content and I wouldn't be totally surprised if Google did in the future.
Put it this way: the pages are identical bar a single-sentence title description, a price, and a roughly 20-word section describing the product. Everything else is identical. It's duplicate.
Look at it another way, through Google's eyes. Here's how the two pages look when crawled by Google:
(If that doesn't work, try it yourself at http://www.seo-browser.com/)
Just look at how much text and HTML is shared between the two pages. Yes, there are key differences on the pages (namely the product), but neither the Googlebot nor the Mozbot is going to recognise those elements when it crawls.
Presuming Google ignores the site nav, that still leaves a bunch of shared text and crawlable elements - pretty much everything under the product description. It doesn't see the individual images, and the flavour text is frankly too small to make any sort of dent in the duplicate content percentage (20-odd unique words against several hundred shared ones only comes to a few per cent).
I'd seriously recommend revising how your product pages look - there's far too much repeated content per page (you can still promote these things on each page, but in a much, much smaller way), and the individual product descriptions, in my eyes, are not substantial enough.
Related Questions
-
International SEO And Duplicate Content Within The Same Language
Hello! Currently, we have a .com English website serving an international clientele, and we do not currently target any countries in Google Search Console. However, the UK is an important market for us, and we are seeing very low traffic from it (our traffic is almost entirely US). We would like to increase visibility in the UK, but currently for English speakers only. My question is this: would geo-targeting a subfolder have a positive impact on visibility/rankings, or would it create a duplicate content issue if both pieces of content are in English? My plan was:
1. Create a geo-targeted subfolder (website.com/uk/) that copies our website (we currently cannot create new unique content).
2. Go into GSC and geo-target the folder to the UK.
3. Add the following to the /uk/ page to try to negate duplicate issues. Additionally, I can add a rel=canonical tag if suggested; I just worry that, as an already international site, this will create competition between pages.
However, as we are currently only targeting a location and not a language at this very specific point, would adding a ccTLD be advised instead? The threat of duplicate content worries me less there, as it is a topic Matt Cutts has addressed and said is not an issue. I prefer the subfolder method to ccTLDs because it allows for more scalability: in the future I would like to target other countries and languages. Ultimately, right now, the goal is to increase UK traffic. Outside of UK backlinks, would any of the above URL geo-targeting help drive traffic? Thanks
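For reference, the markup usually meant at step 3 is a pair of hreflang annotations carried in the head of both versions - a sketch, using the poster's placeholder domain:

```html
<!-- Sketch: the same tags go on both the US and UK versions. -->
<link rel="alternate" hreflang="en-us" href="http://website.com/" />
<link rel="alternate" hreflang="en-gb" href="http://website.com/uk/" />
```

hreflang tells Google the two English versions are deliberate regional alternates rather than accidental duplicates, which is exactly the worry raised above.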
Technical SEO | Tom3_15
-
How do I avoid this issue of duplicate content with Google?
I have an ecommerce website which sells a product that has many different variations based on a vehicle's make, model, and year. Currently, we sell this product on one page, www.cargoliner.com/products.php?did=10001, and we show a modal to sort through each make, model, and year. This is important because, based on the make, model, and year, we have different prices/configurations for each. For example, for the Jeep Wrangler and Jeep Cherokee, we might have different products:
Ultimate Pet Liner - Jeep Wrangler 2011-2013 - $350
Ultimate Pet Liner - Jeep Wrangler 2014-2015 - $350
Ultimate Pet Liner - Jeep Cherokee 2011-2015 - $400
Although the typical consumer might think we have one product (the Ultimate Pet Liner), we look at these as many different products, each with a different configuration and different variants. We do NOT have unique content for each make, model, and year; we have the same content and images for each. When the customer selects their make, model, and year, we just search and replace the text to match. For example, when a customer selects 2015 Jeep Wrangler from the modal, the page keeps the same URL (www.cargoliner.com/products.php?did=10001) but the product title will say "2015 Jeep Wrangler". Here's my problem: we want all of these individual products to have their own unique URLs (cargoliner.com/products/2015-jeep-wrangler) so we can reference them in emails to customers, and ideally we start creating unique content for them. Our only problem is that there will be hundreds of them, and they don't have unique content other than the swapped-in product title and change of variants. Also, we don't want our URL www.cargoliner.com/products.php?did=10001 to lose its link juice. Here are my questions: my assumption is that I should keep my URL www.cargoliner.com/products.php?did=10001 and be able to sort through the products on that page. Then I should go ahead and make individual URLs for each of these products (i.e. cargoliner.com/products/2015-jeep-wrangler) but add a "noindex, nofollow" to each page. Is this what I should do? How secure is a "noindex, nofollow" on a webpage? Does Google still index? Am I at risk for duplicate content penalties? Thanks!
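For what it's worth, the tag being described is the robots meta tag - a sketch below; note that noindex,follow (rather than noindex,nofollow) is the more usual choice in this situation, so that internal links on the variant pages still pass value:

```html
<!-- Sketch: placed in the <head> of each make/model/year variant page. -->
<meta name="robots" content="noindex,follow">
```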
Technical SEO | kirbyfike
-
Duplicate content on report
Hi, I just had my Moz campaign scan 10K pages, of which 2K came back as duplicate content, with URLs like:
http://www.Somesite.com/modal/register?destination=question%2F37201
http://www.Somesite.com/modal/register?destination=question%2F37490
The title for all 2K is "Register". How can I deal with this? All my pages have the register and login links, and when you're done it brings you back to the page you left, so it's not actually duplicate content - but we need to deal with it properly. Thanks
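As a side note, the usual fix for parameterised register/login URLs like these is to keep the whole path out of the crawl via robots.txt - a sketch, with Somesite.com standing in as the poster's placeholder:

```
# robots.txt - a sketch: blocks every /modal/register?destination=... variant.
User-agent: *
Disallow: /modal/register
```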
Technical SEO | mtthompsons
-
Duplicate Content
Hi, I'm working on a site and I'm having some issues with its structure causing duplicate content. The first issue is that the search pages show up as duplicates: a search for new inventory may be new.aspx, and the duplicate may be something like new.aspx=page1, or something like that, and so on. The second issue is with inventory: when new inventory gets put into the stock of the store, a new page for that item is populated with duplicate content, and there appears to be no canonical source for that page. How can I fix both of these? Thanks!
Technical SEO | OOMDODigital
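Both halves of that question are normally answered with a canonical tag on each variant - a sketch, with example.com standing in for the poster's domain:

```html
<!-- Sketch: every paginated or parameterised copy of the search page
     points back at the one canonical URL. -->
<link rel="canonical" href="http://www.example.com/new.aspx">
```
-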
Duplicate content problem?
Hello! I am not sure if this is a problem or if I am just making something too complicated. Here's the deal: I took on a client who has an existing site in something called Homestead. Files cannot be downloaded, making it tricky to get out of Homestead. The way it is set up, new sites are developed on subdomains of homestead.com, and then your chosen domain points to this subdomain. The designer who built it has kindly given me access to her account so that I can edit the site, but this is awkward, and I want to move the site to its own account. However, to do so, Homestead requires that I create a new subdomain and copy the files from one to the other. They don't have any way to redirect the prior subdomain to the new one; they recommend I do something in the HTML, since that is all I can access. Am I unnecessarily worried about the duplicate content consequences? My understanding is that I will now have two subdomains with the exact same content. True, over time I will be editing the new one, but you get what I'm sayin'. Thanks!
Technical SEO | devbook9
-
Duplicate Footer Content
A client I just took over is having some duplicate content issues. At the top of each page he has about 200 words of unique content. Below this are three big tables of text that talk about his services, history, etc. These tables are pulled into the middle of every page using PHP, so he has the exact same three big tables of text across every page. What should I do to eliminate the duplicate content? I thought about removing the script and just rewriting the tables of text on every page... Is there a better solution? Any ideas would be greatly appreciated. Thanks!
Technical SEO | BigStereo
-
Aspx filters causing duplicate content issues
A client has a URL which is duplicated by filters on the page. For example, http://www.example.co.uk/Home/example.aspx is duplicated by http://www.example.co.uk/Home/example.aspx?filter=3. The client is moving to a new website later this year and is using an out-of-date Kentico CMS, which would need some development doing to it in order to enable implementation of rel=canonical tags in the header. I don't have access to the server, and they have to pay through the nose every time they want the slightest thing altering. I am trying to resolve this duplicate content issue though, and am wondering what is the best way to resolve it in the short term. The client is happy to remove the filter links from the page, but that still leaves the filter URLs in Google. I am concerned that a 301 redirect will cause a loop, and I don't understand the behaviour of this type of code enough. I hope this makes sense; any advice appreciated.
Technical SEO | travelinnovations
-
Solution for duplicate content not working
I'm getting a duplicate content error for:
http://www.website.com
http://www.website.com/default.htm
I searched the Q&A for the solution and found: access the .htaccess file and add this line:
redirect 301 /default.htm http://www.website.com
I added the redirect to my .htaccess and then got the following error from Google when trying to access the http://www.website.com/default.htm page: "This webpage has a redirect loop. The webpage at http://www.webpage.com/ has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer." "Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects." How can I correct this? Thanks
Technical SEO | Joeuspe
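The loop happens because default.htm is the site's directory index: the redirect sends the browser to /, the server answers / with default.htm, and the redirect fires again. The standard fix is to redirect only when the client explicitly requested /default.htm - a sketch, assuming Apache with mod_rewrite available (website.com is the poster's placeholder):

```apache
# .htaccess - a sketch assuming mod_rewrite. THE_REQUEST holds the raw
# request line, so this matches only explicit /default.htm requests and
# the internal DirectoryIndex lookup can no longer trigger the redirect.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /default\.htm[\ ?]
RewriteRule ^default\.htm$ http://www.website.com/ [R=301,L]
```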