Duplicate Content & Rel Canonical Tag not working
-
I'm really questioning the legitimacy of the duplicate content flags in Moz. I'm building a website that sells home decor products, and a lot of the pages are similar in structure (as you'd expect from a store that sells thousands of individual products). It seems a little overkill to flag the following pages as duplicate content. They have different URLs, titles, h1, h2, and h3 tags, different meta tags, etc. Right now, it's saying that the following have duplicate page content:
http://www.countryporchhomedecor.com
http://countryporchhomedecor.com/park-designs/pillows/christmas-vacation-embroidered-pillow
http://www.countryporchhomedecor.com/donna-sharp/throws/camo-bear-throw
http://countryporchhomedecor.com/park-designs/teapots/wonderland-teapot
http://countryporchhomedecor.com/park-designs/rag-rugs/cambridge-rug-36x60
http://www.countryporchhomedecor.com/donna-sharp/lodge-quilts/king%2C-woodland
http://www.countryporchhomedecor.com/park-designs/rag-rugs/redmon-rag-rug-36x60
http://www.countryporchhomedecor.com/park-designs/valances/hearthside-valance-72x14
http://countryporchhomedecor.com/park-designs/valances/hearthside-valance-72x14
http://countryporchhomedecor.com/donna-sharp/lodge-quilts/king,-woodland
http://www.countryporchhomedecor.com/park-designs/teapots/wonderland-teapot
http://countryporchhomedecor.com/donna-sharp/throws/camo-bear-throw
http://countryporchhomedecor.com/park-designs/accessories/home-place-tumbler
http://www.countryporchhomedecor.com/donna-sharp/lodge-quilts/king,-woodland
http://www.countryporchhomedecor.com/park-designs/rag-rugs/cambridge-rug-36x60
http://www.countryporchhomedecor.com/park-designs/pillows/christmas-vacation-embroidered-pillow
http://www.countryporchhomedecor.com/donna-sharp/lodge-quilts/king%2C-woodland?pi=18
http://countryporchhomedecor.com/donna-sharp/lodge-quilts/king%2C-woodland
http://www.countryporchhomedecor.com/park-designs/accessories/home-place-tumbler
http://countryporchhomedecor.com/park-designs/rag-rugs/redmon-rag-rug-36x60
Any ideas?
Also, it seems like it's not honoring the rel=canonical tag. It keeps flagging pages that carry a rel=canonical tag as duplicates, when some of the URLs it's flagging shouldn't even be indexed because of that tag. The URLs with "pi" in the query string should not be indexed!
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=3
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=6
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=7
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=6
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=10
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=8
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=8
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=7
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=7
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=1
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=8
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=5
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=10
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=3
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=5
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=4
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=9
http://www.countryporchhomedecor.com/bedding-%26-quilts/shams/standard-shams
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=1
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=6
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=1
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=5
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=2
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=9
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=4
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=3
http://countryporchhomedecor.com/bedding-%26-quilts/shams/standard-shams
http://www.countryporchhomedecor.com/bedding-%26-quilts/shams/standard-shams?pi=18
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=9
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=10
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=2
http://countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=2
http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?page=4
-
No problem! Yes, the same is true for HTTP and HTTPS.
-
Is it the same way with HTTPS and non-HTTPS pages? Should only one of those be accessible per page?
-
Ok thank you!
-
That is correct: you should be using rel=next/prev markup on paginated sections. But after noticing the www and non-www issue, I don't think your problem is related to canonicals or prev/next.
Regardless of what you're doing with canonical tags or prev/next, your pages should never be accessible at both the www and non-www versions. You're at risk of duplicate content as long as both versions exist.
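If you want to confirm it yourself, here's a rough Python sketch (standard library only, not a Moz tool; the example URLs are just the two homepage variants from this thread, and it assumes nothing is blocking the default Python user agent). It requests each variant without following redirects, so you can see whether both come back 200 (both render, which is the duplicate risk) or whether one returns a 301 pointing at the other. The same check works for http vs https if you add those variants to the list.

```python
# Rough check of which host variants actually render. Standard library only.
# Swap in any page from your site; add https:// variants to test protocols too.
import urllib.request
import urllib.error

VARIANTS = [
    "http://www.countryporchhomedecor.com/",
    "http://countryporchhomedecor.com/",
]

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Return None so urllib raises HTTPError instead of silently following redirects.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirect)

for url in VARIANTS:
    try:
        resp = opener.open(url, timeout=10)
        print(url, "->", resp.status, "(renders as its own page)")
    except urllib.error.HTTPError as err:
        if err.code in (301, 302, 307, 308):
            print(url, "->", err.code, "redirects to", err.headers.get("Location"))
        else:
            print(url, "->", err.code)
```

If both hosts come back 200, a site-wide 301 from one to the other (done at the server level, before any canonical or prev/next markup even comes into play) is the usual fix.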
-
Thank you very much for your response! On the paginated pages, I don't think you're supposed to use the canonical tag; instead you're supposed to use the next/prev tags, which is what I did. The next/prev tags point only to pages without query string values, and those are the pages that are supposed to be indexed. So there shouldn't be individual pages that are separated by query string values, right? They should all point to the non-query-string pages.
Even though I do have both www and non-www pages accessible, every page is using either the canonical tag or, on paginated pages, the next/prev tags. Shouldn't that tell search engines which version to index?
-
Hi,
Regarding the first set of URLs: I took a look at a handful of them, and it's entirely possible that you're getting duplicate notices on those. Rogerbot flags any two pages as duplicates if the source code of those URLs matches at 90% or more. So the pages aren't identical, but they aren't different enough for search engines to tell apart. Most of the products you've listed have little or no content, which means that once you consider the rest of the code involved with the page, it mostly matches the homepage.
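If it helps to see why, here's a rough sketch of that kind of comparison (standard library Python; this is only an approximation of the idea, not how Rogerbot actually computes its score, and the two URLs are just examples pulled from your list). It fetches two pages and reports how similar their raw HTML is line by line; two thin product pages built on the same template will often score very high.

```python
# Approximate page-similarity check: compare the raw HTML of two URLs line by line.
# Only an illustration of near-duplicate detection, not Rogerbot's actual algorithm.
import difflib
import urllib.request

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

url_a = "http://www.countryporchhomedecor.com/"
url_b = "http://www.countryporchhomedecor.com/park-designs/teapots/wonderland-teapot"

ratio = difflib.SequenceMatcher(
    None, fetch(url_a).splitlines(), fetch(url_b).splitlines()
).ratio()
print(f"Source similarity: {ratio:.1%}")
```

Adding more unique product copy to each page is what pushes that number down relative to the shared template.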
Regarding the second set: I ran those URLs through Screaming Frog and don't see any canonical tags. Keep in mind that just because URLs aren't indexed in search engines doesn't mean Rogerbot doesn't have access to them.
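If you want to spot-check this outside of Screaming Frog, here's a rough sketch (standard library Python; the URL is just one of the examples you posted, and it assumes the tags are emitted as <link> elements in the served HTML rather than injected by JavaScript). It fetches a page and prints any canonical, prev, or next link elements it finds; if nothing prints, the tags aren't in the source that crawlers are actually receiving.

```python
# Print the rel="canonical" / "prev" / "next" <link> elements a URL actually serves.
from html.parser import HTMLParser
import urllib.request

class RelLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = []  # list of (rel, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower()
        if rel in ("canonical", "prev", "next"):
            self.found.append((rel, attrs.get("href")))

url = "http://www.countryporchhomedecor.com/bedding-&-quilts/shams/standard-shams?pi=18&page=3"
with urllib.request.urlopen(url, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

parser = RelLinkParser()
parser.feed(html)
print(parser.found or "No canonical/prev/next link tags found in the served HTML")
```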
*Update - on further digging, I think I found the source of all of your duplicate issues. Both the www and non-www versions of your URLs are accessible. One of them should redirect to the other (it doesn't matter which), but both should not render.
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million) content silo that is the basis for our long tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent/purpose of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers. So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. In our most popular product categories, there might be 100,000s of products in one category. My top level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products and then pagination from 1-1000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make each page reference itself as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and the SEOMoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to get to multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category. I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top level categories? Thanks!
Moz Pro | DrewProZ1
-
Update in Moz spider/tools?? Flagging duplicate content / ignoring canonical
Hi all, has there been an update to the SEOmoz crawling software? We now have thousands of duplicate content/page title warnings for paginated product page URLs that have correctly formatted canonicals. e.g. http://www.woolovers.com/british-wool/mens/tweed-green/wool-countryman-suede-patch-sweater.aspx ... has the following pages with identical content that have been flagged: http://www.woolovers.com/british-wool/mens/olive-green/wool-countryman-suede-patch-sweater.aspx?p=true&rspage=4 http://www.woolovers.com/british-wool/mens/olive-green/wool-countryman-suede-patch-sweater.aspx?p=true&rspage=6 http://www.woolovers.com/british-wool/mens/olive-green/wool-countryman-suede-patch-sweater.aspx?p=true&rspage=4 ...plus 4 more URLs. But they all have a canonical set. There's even a notice at the bottom of the report that tells us there's a canonical set to http://www.woolovers.com/british-wool/mens/tweed-green/wool-countryman-suede-patch-sweater.aspx. What gives, SEOmoz? Thanks, Michael
Moz Pro | LawrenceNeal0
-
SEOmoz duplicate content checker
From my reports in SEOmoz I can see pages that are showing as having duplicate content, but when I click on them it does not show me which pages are carrying the duplicate content. Is there any way to check this via SEOmoz reports?
Moz Pro | jazavide0
-
Domain.com and domain.com/index.html duplicate content in reports even with rewrite on
I have a site that was recently hit by the Google Penguin update and dropped a page back. When running the site through SEOmoz tools, I keep getting duplicate content in the reports for domain.com and domain.com/index.html, even though I have a 301 rewrite condition. When I test the site, domain.com/index.html redirects to domain.com for all directories and the root. I don't understand how my index page can still get flagged as duplicate content. I also have a redirect from domain.com to www.domain.com. Is there anything else I need to do or add to my .htaccess file? Appreciate any clarification on this.
Moz Pro | anthonytjm0
-
Port 80 and Duplicate Content
The SEOmoz Web App is showing me that every single URL on one of my clients' domains has a duplicate in the form of the URL + :80. For instance, the app is showing me that www.example.com/default.aspx is duplicated in the form of www.example.com:80/default.aspx. Any idea if this is an actual problem or just some kind of reporting error? Any help would be appreciated.
Moz Pro | AnthonyMangia0
-
Crawl diagnostic Notices for rel Canonical increased
Hello, we just signed up for SEOmoz and are reviewing the results of our second web crawl. Our Errors and Warnings summaries have been reduced, but our Notices for rel canonical have skyrocketed from 300 to over 5,500. We are using a WordPress site with the Headway theme, and our pages already have rel=canonical along with rel=author. Any ideas why this number would go up so much in one week? Thank you, Michael
Moz Pro | MKaloud0
-
How to remove Duplicate content due to url parameters from SEOMoz Crawl Diagnostics
Hello all, I'm currently getting back over 8000 crawl errors for duplicate content pages. It's a Joomla site with VirtueMart, and 95% of the errors are for parameters in the URL that the customer can use to filter products. Google is handling them fine under Webmaster Tools parameters, but it's pretty hard to find the other duplicate content issues in SEOMoz with all of these in the way. All of the problem parameters start with "?product_type_". Should I try to use robots.txt to stop them from being crawled, and if so, what would be the best way to include them in robots.txt? Any help greatly appreciated.
Moz Pro | dfeg0