Duplicate Content Resolution Suggestion?
-
The SEOmoz tools are saying there is duplicate content for:
What would be the best way to resolve this "error"?
-
Does having the line:
DirectoryIndex index.html
have any use in addition to the lines you posted?
Thanks.
-
Stephen, I agree with the KISS method. Using an htaccess RewriteCond is not the simplest solution for someone who does not know htaccess syntax. In an effort to fully answer this, here is the typical code we are referring to:
Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^yoursite\.com
RewriteRule (.*) http://www.yoursite.com/$1 [R=301,L]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ http://www.yoursite.com/ [R=301,L]
The first two lines are typical commands to follow symbolic links and to make sure that the rewrite engine is in the on state.
The first RewriteCond looks at the host; if it is not the www. version, the RewriteRule will redirect the visitor to the http://www. version of your site.
The second RewriteCond looks at whether the request is for the index.html file; if it is, the RewriteRule will 301 redirect the visitor to the version without the index.html, just yoursite.com/.
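For anyone who would rather not hard-code the domain, a host-agnostic sketch of the same two redirects (an assumption on my part that the .htaccess file sits in the site root; adjust to taste) would be:
RewriteEngine on
# Send any non-www host to its www equivalent, keeping the requested path
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule (.*) http://www.%{HTTP_HOST}/$1 [R=301,L]
# Strip index.html so only the bare directory URL is ever exposed
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ / [R=301,L]
The [NC] flag just makes the host check case-insensitive, and because the substitution reuses %{HTTP_HOST}, the same rules work unchanged on a staging domain.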
-
I'm a KISS guy. Duplicate content pages should just be handled with a rewrite; then they don't appear in any of your stats, don't attract links, and don't spread your like/tweet numbers over multiple pages, and if you are using XML sitemaps to keep tabs on your indexing, etc., you get a better idea of what's going on on your site.
Also, you have to take into account Facebook likes, page tweets, +1s, etc. Does rel=canonical work on the social graph data?
rel=canonical suits things like sort orders, but if it's a pure duplicate, then 301.
"Don't link to page X on your site" isn't really a good solution in my eyes; there's too much room for error.
-
Completely agree.
I think I may have been slightly confused by thinking the default for www.mydomain.com/ was not index.html.
-
Yes, based on the original posting your impression is correct and they are the same page, but you can't 301 an index.html page to the domain when index.html is the page that shows by default.
You could use an htaccess RewriteCond, but that could be a little overkill for this situation, where adding a canonical will solve it.
-
I was under the impression that www.mydomain.com/ and www.mydomain.com/index.html were both being indexed but are the same page.
-
If the index.html is their home page, the one that shows up when just requesting the domain: http://www.mydomain.com/
then what would you 301 to? Are you assuming that it is a site that is using an index.php, index.htm, index.asp, etc.?
PlasticCards stated that there is duplicate content, therefore the index.html page actually exists and should use a canonical.
-
I think in that particular situation I would use a 301, as there really isn't a separate use for the /index.html page.
-
Or use a 301 redirect in your .htaccess; then you don't have to worry about link issues, etc.
-
Use a rel='canonical' and use the non-index.html URL for the href;
also, don't link to index.html from anywhere.
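For example, a minimal sketch of that tag (assuming www.mydomain.com from this thread is the preferred URL), placed in the head of index.html:
<link rel="canonical" href="http://www.mydomain.com/" />
That tells the engines to consolidate any signals for /index.html onto the root URL, even if someone out there still links to the longer version.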