Duplicate Content for index.html
-
In the Crawl Diagnostics Summary, it says that I have two pages with duplicate content, which are:
I read in a Dreamweaver tutorial that you should name your home page "index.html" and then you can let www.mywebsite.com automatically direct the user to index.html. Is this a bug in SEOmoz's crawler, or is it a real problem with my site?
Thank you,
Dan
-
The code should definitely go into the .htaccess file in the website's root directory. However, .htaccess can be finicky; a few days ago I ran into a similar issue with a client's website, and I was able to remedy it with a variation of the code:
# Index redirect
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^/]+/)*index\.(php|html|htm|asp)\ HTTP/
RewriteRule ^(([^/]+/)*)index\.(php|html|htm|asp)$ http://yoursite.com/$1 [R=301,L]
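For example (using hypothetical paths), with that variation in place a request for http://yoursite.com/blog/index.html should come back as a 301 redirect to http://yoursite.com/blog/, and http://yoursite.com/index.html should redirect to http://yoursite.com/, while the clean URLs themselves are served normally.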
If you give me the URL for the site I will take a look at it and let you know what would be feasible.
-
Hi Daniel, can you share with us the URL of your site? We can take a look at it and give you a more precise answer that way. Thanks!
-
I eventually figured out that your method was a 301 redirect, and I definitely broke my site trying to use the code you posted... haha. It's OK though; I just removed the code and it went back to normal. At first I was editing the .htaccess file in the public_html folder, which wasn't working. Then I tried the root folder for the site (I created the .htaccess file since it did not exist). Neither of those worked. (I am using Bluehost, so I do not think I have root access, and I am not sure whether it is a Linux server.)
If there is an easy way to explain what I am doing wrong, please do so. Otherwise, I will use the canonical tag.
Thanks for everything!
-
@Dan
Sorry about the delay in responding; I didn't realize right away that you were asking me a question. Placing the code from my previous answer will cause a 301 (permanent) redirect to the original URL. That's what the
[R=301,L]
portion of the code specifies: R is the redirect and 301 is the status code it refers to. After reviewing the Matt Cutts video, I realize I should have asked whether you were operating on a Linux server that you have root access to. We actually utilize both redirects and canonical tags, since that was recommended by the on-page optimization reports. Heck, Google uses them too, I would assume because it's easier for the user to be referred to a single page URL. Obviously, though, if you don't have server header access and are not familiar with .htaccess (you can accidentally break your site), then the canonical solution is appropriate.
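To spell those flags out with a minimal, hypothetical one-line rule (not the full snippet from my earlier answer):
RewriteRule ^index\.html$ http://www.yoursite.com/ [R=301,L]
Here R=301 tells Apache to send an external redirect with HTTP status 301 (permanent), and L makes this the last rule processed for that request.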
-
Josh,
Thanks for your reply. It seems like there are lots of different ways to solve this problem. I just watched this video on Matt Cutts' blog where he discusses his preference for 301 redirects over the rel=canonical tag.
Where would you say your solution fits in?
Thanks,
Dan
-
I use the link rel="canonical" tag on all of my home pages, pointing to http://www.yoursite.com.
-
Oddly enough, I just recently answered this question. The SEOmoz crawler is correct: without a redirect, you can access both versions of the page in your browser.
To resolve this issue, redirect index.html to the root URL by placing the following code into the .htaccess file in your root directory:
Options +FollowSymlinks
RewriteEngine on
# Index rewrite
RewriteRule ^index\.(htm|html|php)$ http://www.yoursite.com/ [R=301,L]
RewriteRule ^(.*)/index\.(htm|html|php)$ http://www.yoursite.com/$1/ [R=301,L]
You can also do the same with the index file in any subdirectories you create, by placing a .htaccess file into those subdirectories and using variations of the above code (see the sketch below). This is how you get nice, tight URLs without the duplicate content issue, like http://www.semclix.com/design/business/.
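As a rough sketch (the /design/ directory name here is just a hypothetical example), a subdirectory's .htaccess might look like this:
# Placed in the /design/ subdirectory's .htaccess
Options +FollowSymlinks
RewriteEngine on
# In a per-directory .htaccess, the pattern matches the path relative to that directory
RewriteRule ^index\.(htm|html|php)$ http://www.yoursite.com/design/ [R=301,L]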
-
It is a real problem, and you need to fix it by canonicalizing your pages.
Those are all various URLs which most likely lead to the same web page. I say "most likely" because these URLs can actually lead to different pages.
You need to tell crawlers and search engines how you organize your site. There are several ways to achieve canonicalization. The method I prefer is to add the following line of code to each page:
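<link rel="canonical" href="http://www.mywebsite.com/" />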
The URL provided should be the preferred URL for your page.