Why do my crawl diagnostics show duplicate content?
-
My crawl diagnostics show duplicate content at mysite.com and mysite.com/index.html, which are essentially the same file.
-
Michel is right: Google doesn't care that both URLs serve one template. If both URLs are being crawled, Google will see them as two "pages" - every unique, crawlable URL can become an indexed page. That's why duplicate content problems are so common.
The good news is that you can put a canonical tag on just the one template/file and it will cover all of the paths/URLs that resolve to that file. The tag goes in your <head> section and looks like this: <link rel="canonical" href="http://mysite.com/" />
I'd check the internal links, though, and see if you're linking to both versions. It's best to use one, consistent URL in your internal links for any given page.
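Another common fix, complementary to the canonical tag, is to 301-redirect the /index.html variant to the root URL so crawlers only ever see one version. As a rough sketch - assuming an Apache server with mod_rewrite enabled, which the thread doesn't confirm - the .htaccess rules would look something like:

```apache
# Hypothetical .htaccess rules: permanently redirect requests for
# /index.html to the root URL so only one home page version is crawled.
RewriteEngine On
# Only match direct client requests, not internal DirectoryIndex lookups
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.html
RewriteRule ^index\.html$ / [R=301,L]
```

On other servers (nginx, IIS) the equivalent redirect is configured differently, but the principle is the same: pick one URL and send everything else to it permanently.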
-
mysite.com is a domain, not a file; mysite.com/index.html is the home page. I'm not sure how I would do what you suggest.
-
If the crawl report found those two URLs, then your website has at least one link to each of them (otherwise Rogerbot wouldn't have found them).
You should follow Collin's advice to define the canonical page.
It also won't hurt to figure out where those links appear in your content, and then make sure you use only one of them to point to your page.
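To illustrate the "find where those links are used" step, here is a small sketch (my own illustration, not from the thread - the function names and the list of URL variants are assumptions) that scans a page's HTML for links to either home page variant, using only the Python standard library:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values from every <a> tag in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def homepage_variants(html_text):
    """Return which home-page URL variants a page links to."""
    parser = LinkCollector()
    parser.feed(html_text)
    # Hypothetical list of variants to watch for; adjust for your site.
    variants = {
        "http://mysite.com/",
        "http://mysite.com/index.html",
        "/",
        "/index.html",
    }
    return {link for link in parser.links if link in variants}
```

Running `homepage_variants()` over each crawled page would show you which templates still link to the /index.html version, so you can standardize on one URL.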
Cheers
Michel
-
"Essentially" the same file isn't the same as "the same file." Your best bet is probably to mark one of them (probably mysite.com) with rel=canonical.