Duplicate Page and Title Issues
-
On the last crawl, we received errors for duplicate page titles and some duplicate content pages.
Here is the issue:
We went through the page titles that were marked as duplicate and changed them to make sure each title was different. However, we just received a new crawl this week, and it says even more duplicate page title errors were detected than before. We're wondering if this is a problem with just us or if it has been happening to other Moz users.
As for the duplicate content pages, what is the best way to approach this and see what content is being looked at as a "duplicate" set?
-
I am being told I have hundreds of duplicate titles and content on my site. That's not possible. Not only do I have just 120-ish posts on my site, all with different titles, but Google Search Console shows me that no one on the web is claiming my posts as theirs (i.e., no one is saying they are the originals). So what the heck is the issue? Why am I being told that posts that are NOT duplicates of anything are duplicate content, and that I have a bunch of duplicate titles when NONE of my titles duplicates any other title?
And, how can I have 400 pieces of duplicate content when I only have 120 blog posts??
Please someone help me with this
-
Always glad to help!
-
Thank you for your help!
-
Hi there!
I took a look at your campaign, and it seems that we crawled a lot more pages of the site this week in general than we did the previous week. Two weeks ago we crawled about 500 pages; this week the crawl jumped up to 2,800. So it isn't that there are more duplicate pages on your site now; it's just that we are able to crawl more pages than we were before, which led to us finding duplicate pages that had not been reported previously.
As for why we are crawling so many more pages, there could be a few reasons. If you changed the links on your site or the page hierarchy of the site, removed noindex tags, or updated the robots.txt file for the site, any of those things could affect how much of the site we are able to crawl.
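For anyone wanting to check this on their own site, these are the two mechanisms mentioned above. The path and rule below are placeholders for illustration, not anything from this thread:

```text
# robots.txt at the site root — deleting a Disallow rule like this
# can suddenly make an entire directory crawlable, which is one way
# a weekly crawl can jump from hundreds to thousands of pages:
User-agent: *
Disallow: /archive/
```

The per-page equivalent is a `<meta name="robots" content="noindex">` tag in the page's `<head>`; removing that has the same effect, but only for that single page.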
I hope this helps! Please let me know if I can help you with anything else.
Chiaryn
Help Team Sensei
-
Hey There!
I responded to your support request with more information on how we view canonicals. I hope it was helpful!
If you can send me examples of pages that you believe are being counted incorrectly, that will help me determine if there is an issue with the crawl or if the pages correctly fall into how we determine duplicates based on the canonical tags.
I look forward to hearing back from you there soon!
Chiaryn
Help Team Sensei
-
They have different purposes, but they all relate to preventing the same page from showing duplicate content because of the different ways a URL can be written. Nothing on that page shows that a canonical tag can take two different pages and remove the similar content. The purpose of the canonical tag is to set the preferred URL for one page.
-
The problem is, our content is unique to each page and post, yet it still keeps coming up as duplicated. I'll have to try Netrepid's Google search suggestion to double-check these.
-
I would remove the domain name when you use the query.
"moz keyword research in the united states best practice" gets a much different SERP than "keyword research in the united states best practice".
If you are trying to get a pure SERP result, then you shouldn't use your domain name; leaving it out will tell you if there are any other matching results on the web. If you want to find duplicate content on your own site, use copyscape.com or go to GWT (Google Webmaster Tools) and look for internal duplicate content.
Again, it isn't only copied text that triggers a duplicate content message. A skewed HTML-to-text ratio, or repetitive links in the HTML without enough copy to balance them out, can do it too. If Moz is reporting duplicate content errors, and the number is increasing week to week, I wouldn't discredit the finding simply because you don't understand why the error is occurring. The canonical tag won't prevent two different URLs from showing duplicate content. If you want to do that, nofollow one of the URLs. That isn't best practice, though; best practice is to fix the copy.
-
Hey Monica... canonical tagging can be used for a lot more than just 'www' or non-www:
-
Yeah, the weird thing I am noticing is that the canonical tags are already present on this domain, and it's not picking up ALL of the pages with canonicals as duplicates.
I'm not really sure what's going on, but when in doubt, I always Google the following query:
site:yourdomain.com intext:"a block of text unique to that page"
If there is a duplicate content issue, Google should tell you by showing more than one result. If it shows only one, then Google is reading your code correctly.
I know we all love Moz and want them to show we have no errors on our sites, but at the end of the day... don't we really want Google to find no issues with our sites, not Moz?
-
Canonical tags just point the non-www URL to the www address (or vice versa). That tells the engines that whether or not the www is used, the two URLs are the same page. It will only solve the duplicate content errors if that is in fact what is causing them. If the actual cause is duplicated copy, the only way to solve it is to write unique copy.
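For reference, here's what the tag being discussed looks like. This is a minimal sketch with a placeholder domain, covering the www/non-www case described above:

```html
<!-- Placed in the <head> of BOTH https://example.com/page and
     https://www.example.com/page, this tells engines which URL
     is the preferred (canonical) version of the page: -->
<link rel="canonical" href="https://www.example.com/page">
```

Note it only consolidates different URLs for the same content; it doesn't make two genuinely different pages with similar copy stop being similar.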
-
Thank you for that information!
-
No Magento. We were debating the use of canonical tags ourselves.
-
I'm noticing a similar issue for an online store I consult for. We added canonical tags to all product pages and category pages, and Moz doesn't appear to be correctly attributing them.
Are you using a Magento store by chance?
-
Depending on when you made the changes, it could just be that they weren't in place in time for the next crawl. Fixing these duplicate titles is really important for your SEO.
Open the medium priority issues and filter to duplicate page titles only. When you do that, you will see which titles are duplicated. Sometimes there is more than one duplicate per title, so make sure you completely expand each line. I would go back into all of them and check whether what shows as duplicate on the crawl report still matches what is in your title tags. If it matches, then those pages need to be changed. If the titles are different, then wait another week and see if the timing was just off somehow.
If you have pages with duplicate content, there could be a few things triggering it. There could be too much of the same HTML and not enough text to make the pages look different, or the content on the pages could be very thin and very similar. The best way to offset duplicate page errors on your site is to get original, informative, unique content on those pages. You can set your crawl report to high priority errors, then select duplicate page content. You can then look at all of the duplicate pages side by side to determine whether you can get unique content on them. If you are getting duplicate page errors for the same web page, one with a www and one without, check to make sure your rel=canonical tags are in place and functioning properly. If the pages are different, then you need to get great content up.
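To make "pages look different" a bit more concrete, here's a rough sketch of how a crawler might flag two pages as duplicates: strip the HTML boilerplate, then compare only the remaining visible text. This is an illustrative approximation, not Moz's actual algorithm; the sample pages and the similarity threshold are made up:

```python
# Sketch: flag likely duplicate-content pairs by comparing visible text.
# Pages with lots of shared template HTML but thin, similar copy will
# score high here even if their URLs and markup differ.
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)


def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(" ".join(parser.parts).split())


def shingles(text: str, n: int = 3) -> set:
    """Overlapping n-word sequences, with basic punctuation stripped."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def similarity(html_a: str, html_b: str) -> float:
    """Jaccard similarity of word shingles between two pages' visible text."""
    a, b = shingles(visible_text(html_a)), shingles(visible_text(html_b))
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


page1 = "<html><body><p>Our blue widget ships free and includes a two year warranty for every order.</p></body></html>"
page2 = "<html><body><p>Our blue widget ships free and includes a two year warranty for every order placed today.</p></body></html>"
page3 = "<html><body><p>Read our guide to choosing the right widget size for small apartments.</p></body></html>"

print(similarity(page1, page2))  # high score: likely flagged as duplicates
print(similarity(page1, page3))  # 0.0: clearly distinct content
```

A made-up cutoff like 0.8 would flag the first pair; the fix, as noted above, is unique copy, not markup tweaks.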
-
Hi there,
See here - I think Moz Analytics/PRO don't process rel=prev/next properly, so they may give false alarms on those pages, even if the titles are properly implemented.
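For context, rel=prev/next is the markup used on paginated series. A minimal sketch with placeholder URLs (not from this thread):

```html
<!-- In the <head> of page 2 of a paginated category (placeholder URLs).
     These tags tell engines the pages form one sequence: -->
<link rel="prev" href="https://example.com/category?page=1">
<link rel="next" href="https://example.com/category?page=3">
```

If a crawler ignores these tags, pages 2, 3, and so on of a series can look like duplicates of page 1, since paginated pages often share similar titles and template content.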
Cheers