SEOmoz suddenly reporting duplicate content with no changes???
-
I am told the crawler has been updated and want to know whether anyone else is seeing the same thing I am.
SEOmoz reports show many months with no duplicate content problems. As of last week, though, a little over a thousand pages are being reported as duplicate content errors.
Checking these pages, I find similar content (which hasn't changed) with keywords that are definitely different. Many of these pages rank well in Google, yet SEOmoz is calling them out as duplicate content. Is SEOmoz trying to closely mirror Google's perspective here, and therefore telling me that I seriously need to rework the similar content?
Anyone else seeing something like this?
-
Hi
We are seeing an extreme rise in the duplicate content reported for our site too. If the sensitivity is adjusted, will these graphs come back down?
Also, how do you think Google sees a webshop with lots of products and filter options? Our site www.dmlights.com/massive, for example, can generate a lot of filtered URLs, but we try to counter this with the URL parameter settings in Webmaster Tools. Would you recommend adjusting this for good SEO?
Curious to hear your opinions. Thanks.
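For reference, here is a rough sketch of the consolidation we are trying to achieve with those parameter settings (the filter parameter names below are invented, not our real ones):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

FILTER_PARAMS = {"color", "brand", "wattage"}   # invented filter names

def base_url(url):
    """Drop filter parameters so every filtered view maps back to one
    category URL, the consolidation that Webmaster Tools' URL-parameter
    settings (or a rel=canonical tag) are meant to signal."""
    scheme, netloc, path, query, frag = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query) if k not in FILTER_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), frag))

print(base_url("http://www.dmlights.com/massive?color=white&page=2"))
# -> http://www.dmlights.com/massive?page=2
```

The idea is that every filtered variation points search engines back at the same base category page instead of competing with it.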
-
Hey Scott,
Again, we're sorry about the odd jump in duplicate content errors!
We just launched a new crawler, and right now it is being overly sensitive to duplicate content. At the moment we are picking up duplicate pages on your domain via:
https clones of URLs
Pages with and without a trailing "/" on the URL
Some rel=canonical directives being ignored by the crawler
Other users are seeing the same issue with their crawls. Our engineers have made changes to scale back the crawler's sensitivity on these points, and you should see the difference within a week or two.
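To illustrate why those variants trip a duplicate check, here is a rough sketch of the kind of URL normalization a crawler applies before comparing pages (an illustration only, not our actual crawler code):

```python
from urllib.parse import urlsplit

def dedupe_key(url):
    """Collapse http/https clones and trailing-slash variants onto one
    key, roughly how a crawler decides two URLs are the 'same' page."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"              # /page/ -> /page
    return (parts.netloc.lower(), path, parts.query)  # scheme ignored

# All four variants of one page reduce to a single key:
variants = [
    "http://example.com/page",
    "http://example.com/page/",
    "https://example.com/page",
    "https://example.com/page/",
]
assert len({dedupe_key(u) for u in variants}) == 1
```

When that normalization is missing, or rel=canonical is ignored, each variant shows up as a separate page with identical content, which is exactly what the reports were flagging.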
We're really sorry for the confusion.
Best of Luck,
Chiaryn
-
Two good suggestions so far, and I had already checked both. Thanks KJ Rogers and Ryan Kent.
This is starting to look like it boils down to whether the new SEOmoz crawler sees content the same way Google does.
We did not make any site-wide changes, and the URLs identified as duplicate in the report are valid URLs that really do hold similar content (keywords and so forth were varied for each version of a slightly different product, using an Excel CONCATENATE construct to build the copy). We have actually watched these pages climb in rank over the months since the content was added.
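For illustration, the build worked along these lines (the template text and column names here are invented, not our actual spreadsheet):

```python
import csv

# Invented template: each product row swaps its own keywords into
# otherwise shared boilerplate, which is why the pages read as similar
# while the keywords differ.
TEMPLATE = ("The {name} delivers {wattage}W of warm light, a good fit "
            "for {room} settings and {finish} interiors.")

with open("products.csv", newline="") as f:
    for row in csv.DictReader(f):   # columns: name, wattage, room, finish
        print(TEMPLATE.format(**row))
```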
So, as I said, the sudden identification of these pages as duplicates by the Moz crawler is suspicious to me. I'm not sure it sees things the way Google does.
-
Without examining your site and the pages involved, it is not possible for me to give specific feedback.
Is it possible you made any recent site-wide changes? Changes to your header, navigation, footer, or sidebar could have pushed you past a certain threshold of duplicate content, which triggered a flag.
-
I got the same thing last week. I later found that mine, which uses dynamic content on the same page, had special characters in the URL that were sending crawlers to an error page. The error page was showing a list of pages with the URLs capitalized. I was able to fix some of them, but it scared the heck out of me.
I had to run a crawl test from SEOmoz to work out what was going on. Perhaps you have something similar?
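In case it helps anyone hitting the same thing, percent-encoding the special characters is the usual fix for URLs that send crawlers to error pages (a general sketch, not the exact fix I used):

```python
from urllib.parse import quote

# Percent-encode special characters in a path so crawlers receive a
# valid URL instead of landing on an error page.
raw = "/products/lamp & shade (50% off)"
print(quote(raw))  # -> /products/lamp%20%26%20shade%20%2850%25%20off%29
```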