Can duplicate content issues be solved with a noindex robots meta tag?
-
Hi all
I have a number of duplicate content issues arising from a recent crawl diagnostics report.
Would using a robots meta tag (like below) on the pages I don't mind being left out of the index be an effective way to solve the problem?
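To be clear, the tag I have in mind is the standard noindex form (reproduced here since the snippet didn't paste):

```html
<meta name="robots" content="noindex">
```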
Thanks for any and all replies!
-
This is an old question, and the answer is yes. In fact, a page blocked in robots.txt can still be indexed if that same page is linked from an external site. Check this old Webmaster Help thread: http://www.google.com/support/forum/p/Webmasters/thread?tid=3747447eb512f886&hl=en That is why it is always better to use the meta robots noindex tag when you really want to be sure a page is not indexed.
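For example, a robots.txt rule like this one (the path is just an example) only blocks crawling; it does not guarantee the URL stays out of the index if other sites link to it:

```
User-agent: *
Disallow: /duplicate-page/
```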
-
Yes it would, but I would rather use the canonical tag. All pages have PageRank, and even weak pages help your site rank better. Google once released their PageRank formula; since then they have changed it many times, but from testing we know that the main idea still holds true. Pages not in the index cannot add to your site's PageRank. Take a look at this page, it explains it very well: http://www.webworkshop.net/pagerank.html Use the calculator; it is very intuitive.
-
Using a noindex meta tag is one way to resolve duplicate content issues. If you take this approach, you most likely want to use only the "noindex" directive and not "nofollow": you don't want to prevent Google from following the links on the page, but simply stop the content from being viewed as duplicate.
If you wish to explicitly include "follow" you can, but it is unnecessary since it is the default setting.
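In HTML, the two options discussed in this thread look like the following (the canonical URL is a placeholder):

```html
<!-- Keeps the page out of the index while still letting Google follow
     its links. "follow" is the default, so listing it is optional. -->
<meta name="robots" content="noindex, follow">

<!-- The alternative suggested above: point duplicate pages at the
     preferred version instead of de-indexing them. -->
<link rel="canonical" href="http://www.example.com/preferred-page/">
```

Both tags go inside the page's `<head>`; the meta tag removes the page from the index, while the canonical tag consolidates ranking signals on the preferred URL.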
Related Questions
-
Consolidating a Large Site with Duplicate Content
I will be restructuring a large website for an OEM. They provide products & services for multiple industries, and the product/service offering is identical across all industries. I was looking at the site structure and ran a crawl test, and learned they have a LOT of duplicate content out there because of the way they set up their website. They have a page in the navigation for “solution”, aka what industry you are in. Once that is selected, you are taken to a landing page, and from there, given many options to explore products, read blogs, learn about the business, and contact them. The main navigation is removed. The URL structure is set up with folders, so no matter what you select after you go to your industry, the URL will be “domain.com/industry/next-page”. The product offerings, blogs available, and contact us pages do not vary by industry, so the content that can be found on “domain.com/industry-1/product-1” is identical to the content found on “domain.com/industry-2/product-1”, and so on and so forth. This is a large site with a fair amount of traffic because it’s a pretty substantial OEM. Most of their content, however, is competing with itself because most of the pages on their website have duplicate content. I won’t begin my work until I can dive into their GA and have more in-depth conversations with them about what kind of activity they’re tracking and why they set up the website this way. However, I don’t know how strategic they were in this setup and I don’t think they were aware that they had duplicate content. My first thought would be to work towards consolidating the way their site is set up, so we don’t spread the link equity of “product-1” content, and direct all industries to one page, and track conversion paths a different way.
However, I’ve never dealt with a site structure of this magnitude and don’t want to risk messing up their domain authority, missing redirect or URL mapping opportunities, or ruining the fact that their site is still performing well, even though multiple pages have the same content (most of which have high page authority and search visibility). I was curious if anyone has dealt with this before and if they have any recommendations for tackling something like this?
On-Page Optimization | cassy_rich
-
Duplicate Content - Pricing Plan tables
Hey guys, We're faced with a problem that we want to solve. We're designing a few pages for a drag & drop email builder we're currently building, and we will be having the same pricing table on several pages (much like Moz does). We're worried that Google will treat this as duplicate content and not be very fond of it. Any ideas about how we could keep the same flow without potentially harming ranking efforts? And NO, re-writing the content for each table is not an option. It would do nothing but confuse the heck out of our clients. 😄 Thanks everybody!
On-Page Optimization | andy.bigbangthemes
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but these pages are noindex
Saw an issue back in 2011 about this and I'm experiencing the same issue. http://moz.com/community/q/issue-duplicate-page-content-in-crawl-diagnostics-but-these-pages-are-noindex We have pages that are meta-tagged as no-everything for bots but are being reported as duplicate. Any suggestions on how to exclude them from the Moz bot?
On-Page Optimization | Deb_VHB
-
Duplicate content on domains we own
Hello! We are new to SEO and have a problem we have caused ourselves. We own two domains GoCentrix.com (old domain) and CallRingTalk.com (new domain that we want to SEO). The content was updated on both domains at about the same time. Both are identical with a few exceptions. Now that we are getting into SEO we now understand this to be a big issue. Is this a resolvable matter? At this point what is the best approach to handle this? So far we have considered a couple of options. 1. Change the copy, but on which site? Is one flagged as the original and the other duplicate? 2. Robots.txt noindex, nofollow on the old one. Any help is appreciated, thanks in advance!
On-Page Optimization | CallRingTalk
-
Issue: Rel Canonical
My SEO report shows issues: Rel Canonical. I have a WordPress website; each page has its own content, but I'm getting errors from my SEOmoz report. I installed the Yoast plugin to fix the issue but I'm still getting 29 errors. WordPress 3.4.1
On-Page Optimization | mobiledudes
-
Duplicate content
The crawler shows the following links as duplicates:
http://www.mysite.com
http://mysite.com
http://www.mysite.com/
http://mysite.com.
http://mysite.com/index.html
How can I solve this issue?
On-Page Optimization | bhanu2217
-
Duplicate Page Content Question
This article was published on fastcompany.com on March 19th. http://www.fastcompany.com/magazine/164/designing-facebook It did not receive much traffic, so it was re-posted on Co.Design today (March 27th), where it has received significantly more traffic. http://www.fastcodesign.com/1669366/facebook-agrees-the-secret-to-its-future-success-is-design My question is if Google will dock us for reprinting/reusing content on another site (even if it is a sister site within the same company). If they do frown on that, is there a proper way to attribute the content to the source material/site (fastcompany.com)?
On-Page Optimization | DanAsadorian
-
How to solve Rel Canonical issue?
I have created a campaign for my website on SEOmoz and found a Rel Canonical issue for the following 2 URLs. I cannot understand what the error is. Can anyone help me solve it? http://vistastores.com/blog/?p=1 http://vistastores.com/blog/?page_id=2
On-Page Optimization | CommercePundit