Noindexing Duplicate (non-unique) Content
-
When "noindex" is added to a page, does this ensure Google does not count page as part of their analysis of unique vs duplicate content ratio on a website? Example: I have a real estate business and I have noindex on MLS pages. However, is there a chance that even though Google does not index these pages, Google will still see those pages and think "ah, these are duplicate MLS pages, we are going to let those pages drag down value of entire site and lower ranking of even the unique pages". I like to just use "noindex, follow" on those MLS pages, but would it be safer to add pages to robots.txt as well and that should - in theory - increase likelihood Google will not see such MLS pages as duplicate content on my website?
On another note: I had these MLS pages indexed, and 3-4 weeks ago I added "noindex, follow". However, they are all still indexed, with no sign yet that Google is noindexing them.
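For reference, the "noindex, follow" directive discussed above is a single robots meta tag in each page's head. A minimal sketch:

```html
<!-- Minimal sketch of a "noindex, follow" robots meta tag.
     "noindex" asks search engines to keep the page out of their index;
     "follow" asks them to still crawl the page's links and pass value through them. -->
<head>
  <meta name="robots" content="noindex, follow">
</head>
```

One caveat: if a page is also disallowed in robots.txt, crawlers can't fetch it to see this tag at all, which is one reason combining the two directives is usually counterproductive.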
-
Canonical pages don't have to be identical.
Google will merge the content so it looks like one page.
Good luck
-
thx, Alan. I am already using rel=next/prev. However, that means all those paginated pages will still be indexed. I am adding "noindex, follow" to pages 2-n and only leaving page 1 indexed. Canonical: I don't think that will work. Each page in the series shows different properties, which means pages 1-n are all different.
-
OK, if you use follow, that will be fine, but I would be looking at canonical or next/previous first.
-
I am trying to rank for those duplicate-looking MLS pages, since that is what users want (they don't want my guide pages with lots of unique data when they are searching "...for sale"). I will add unique data to page 1 of these MLS result pages. However, pages 2-50 will NOT change (they will stay duplicate-looking). If I have pages 1-50 indexed, the unique content on page 1 may look like a drop in the ocean to Google, and that is why I feel including "noindex, follow" on pages 2-50 may make sense.
-
That's correct.
You won't rank for duplicate pages, but unless most of your site is duplicate you won't be penalized.
-
http://moz.com/blog/handling-duplicate-content-across-large-numbers-of-urls - that's Rand's Whiteboard Friday from a few weeks ago, and I quote from the transcript:
"So what happens, basically, is you get a page like this. I'm at BMO's Travel Gadgets. It's a great website where I can pick up all sorts of travel supplies and gear. The BMO camera 9000 is an interesting one because the camera's manufacturer requires that all websites which display the camera contain a lot of the same information. They want the manufacturer's description. They have specific photographs that they'd like you to use of the product. They might even have user reviews that come with those.
Because of this, a lot of the folks, a lot of the e-commerce sites who post this content find that they're getting trapped in duplicate content filters. Google is not identifying their content as being particularly unique. So they're sort of getting relegated to the back of the index, not ranking particularly well. They may even experience problems like Google Panda, which identifies a lot of this content and says, "Gosh, we've seen this all over the web and thousands of their pages, because they have thousands of products, are all exactly the same as thousands of other websites' other products."
-
There is nothing wrong with having duplicate content. It becomes a problem when you have a site that is all or almost all duplicate or thin content.
Having a page that is on every other competitor's site will not harm you; you just may not rank for it.
But noindexing can cause a loss of link juice, as all links pointing to non-indexed pages waste their link juice. Using "noindex, follow" will recover most of it, but there is still no need to noindex.
-
http://www.honoluluhi5.com/oahu-condos/ - this is an "MLS result page". That URL will soon have some statistics and will be unique (I will include it in the index). All the paginated pages (2 to n) hardly have any unique content. It is a great layout and users love it (in my AdWords campaign, the average user spends 9 minutes and views 16 pages on the site), but since these are MLS listings (shared among thousands of Realtors), Google will see "ah, these are duplicate pages, nothing unique". That is why I plan to index page 1 (the URL above) but keep all paginated pages (like http://www.honoluluhi5.com/oahu-condos/page-2) as "noindex, follow". Also, I want to rank for this URL: http://www.honoluluhi5.com/oahu/honolulu-condos/ - it is a sub-category of the first URL, and 100% of its content is exactly the same as the first URL's. So I will focus on indexing just the 1st page and not the paginated pages. Unfortunately, Google cannot see the value in layout and design, and I can see how keeping all pages indexed could hurt my site.
Would be happy to hear your thoughts on this. I launched the site 4 months ago with more unique, quality content than 99% of the other firms I am up against, yet nothing has happened ranking-wise. I suspect all these MLS pages are the issue. Time will tell!
-
If you noindex, I don't think next/previous will have any effect.
If they are different, and if the keywords are all important, why noindex?
-
Thx, Philip. I am using it already, but I thought adding "noindex, follow" to those paginated pages (on top of rel=next/prev) would increase the likelihood Google will NOT see all those MLS result pages as a bunch of duplicate content. Page 1 may look thin, but with some statistical data I will soon include, it will be unique, and that uniqueness may offset the lack of indexed MLS result pages. Not sure if my reasoning is sound; would be happy to hear if you feel differently.
-
Sounds like you should actually be using rel=next and rel=prev.
More info here: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
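Assuming the /page-N URL pattern used elsewhere in this thread, the tags on page 2 of a series would look roughly like this (URLs are illustrative):

```html
<!-- Sketch: rel=prev/next tags in the <head> of page 2 of a paginated series.
     Page 1 is assumed to live at the bare category URL, per the examples in
     this thread; URLs are illustrative. -->
<link rel="prev" href="http://www.honoluluhi5.com/oahu-condos/">
<link rel="next" href="http://www.honoluluhi5.com/oahu-condos/page-3">
```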
-
Hi Alan, thx for your comment. Let me give you an example, and if you have a thought, that'd be great:
- Condos on Island: http://www.honoluluhi5.com/oahu-condos/
- Condos in City: http://www.honoluluhi5.com/oahu/honolulu-condos/
- Condos in Region: http://www.honoluluhi5.com/oahu/honolulu/metro-condos/
Properties on the result page for 3) are all within 2), and all properties within 2) are within 1). Furthermore, for each of those URLs, the paginated pages (2 to n) are all different, since each property is different, so using canonical tags would not be accurate. 1 + 2 + 3 are all important keywords.
Here is what I am planning: add some unique content to the first page in the series for each of those URLs and include just that 1st page in the index, while keeping "noindex, follow" on pages 2 to n. The argument against could be "your MLS result pages will look too thin and not rank", but the other way of looking at it is "with potentially 500 or more properties on each URL, a bit of stats on page 1 will not offset all the duplicate MLS data, so even though the page may look thin, only indexing page 1 is the best way forward".
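To make the plan concrete: under this setup, the head of each paginated page (2 to n) would carry both the robots directive and the pagination tags, while page 1 omits the robots tag so it stays indexed. A sketch, with illustrative URLs:

```html
<!-- Sketch of the proposed setup for page 2 of a series: "noindex, follow"
     combined with rel=prev/next. Page 1 (the bare category URL) omits the
     robots meta tag so it remains indexed. URLs are illustrative. -->
<head>
  <meta name="robots" content="noindex, follow">
  <link rel="prev" href="http://www.honoluluhi5.com/oahu/honolulu-condos/">
  <link rel="next" href="http://www.honoluluhi5.com/oahu/honolulu-condos/page-3">
</head>
```

(Note Alan's point above that noindexing pages in a rel=next/prev chain may keep the pagination markup from having any effect.)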
-
Remember that if you no-index pages, any link you have on your site pointing to those pages is wasting its link juice.
This looks like a job for the canonical tag.
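For reference, a canonical tag is a single line in the head of a duplicate page, pointing at the URL you'd prefer to rank. A sketch using the category/sub-category URLs from this thread (whether it fits here is exactly what's being debated, since the paginated pages aren't true duplicates):

```html
<!-- Sketch: a canonical tag placed on the sub-category page, pointing at the
     broader category page whose content it duplicates. URLs are illustrative,
     taken from examples in this thread. -->
<link rel="canonical" href="http://www.honoluluhi5.com/oahu-condos/">
```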
-
lol - good answer, Philip. I hear you. What makes it difficult is the lack of crystal-clear guidelines from the search engines. It is almost like they don't know themselves, and each case is decided on a sort of "what feels right" basis.
-
Good find. I've never seen this part of the help section. The recurring reason behind all of their examples seems to be "You don't need to manually remove URLs; they will drop out naturally over time."
I have never had an issue, nor have I ever heard of anyone having an issue, removing URLs with the Removal Tool. I guess if you don't feel safe doing it, you can wait for Google's crawler to catch up, although that could take over a month. If you're comfortable waiting it out, have no reason to rush, AND feel like playing it super safe... you can disregard everything I've said.
We all learn something new every day!
-
Based on Google's own guidelines, it appears to be a bad idea to use the removal tool under normal circumstances (which I believe my site falls under): https://support.google.com/webmasters/answer/1269119
It starts with: "The URL removal tool is intended for pages that urgently need to be removed—for example, if they contain confidential data that was accidentally exposed. Using the tool for other purposes may cause problems for your site."
-
thx, Philip. Most helpful. I will get on it
-
Yes. It will remove /page-52 and EVERYTHING that exists in /oahu/honolulu/metro/waikiki-condos/. It will also remove everything that exists in /page-52/ (if anything). It trickles down as far as the folders in that directory will go.
**Go to Google search and type this in:** site:honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/
That will show you everything that's going to be removed from the index.
-
Yep, you got it.
You can think of it exactly like Windows folders, if that helps. If you have C:\Website\folder1 and C:\Website\folder12, removing \folder1\ would leave \folder12\ alone, because \folder12\ isn't inside \folder1\; it just has a similar name.
-
For some MLS result pages I have a BUNCH of paginated pages, and I want to remove them from the index with 1 click, as opposed to having to submit each paginated page. Example: for http://www.honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/page-52, I simply submit "/oahu/honolulu/metro/waikiki-condos/" and that will ALSO remove this page from the index: http://www.honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/page-52 - is that correct?
-
removing directory "/oahu/waianae-makaha-condos/" will NOT remove "/oahu/waianae-makaha/maili-condos/" because the silo "waianae-makaha" and "waianae-makaha-condos" are different.
HOWEVER,
removing directory " /oahu/waianae-makaha/maili-condos/" will remove "/oahu/waianae-makaha/maili-condos/page-2" because they share this silo "waianae-makaha"Is that correctly understood?
-
Yep. Just last week I had an entire website deindexed (on purpose; it's a staging website) by entering just / into the box and selecting "directory". By the next morning the entire website was gone from the index.
It works for folders/directories too. I've used it many times.
-
So I will remove the directory "/oahu/waianae-makaha/maili-condos/", and that will ensure removal of "/oahu/waianae-makaha/maili-condos/page-2" as well?
-
thx, Philip. So you are saying that if I use the directory option, the paginated pages will also be taken out of the index, like this page: /oahu/waianae-makaha/maili-condos/page-2
-
I'm not 100% sure Google will understand you if you leave off the slashes. I've always added them and have never had a problem, so you want to type: /oahu/waianae-makaha-condos/
Typing that would NOT include the neighborhood URL, in your example. It will only remove everything that exists in the /waianae-makaha-condos/ folder (including that main category page itself).
edit >> To remove the neighborhood URL and everything in that folder as well, type /oahu/waianae-makaha/maili-condos/ and select the option for "directory".
edit #2 >> I just want to add that you should be very careful with this. You don't want to use the directory option unless you're 100% sure there's nothing in that directory that you want to stay indexed.
-
thx. I have a URL like this for a REGION: http://www.honoluluhi5.com/oahu/waianae-makaha-condos/ and for a "NEIGHBORHOOD" I have this: http://www.honoluluhi5.com/oahu/waianae-makaha/maili-condos/
As you can see, the Region URL has the "waianae-makaha-condos" directory, whereas the Neighborhood URL has "waianae-makaha", without the "condos", for that region part of the path.
Question: when I go to GWT to remove, can I simply type "oahu/waianae-makaha-condos" and select the directory option, and will that ALSO exclude the neighborhood URL? OR, since the region part within the neighborhood URL is different, do I have to submit them individually?
-
Yep! After you remove the URL or directory of URLs, there is a "Reinclude" button you can get to. You just need to switch your "Show:" view so it shows URLs removed. The default is to show URLs PENDING removal. Once they're removed, they will disappear from that view.
-
Good one, Philip. Last BIG question: if I remove URLs via GWT, is it possible to "unremove" them later without issue? I am planning to index some of these MLS pages in the future, when I have more unique content on them.
-
When "noindex" is added to a page, does this ensure Google does not count page as part of their analysis of unique vs duplicate content ratio on a website? Yes, that will tell Google that you understand the pages don't belong in the index. They will not penalize your site for duplicate content if you're explicitly telling Google to noindex them.
Is there a chance that even though Google does not index these pages, Google will still see those pages and think "ah, these are duplicate MLS pages, we are going to let those pages drag down value of entire site and lower ranking of even the unique pages". No, there's no chance these will hurt you if they're set to noindex. That is exactly what the noindex tag is for. You're doing what Google wants you to do.
I like to just use "noindex, follow" on those MLS pages, but would it be safer to add pages to robots.txt as well and that should - in theory - increase likelihood Google will not see such MLS pages as duplicate content on my website? You could add them to your robots.txt but that won't increase your likelihood of Google not penalizing you because there is already no worry about being penalized for pages not being indexed.
On another note: I had these MLS pages indexed and 3-4 weeks ago added "noindex, follow". However, still all indexed and no signs Google is noindexing yet.....
Donna's advice is perfect here. Use the Remove URLs tool. Every time I've used the tool, Google has removed the URLs from the index in less than 12-24 hours. I of course made sure to have a noindex tag in place first. Just make sure you enter everything AFTER the TLD (.com, .net, etc) and nothing before it. Example: You'd want to ask Google to remove /mls/listing122 but not example.com/mls/listing122. The ladder will not work properly because Google automatically adds "example.com" to it (they just don't make this very clear). -
thx, Donna. My question was mainly about whether Google will NOT consider the MLS pages duplicate content once I place the "noindex" on them. We can all guess, but does anyone have anything concrete on this, to help me understand the reality? Can we say with 90% certainty, "yes, if you place noindex on a duplicate content page, then Google will not consider it duplicate content, hence it will not count towards how Google views duplicate vs. unique site content"? This is the big question. If we are left in uncertainty, then the only way forward may be to password-protect such pages and not offer them to users without an account.
Removal in GWT: I plan to index some of these MLS pages in the future (when I get more unique content on them), and I am concerned that once they are submitted to GWT for removal, it will be tough to get them indexed again.
-
Hi khi5,
I think excluding those MLS listings from your site using the robots.txt file would be overkill.
As I'm sure you well know, Google does what it wants. I don't think tagging the pages you don't want indexed with "noindex, follow" AND adding them to the robots.txt file makes the likelihood that Google will respect your wishes any higher. You might want to consider canonicalizing them though, so that links to, bookmarks of, and shares of said pages get credited to your site.
As to how long it takes Google to deindex said pages, it can take a very long time. In my experience, "a very long time" can run 6-8 months. You do have the option, however, of using Google Webmaster Tools > Google Index > Remove URLs to ask to have them deindexed faster. Again, no guarantee that Google will do as you ask, but I've found them to be pretty responsive when I use the tool.
I'd love to hear if anyone else feels differently.