Could a large number of "not selected" pages cause a penalty?
-
My site was penalized for specific pages in the UK on July 28 (coinciding with a Panda update).
I cleaned up my website and wrote to Google and they responded that "no manual spam actions had been taken".
The only other thing I can think of is that we suffered an automatic penalty.
I am having problems with my sitemap: it lists many error pages, empty pages, etc. According to our Index Status report we have 2,679,794 "not selected" pages and only 36,168 pages indexed.
Could this be what caused the penalty?
(If you have any articles to back up your answers, that would be greatly appreciated.)
Thanks!
-
Canonical tags pointing to what? Themselves, or the pages they should point to? Are these pages unique only by URL variables? If so, you can instruct Google to ignore specific GET variables to resolve this issue, but you would also want to fix your sitemap woes: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
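To illustrate what "ignoring specific GET variables" amounts to, here is a rough Python sketch that collapses URL variants by stripping parameters that only change presentation, not content. The parameter names in `IGNORED_PARAMS` are made up for the example; substitute whichever variables actually create duplicates on your site:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical presentation-only parameters -- replace with the GET
# variables that actually create duplicate URLs on your site.
IGNORED_PARAMS = {"sort", "sessionid", "utm_source", "utm_medium"}

def canonical_url(url):
    """Collapse URL variants by dropping presentation-only parameters,
    so every duplicate maps to one canonical form."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))
```

The canonical form this produces is the URL you would point your rel="canonical" tags at, and the only version that should appear in the sitemap.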
This is where it gets sticky: these pages are certainly not helping and are not being indexed (Google Webmaster Tools shows us that), but if you have this problem, how many other technical problems could the site have?
We can be almost certain you have some kind of Panda filter, but to diagnose it further we would need a link and access to analytics to determine what has gone wrong and provide more detailed guidance to resolve the issues.
This could be a red herring and your problem could be elsewhere, but with no examples we can only give very general responses. If this were my site, I would identify the most likely issues and work through them in a pragmatic way, eliminating possibilities one by one.
My advice would be to have the site analysed by someone with hands-on experience of Panda penalties, who can give you specific feedback on the problems and provide guidance to resolve them.
If the URL is sensitive and can't be shared here, I can offer this service and am in the UK. I am sure several other users at SEOmoz can also help; I know Marie Haynes offers this service, and I am sure Ryan Kent could help as well.
Shout if you have any questions or can provide more details (or a url).
-
Hi,
Thanks for the detailed answer.
We have many duplicate pages, but they all have canonical tags on them... shouldn't that solve the problem? Would pages with a canonical tag show up here?
-
Yes, this can definitely cause problems. In fact, this is a common footprint on sites hit by the Panda updates.
It sounds like you have some sort of canonicalisation issue on the site: multiple copies of each page are being crawled. Google is finding lots of copies of the same thing, crawling them, but deciding that they are not sufficiently unique or useful to keep in the index. I've been working on a number of sites hit with the same issue, and cleanup can be a real pain.
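One quick way to sanity-check this at scale is to fetch a sample of the duplicate URLs and look at where their canonical tags actually point: every page in a duplicate group should report the same single canonical URL. Here is a rough standard-library Python sketch of the parsing side (fetching is left out, and the example URLs below are made up):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every <link rel="canonical"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel", "").lower() == "canonical":
            self.canonicals.append(attrs.get("href"))

def find_canonicals(html):
    """Return the list of canonical URLs declared in an HTML document.
    A healthy page should return exactly one."""
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonicals
```

Zero canonicals on a duplicate page, more than one, or canonicals that point at themselves instead of the master version would all be worth flagging.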
The best starting point for reading is probably this article here on SEOmoz : http://www.seomoz.org/learn-seo/duplicate-content . That article includes some useful links on how to diagnose and solve the issues as well, so be sure to check out all the linked resources.
-
Hey Sarah
There are always a lot of moving parts when it comes to penalties, but the very fact that you lost traffic on a known Panda date really points towards this being a Panda-style penalty. Panda is an algorithmic penalty, so you will not receive any kind of notification in Webmaster Tools, and likewise a reconsideration request will not help; you have to fix the problem to resolve the issues.
The not selected pages are likely a big part of your problem. Google classes not selected pages as follows:
"Not selected: Pages that are not indexed because they are substantially similar to other pages, or that have been redirected to another URL. More information."
If you have the best part of 3 million of these pages that are "substantially similar" to other pages, then there is every chance that this is a very big part of your problem.
Obviously, there are a lot of moving parts to this, but it sounds highly likely that this is part of your problem. Just think how it looks to Google: almost 2.7 million pages that are duplicated. It is a low-quality signal, a possible attempt at manipulation, or who knows what else. What we do know is that those pages are unlikely to be a strong result for any search user, so they have been dropped.
What to do?
Well, firstly, fix your sitemap and sort out these duplication problems. It's hard to give specifics without a link to the site in question, but just sort this out. Apply the noindex tag dynamically if need be, remove these duplicates from the sitemap, heck, remove the sitemap altogether for a while until it is fixed if you have to. Just sort out these issues one way or another.
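To illustrate the "apply noindex dynamically" idea, here is a minimal Python sketch of the kind of per-page rule you might use. The page fields (`word_count`, `duplicate_of`) and the 100-word threshold are hypothetical; they just stand in for whatever thin/duplicate signals your CMS actually has:

```python
def robots_meta(page):
    """Decide the robots meta tag for a page.

    `page` is a dict with hypothetical fields: `word_count` for
    thin-content detection, and `duplicate_of` set to the master URL
    when the page is a known duplicate. Adapt to your own CMS signals.
    """
    is_thin = page.get("word_count", 0) < 100        # assumed threshold
    is_duplicate = page.get("duplicate_of") is not None
    if is_thin or is_duplicate:
        # Keep "follow" so link equity still flows through the page.
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'
```

The template layer would then emit this tag in the head of every page, so the duplicates drop out of the index as they are recrawled without you touching each URL by hand.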
Happy to give more help here if I can but would need a link or some such to advise better.
Resources
You asked for some links, but I am not completely sure what to provide without seeing the site, so let me have a shot at some general resources:
1. Good General Panda Overview from Dr. Pete
http://www.seomoz.org/blog/fat-pandas-and-thin-content
2. An overview of canonicalisation from Google
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139066
3. A way to diagnose and hopefully recover from Panda, from John Doherty at Distilled.
http://www.distilled.net/blog/seo/beating-the-panda-diagnosing-and-rescuing-a-clients-traffic/
4. Index Status Overview from Google
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
Summary
You have a serious problem here, but hopefully one that can be resolved. Panda is primarily focused on on-page issues, and this is an absolute doozy of an on-page issue, so sort it out and you should see a recovery. Keep in mind that you currently have roughly 74 times more problem pages than actual content pages (2,679,794 against 36,168), which may be the biggest case I have ever seen, so I would be very keen to hear how you get on once you resolve these issues, as I am sure would the wider SEOmoz community.
Hope this helps & please fire over any questions.
Marcus