Can adding "noindex" help with quality penalizations?
-
Hello Moz fellows,
I have another question about content quality and Panda-related penalization.
I was wondering: if an entire section of my site has been penalized due to thin content, can adding "noindex,follow" to all pages in that section help de-penalize the rest of the site in the short term, while we work to improve those penalized pages (which is going to take a long time)? In other words, could this be a "short-term solution" to improve the overall site's scoring in Google's index while we improve those pages, and then, once they are ready, we remove the "noindex" tag?
I am eager to know your thoughts on this possible strategy.
Thank you in advance to everyone!
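For reference, the tag being discussed would sit in the `<head>` of each page in the penalized section; a generic sketch, not taken from any specific site:

```html
<!-- "noindex" asks Google to drop the page from its index;
     "follow" still lets crawlers follow the page's links -->
<meta name="robots" content="noindex,follow">
```

Once a page's content has been improved, the tag would be removed (or switched back to "index,follow") so the page can be re-indexed.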
-
Thank you for your post, but I did some further research on all this, and I tend to disagree with what you state.
It is now my understanding that if you remove a page from the index, that content is no longer considered by Google, because it is literally "out of the index". Therefore, if a specific page or section of the site that could have caused a site-wide "content" penalty is removed from the index, those pages should no longer affect any algorithmic assessment of the site's quality from a content standpoint, and such an alleged content-related penalty should be lifted.
Can anyone else confirm that?
-
Hi Fabrizio,
I agree with Andy's response up above. Noindexing is not as good as removing the content from the website altogether, but it can still work, as long as there are no links or sitemaps that lead Google back to the low-quality content.
Noindexing the pages won't be a permanent solution, only a temporary one that might help you in the meantime.
-
I am sorry, but I still haven't received a definitive answer to my last inquiry above...
-
Thank you Andy for your reply.
While I was waiting for an answer here, I did some further research, and it looks like this can be a good strategy for coping with Panda-related penalties, at least until the "bad content" is updated and improved:
https://mza.bundledseo.com/community/q/noindex-vs-page-removal-panda-recovery
Your thoughts?
Thank you again!
-
Hi Fabrizio,
Yes, and no.
I have seen this work in the past, and I have also seen it make no difference. My feeling these days is that noindexing doesn't solve the issue, even while the content is being worked on, as I have seen more occurrences of it not working.
How big a problem are you trying to deal with? I did help a company with 37k pages recover from Panda a while ago, but we had to do some pretty hefty trimming of the site in order to get it back into good shape. Their issue was that thousands of similar pages all shared big pieces of the same content, so we cut out a lot of the problem areas and pulled the site back into something that made sense.
-Andy
Related Questions
-
Penalized By Google
My site's name is bestmedtour. It's in English. I also want to have an Arabic version of the site. If I translate it with Google Translate, is it possible that the Arabic version of the site will be penalized?
Intermediate & Advanced SEO | aalinlandacc
-
Syntax: 'canonical' vs "canonical" (Apostrophes or Quotes) does it matter?
I have been working on a site, and all the tools I've used (Screaming Frog & Moz Bar) recognize the canonical, but does Google? This is the only site I've worked on that uses apostrophes: <link rel='canonical' href='https://www.example.com'/> It's apostrophes vs. quotes: could this difference in syntax be causing the canonical not to be recognized? <link rel="canonical" href="https://www.example.com"/>
Intermediate & Advanced SEO | ccox1
-
Can I Use Multiple rel="alternate" Tags on Multiple Domains With the Same Language?
Hoping someone can answer this for me, as I have spent a ton of time researching with no luck... Is there anything misleading or wrong about using multiple rel="alternate" tags on a single webpage to reference multiple alternate versions? We currently use this tag to specify a mobile-equivalent page (the mobile site is served on an m. domain), but would like to expand so that we can cover another domain for desktop (and possibly mobile in the future). In essence, the MAIN DOMAIN would get the alternate tags, and the "other domain" would then use a canonical to point back to the main site. To clarify, this implementation idea is for an e-commerce site that maintains the same product line across two domains: one is homogeneous with furniture & home decor, which is a subset of the products on our "main" domain, which includes lighting, furniture & home decor. Any feedback or guidance is greatly appreciated! Thanks!
Intermediate & Advanced SEO | LampsPlus
-
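A generic sketch of the pattern described, with placeholder domains and URLs. Note that only the mobile media annotation is a documented Google pattern; the bare second-domain alternate is exactly the part whose validity is in question:

```html
<!-- On the main domain's desktop page -->
<!-- Documented separate-mobile-URL annotation -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/table-lamp">
<!-- Proposed additional alternate pointing at the second desktop domain -->
<link rel="alternate" href="https://www.example-decor.com/table-lamp">

<!-- On the second domain's copy of the page -->
<link rel="canonical" href="https://www.example.com/table-lamp">
```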
Dilemma about "images" folder in robots.txt
Hi, hope you're doing well. I am sure you guys are aware that Google has updated their webmaster technical guidelines, saying that users should allow access to their CSS and JavaScript files if possible. It used to be that Google would render web pages only text-based; now it claims it can read CSS and JavaScript. According to their own terms, not allowing access to CSS files can result in sub-optimal rankings:

"Disallowing crawling of Javascript or CSS files in your site's robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings."
http://googlewebmastercentral.blogspot.com/2014/10/updating-our-technical-webmaster.html

We have allowed access to our CSS files, and Googlebot now sees our web pages more like a normal user would (tested it in GWT).

Anyhow, this is my dilemma, and I am sure a lot of other users might be facing the same situation, like other e-commerce companies/websites: we have a lot of images. Our CSS files used to be inside our images folder, so I have allowed access to that. Here's the robots.txt: http://www.modbargains.com/robots.txt

Right now we are blocking the images folder, as it is very large, very heavy, and some of the images are very high-res. The reason we are blocking it is that we feel Googlebot might spend almost all of its time trying to crawl that "images" folder, leaving it without enough time to crawl other important pages, not to mention a very heavy load on Google's servers and ours. We do have good, high-quality, original pictures, and we feel we are losing potential rankings since we are blocking images.

I was thinking to allow ONLY the Google image bot access to it, but I still fear that Google might spend a lot of time doing that. Does Google make a decision like "let me spend 10 minutes on the Google image bot and 20 minutes on the Google mobile bot", or does it have separate "time spending" allocations for each of its bot types? I want to unblock the images folder for the Google image bot only, but at the same time I fear it might drastically hamper the indexing of our important pages, as mentioned before, because of having tons and tons of images and Google already spending enough time just crawling that folder.

Any advice? Recommendations? Suggestions? Technical guidance? Plan of action? Pretty sure I answered my own question, but I need confirmation from an expert that I am right in allowing only the Google image bot access to my images folder.

Sincerely,
Shaleen Shah
Intermediate & Advanced SEO | Modbargains
-
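A minimal robots.txt sketch of the "Googlebot-Image only" idea, with a placeholder folder name. Under Google's robots.txt handling, a crawler obeys only the most specific user-agent group that matches it, so Googlebot-Image would ignore the `*` group entirely:

```
# All other crawlers stay out of the heavy images folder
User-agent: *
Disallow: /images/

# Googlebot-Image matches this more specific group instead,
# so for it the Disallow above does not apply
User-agent: Googlebot-Image
Allow: /
```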
Can too many "noindex" pages compared to "index" pages be a problem?
Hello, I have a question for you: our website virtualsheetmusic.com includes thousands of product pages, and due to Panda penalties in the past, we have no-indexed most of the product pages hoping for some sort of recovery (not yet seen, though!). So, currently we have about 4,000 "index" pages compared to about 80,000 "noindex" pages. Now, we plan to add an additional 100,000 new product pages from a new publisher to offer our customers more music choices, and these new pages will also be marked "noindex, follow". At the end of the integration process, we will end up with something like 180,000 "noindex, follow" pages compared to about 4,000 "index, follow" pages. Here is my question: can this huge discrepancy between 180,000 "noindex" pages and 4,000 "index" pages be a problem? Can this kind of scenario have or cause any negative effect on our current natural search profile, or is it something that doesn't actually matter? Any thoughts on this issue are very welcome. Thank you! Fabrizio
Intermediate & Advanced SEO | fablau
-
What is the best way to optimize/setup a teaser "coming soon" page for a new product launch?
Within the context of a physical product launch, what are some ideas for creating a /coming-soon page that "teases" the launch? Ideally I'd like to optimize the page around the product, but the client wants to build consumer anticipation without giving too many details away. Any thoughts?
Intermediate & Advanced SEO | GSI
-
"nocontent" class use for Google Custom Search: SEO Ramifications?
Hi all, I have a client that uses the Google Custom Search tool, which is crawling, indexing, and returning millions of irrelevant results for keywords that appear on every page of the site. The IT/web dev team is considering adding a class attribute to prohibit Google Custom Search from indexing boilerplate content regions. Here's the link to Google's Custom Search help page: http://support.google.com/customsearch/bin/answer.py?hl=en&answer=2364585

"...If your pages have regions containing boilerplate content that's not relevant to the main content of the page, you can identify it using the nocontent class attribute. When Google Custom Search sees this tag, we'll ignore any keywords it contains and won't take them into account when calculating ranking for your Custom Search engine. (We'll still follow and crawl any links contained in the text marked nocontent.) To use the nocontent class attribute, include the boilerplate content in a tag (for example, span or div)."

Google Custom Search also notes: "Using nocontent won't impact your site's performance in Google Web Search, or our crawling of your site, in any way. We'll continue to follow any links in tagged content; we just won't use keywords to calculate ranking for your Custom Search engine."

I just want to confirm: can anyone foresee any SEO implications the use of this class could create? Does anyone have experience with this? Thank you!
Intermediate & Advanced SEO | MRM-McCANN
-
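A minimal sketch of the markup Google's help page describes, with made-up boilerplate text:

```html
<!-- Keywords in this region are ignored by Google Custom Search ranking -->
<div class="nocontent">
  Free shipping on every order. Sign up for our newsletter!
</div>

<!-- The main content is ranked normally -->
<div>
  <h1>Blue widget, model X</h1>
  <p>Product details...</p>
</div>
```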
Any advice on acquiring "jump to" via anchor link text?
Google says these types of references are generated algorithmically and that users should include a table of contents and descriptive anchor link text. Is there anything else we should take into consideration? Also, does anyone know how this works with pagination? Due to the design of our site, we can't make one really long article; we'd need to divide it up into several "pages", even though it would all live on one URL (we'd use the # for pagination). Thank you in advance for your feedback.
Intermediate & Advanced SEO | nicole.healthline
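A generic sketch of the structure Google's guidance describes: a table of contents whose descriptive links point at named anchors on the same URL (the section names here are invented). The fragment-based "pages" the asker mentions would use these same #fragment links, since everything after the # still resolves to the one URL:

```html
<!-- Table of contents with descriptive anchor text -->
<ul>
  <li><a href="#symptoms">Symptoms and early warning signs</a></li>
  <li><a href="#treatment">Treatment options</a></li>
</ul>

<h2 id="symptoms">Symptoms and early warning signs</h2>
<p>...</p>

<h2 id="treatment">Treatment options</h2>
<p>...</p>
```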