Can adding "noindex" help with quality penalizations?
-
Hello Moz fellows,
I have another question about content quality and Panda-related penalties.
Here is my question: if an entire section of my site has been penalized due to thin content, can adding "noindex,follow" to all pages in that section help lift the penalty from the rest of the site in the short term, while we work on improving those pages (which is going to take a long time)? In other words, could this serve as a short-term fix to improve the site's overall quality scoring in Google's index, with the "noindex" tag removed once the content is ready?
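For reference, this is the tag I would add to every page in that section (a generic sketch; the real markup in our templates will differ):

```html
<!-- In the <head> of every page in the thin-content section. -->
<!-- "noindex" asks Google to drop the page from its index;   -->
<!-- "follow" still lets crawlers follow the page's links.    -->
<meta name="robots" content="noindex,follow">
```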
I am eager to know your thoughts on this possible strategy.
Thank you in advance to everyone!
-
Thank you for your post, but I have done some further research on all this, and I tend to disagree with what you state.
My understanding now is that if you remove a page from the index, its content is no longer considered by Google at all, because it is literally "out of the index." Therefore, if a specific page, or a specific section of the site, that may have caused a site-wide content penalty is removed from the index, those pages should no longer factor into any algorithmic assessment of the site's content quality, and the alleged content-related penalty should be lifted.
Can anyone else confirm that?
-
Hi Fabrizio,
I agree with Andy's response above. Noindexing is not as good as removing the content from the website altogether, but it can still work, as long as there are no internal links or sitemap entries that lead Google back to the low-quality content.
Noindexing the pages won't be a permanent solution, only a temporary one that might help in the meantime.
-
I am sorry, but I still haven't received a definite answer to my last question above...
-
Thank you Andy for your reply.
While I was waiting for an answer here, I did some further research, and it looks like this can be a good strategy for coping with Panda-related penalties, at least until the "bad content" is updated and improved:
https://mza.bundledseo.com/community/q/noindex-vs-page-removal-panda-recovery
Your thoughts?
Thank you again!
-
Hi Fabrizio,
Yes, and no.
I have seen this work in the past, and I have also seen it make no difference. My feeling these days is that noindexing doesn't solve the issue, even as a stopgap while the content is being worked on, as I have seen more cases of it not working.
How big a problem are you trying to deal with? I helped a company with 37k pages recover from Panda a while ago, but we had to do some pretty hefty trimming of the site in order to get it back into good shape. Their issue was that thousands of similar pages all shared big pieces of the same content, so we cut out a lot of the problem areas and pulled the site back into something that made more sense.
-Andy
Related Questions
-
Can anyone please explain the real difference between backlinks, 301 links, and redirect links? Which one is better for ranking a website? I am looking for help with one of my websites (vacuum cleaners).
Intermediate & Advanced SEO | hshajjajsjsj388
-
Lower quality new domain link vs. higher quality repeat domain link
First-time poster here with a dilemma that head-scratching and spreadsheets can't solve! I'm trying to work out whether to focus on getting links from new domains or to nurture relationships with the bigger sites in our business and get more links from them. Of the two links below, which does the community think would be the more valuable signal to Google? Both would be links from within relevant text/post copy.
Link 1: Site DA 30. No links currently from this domain.
Link 2: Site DA 60. Many links over the last 12 months already from this domain.
I suspect link 1, but given the enormous disparity in ranking power, am I correct? Thanks for any considered opinions out there! Matthew
Intermediate & Advanced SEO | mat2015
-
NoIndex Purchase Page
We ran a Screaming Frog report on one of our websites and found thousands of instances of a single page with different URL parameters, for example:
purchase.cfm?id=1234
purchase.cfm?id=1235
purchase.cfm?id=1236
purchase.cfm?id=1237
We do not need purchase.cfm indexed for any reason, as there is practically no content on the page to begin with; it is just one of the purchase steps on our website. What is the best way to deal with this for Google and SEO? Should we add a meta noindex to the purchase.cfm page? Thank you.
Intermediate & Advanced SEO | ErnieB
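A meta noindex on the template is probably the cleanest fit here; a hedged sketch of what that could look like (purchase.cfm is ColdFusion, so the exact template syntax will differ):

```html
<!-- In the <head> emitted by purchase.cfm: every ?id= variant
     of the page then carries the tag automatically. -->
<meta name="robots" content="noindex">
```

One caution: the page must stay crawlable (not blocked in robots.txt), otherwise Google never sees the noindex tag.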
-
How important is the optional <priority> tag in an XML sitemap? Can it help search engines understand the hierarchy of a website?
Can the <priority> tag be used to tell search engines the hierarchy of a site, or should it be used to let search engines know the priority in which we want pages to be indexed?
Intermediate & Advanced SEO | mycity4kids
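For reference, a minimal sketch of the element in question, with placeholder URLs; note that priority is a relative 0.0–1.0 hint within your own site, and search engines treat it as a suggestion at best:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <priority>1.0</priority> <!-- homepage: highest relative priority -->
  </url>
  <url>
    <loc>https://www.example.com/category/some-page/</loc>
    <priority>0.5</priority> <!-- deeper page: 0.5 is the default -->
  </url>
</urlset>
```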
-
Error: Missing required field "updated"
In my WordPress blog there are pages for tags, categories, and so on, like https://www.abc.com/blog/category/how-to-cook-something/. On these pages I am getting the following error: Error: Missing required field "updated". So far I have 39 of these errors. Please let me know whether this is an important issue to pay attention to, and if so, how I can fix it. Thanks everyone
Intermediate & Advanced SEO | AlirezaHamidian
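From what I have read so far, this error comes from Google's structured data report when a theme marks posts up with the hAtom (hentry) microformat but omits its "updated" field. The commonly suggested fix is to add the class to the date markup in the theme's loop template; a sketch only, since every theme's markup differs:

```html
<!-- Inside the theme's post loop: adding class="updated" to the
     date element supplies the missing hAtom "updated" field. -->
<article class="hentry">
  <h2 class="entry-title">Post title</h2>
  <time class="updated" datetime="2015-06-01T12:00:00+00:00">June 1, 2015</time>
  <div class="entry-content">Post content…</div>
</article>
```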
Use "If-Modified-Since HTTP header"
I´m working on a online brazilian marketplace ( looks like etsy in US) and we have a huge amount of pages... I´ve been studing a lot about that and I was wondering to use If-Modified-Since so Googlebot could check if the pages have been updated, and if it is not, there is no reason to get a new copy of them since it already has a current copy in the index. It uses a 304 status code, "and If a search engine crawler sees a web page status code of 304 it knows that web page has not been updated and does not need to be accessed again." Someone quoted before me**Since Google spiders billions of pages, there is no real need to use their resources or mine to look at a webpage that has not changed. For very large websites, the crawling process of search engine spiders can consume lots of bandwidth and result in extra cost and Googlebot could spend more time in pages actually changed or new stuff!**However, I´ve checked Amazon, Rakuten, Etsy and few others competitors and no one use it! I´d love to know what you folks think about it 🙂
Intermediate & Advanced SEO | | SeoMartin10 -
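To illustrate the mechanism I mean, here is an annotated sketch of the conditional-request exchange (illustrative URLs, headers, and dates only):

```http
# First crawl: the server labels the page with Last-Modified.
GET /product/123 HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Last-Modified: Tue, 02 Jun 2015 10:00:00 GMT
...full page body...

# A later crawl echoes that date back as If-Modified-Since.
GET /product/123 HTTP/1.1
Host: www.example.com
If-Modified-Since: Tue, 02 Jun 2015 10:00:00 GMT

# If nothing changed, the server answers 304 with no body at all.
HTTP/1.1 304 Not Modified
```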
-
Robots.txt help
Hi, we have a blog that is killing our SEO. We need to disallow:
Disallow: /Blog/?tag*
Disallow: /Blog/?page*
Disallow: /Blog/category/*
Disallow: /Blog/author/*
Disallow: /Blog/archive/*
Disallow: /Blog/Account/.
Disallow: /Blog/search*
Disallow: /Blog/search.aspx
Disallow: /Blog/error404.aspx
Disallow: /Blog/archive*
Disallow: /Blog/archive.aspx
Disallow: /Blog/sitemap.axd
Disallow: /Blog/post.aspx
...but allow everything below /Blog/Post. The disallow list seems to keep growing as we find issues, so rather than adding every problem area to our robots.txt, is there a way to simply say "Allow /Blog/Post" and ignore the rest? How do we do that in robots.txt? Thanks
Intermediate & Advanced SEO | Studio33
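Something like the sketch below is what we are hoping for. From what I can tell, Googlebot resolves conflicting rules by the most specific (longest) matching path, so a narrower Allow should override a broader Disallow, though not every crawler supports Allow and matching is case-sensitive:

```
# Block the whole blog, then carve the posts back out.
# For Googlebot the longer "Allow: /Blog/Post" path wins
# over the shorter "Disallow: /Blog/" for any post URL.
User-agent: *
Disallow: /Blog/
Allow: /Blog/Post
```

It would still be worth verifying the rules against real URLs in Google's robots.txt testing tool before going live.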
Rel="prev" and rel="next" implementation
Hi there since I've started using semoz I have a problem with duplicate content so I have implemented on all the pages with pagination rel="prev" and rel="next" in order to reduce the number of errors but i do something wrong and now I can't figure out what it is. the main page url is : alegesanatos.ro/ingrediente/ and for the other pages : alegesanatos.ro/ingrediente/p2/ - for page 2 alegesanatos.ro/ingrediente/p3/ - for page 3 and so on. We've implemented rel="prev" and rel="next" according to google webmaster guidelines without adding canonical tag or base link in the header section and we still get duplicate meta title error messages for this pages. Do you think there is a problem because we create another url for each page instead of adding parameters (?page=2 or ?page=3 ) to the main url alegesanatos.ro/ingrediente?page=2 thanks
Intermediate & Advanced SEO | | dan_panait0
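For reference, this is how I understood the guidelines, i.e. what the <head> of page 2 should contain under our URL scheme (a sketch using the URLs above, assuming the site runs on http):

```html
<!-- In the <head> of alegesanatos.ro/ingrediente/p2/ -->
<link rel="prev" href="http://alegesanatos.ro/ingrediente/">
<link rel="next" href="http://alegesanatos.ro/ingrediente/p3/">
```

One thing I am starting to suspect: rel="prev"/"next" does not consolidate meta titles, so each paginated page may still need its own unique title tag, which would explain why the duplicate-title warnings persist.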