De-indexing thin content & Panda: any advantage to immediate de-indexing?
-
We added the noindex, follow tag to several hundred URLs on our site about a week ago, and they are still in Google's index. I know de-indexing takes time, but I am wondering whether having those URLs in the index will continue to "pandalize" the site. Would it be better to use the URL removal request, or should we just wait for the noindex tags to remove the URLs from the index?
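For anyone else reading, the tag being described is the robots meta tag. A minimal sketch of the two common ways to apply it (the header variant assumes your server lets you set response headers):

```html
<!-- In the <head> of each thin page: tells Google not to index the
     page, but still to follow and pass value through its links -->
<meta name="robots" content="noindex, follow">

<!-- Equivalent HTTP header, useful for non-HTML resources such as PDFs:
     X-Robots-Tag: noindex, follow -->
```

One caveat worth stating: Googlebot has to be able to crawl the page to see the tag, so don't also block these URLs in robots.txt, or the noindex will never be read.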
-
Whenever Matt Cutts discusses this subject in the Webmaster Tools videos and elsewhere, there is always a caveat along the lines of "while Google mostly takes notice of noindex and robots.txt, this may not always be acted upon". The primary reason given seems to be when content is indexed via a link from another site, or still exists in Google's cache. In those cases it seems logical that it may continue to appear in the index.
Your question reminded me of Dr. Pete's Catastrophic Canonicalization Experiment; his method proved quite effective.
-
Hey
I don't think it would make a great deal of difference, as you are going to need to wait for a full crawl of your site anyway before you see any benefits.
Out of interest, how are you identifying the low-quality pages? One way to tackle this is to use your analytics to identify all pages with a 100% bounce rate and noindex them. If there are lots (it sounds like there are), you can do them in chunks and see what happens.
Don't get rid of pages that are getting good search traffic or have a low bounce rate UNLESS you know they are really poor pages, because sooner or later they will be picked up.
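As a rough sketch of that bounce-rate approach, assuming you've exported page-level data from your analytics tool to CSV (the column names here are hypothetical; adjust them to match your own export):

```python
import csv

def pages_to_noindex(csv_path, chunk_size=100):
    """Read an analytics export and return URLs with a 100% bounce
    rate, grouped into chunks so the noindex tags can be rolled out
    incrementally and the effect watched between batches."""
    candidates = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # "page" and "bounce_rate" are assumed column names.
            if float(row["bounce_rate"]) >= 100.0:
                candidates.append(row["page"])
    # Split into chunks of chunk_size for incremental rollout.
    return [candidates[i:i + chunk_size]
            for i in range(0, len(candidates), chunk_size)]
```

The chunking is the important part: tag one batch, wait for a recrawl, check rankings and traffic, then move to the next batch rather than noindexing everything at once.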
Ultimately, it sounds like a big site, so you are going to have to be patient here and make incremental changes based on analytics and crawl data until you get the results you are looking for.
I have pulled a site back from the depths (one that, in my opinion, was rather unfairly punished after its content was copied by several other sites), and the same rules applied. We updated pages, moved blocks of template content onto their own pages, and just kept watching. Like magic, it came back stronger than before, a week or so after we made all the changes.
Hope this helps!
Marcus -
You want to be a bit more patient. Depending on how popular and deep these pages are within your site, I would expect it to take several weeks to see most of them disappear. There is a good chance that if you check, you will find a percentage of those pages disappearing each day.
The URL removal tool is for removing content you consider harmful to your business. Of course, any damage to your SEO rankings could be considered harmful, but that is clearly not what Google means. If you use the tool, they explain it is for pages which "urgently" need to be removed due to legal reasons, copyright issues, etc.