Robots.txt vs noindex
-
I recently started working on a site that has thousands of member pages that are currently robots.txt'd out.
Most pages of the site have 1 to 6 links to these member pages, accumulating into what I regard as something of a link-juice cul-de-sac.
The pages themselves have little to no unique content or other relevant search play, and for other reasons we still want them kept out of search.
Wouldn't it be better to "noindex, follow" these pages and remove the robots.txt block from this URL type? At least that way Google could crawl these pages and pass the link juice on to other pages instead of flushing it into a black hole.
BTW, the site is currently dealing with a hit from Panda 4.0 last month.
Thanks! Best... Darcy
-
If you add the meta "noindex, follow" tag, it will keep the page out of the SERPs but allow PageRank to flow through to other pages.
See this interview with Matt Cutts for more info: http://www.stonetemple.com/articles/interview-matt-cutts.shtml
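For reference, the tag goes in the page's head; a minimal sketch:

```html
<head>
  <!-- Keep this page out of search results, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```

Note that Googlebot can only see this tag if it is allowed to crawl the page, so the robots.txt block has to be lifted for it to take effect.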
-
Hi Saijo,
Thanks for the response. Do you think that would yield the benefit I'm looking for of recapturing that lost link juice?
Do you think there'd be any downside to the switcheroo from robots.txt to noindex, follow?
Best... Darcy
-
Since you said "The pages themselves have little to no unique content or other relevant search play and for other reasons still want them kept out of search," I would use meta robots "noindex, follow".
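For tens of thousands of templated pages, the same directive can also be sent as an HTTP header at the web-server level instead of editing templates. A sketch for Apache with mod_headers enabled, assuming a hypothetical /members/ URL path for the profile pages:

```apache
# Send "noindex, follow" for every URL under /members/.
# Google honors the X-Robots-Tag header the same way as the meta robots tag.
<LocationMatch "^/members/">
  Header set X-Robots-Tag "noindex, follow"
</LocationMatch>
```

As with the meta tag, this only works once the robots.txt block on those URLs is removed, since a blocked page is never fetched at all.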
-
Hi Lesley,
Thanks for the thoughts. I don't see this as a real option for a number of reasons, including but not limited to that there are 50,000 profiles, most with very little information. The members of this site are 95% busy professionals who aren't trying to advance their career via their profile. So, there'd be some privacy concern and the potential for tens of thousands of low content/highly templated pages. Not really a search dream come true!
Also, converting it into a system where different levels of profile completeness are acknowledged would not really resonate with this community nor would it be near the top of our engineering priorities.
What I really want to get clear on is how best to keep them search invisible while not losing link value into a robots.txt'd black hole. Really just looking for confirmation if, with those goals, "noindex, follow" and remove from robots is the way to go. I'm pretty sure it is, but would like to hear more about that.
Thanks... Darcy
-
I think what I am going to say is going to sound like it is going against the grain, but it really isn't. I have noticed that in some places, if you want an active community, you reward your members. Look at how Moz does its forum: they don't really noindex the pages, but once you hit a certain point they pseudo-drop the nofollow off of your profile link (it could be argued whether they really do). The point is: reward your members who are active.
I would set up an automatic noindex tag in the header based on the user's post count. Then you can noindex all of the spammers and have prominent members shown in search. If it were me, that is how I would do it.
I have a PA of 49 on my profile in one forum I frequent; I have seen the stats, and it is regularly an entry page to the forum. Another member has a PA of 64 on a 93 domain, and his is used for entry a lot more than mine. Think of it this way: if someone Googles my name, the second result is Moz's forum (http://screencast.com/t/jIx7a4hcWV). Second search results still get a lot of clicks.
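The post-count idea above can be sketched as a small template helper; the threshold and the availability of a per-member post count are assumptions about your platform:

```python
def profile_robots_tag(post_count: int, threshold: int = 50) -> str:
    """Return the meta robots tag for a member profile page.

    Profiles of active members (post_count >= threshold) are left
    indexable; thin or inactive profiles are kept out of search but
    still crawled, so the links on them keep passing value.
    """
    if post_count >= threshold:
        return '<meta name="robots" content="index, follow">'
    return '<meta name="robots" content="noindex, follow">'
```

The template would call this once per profile page; tune the threshold to whatever separates a fleshed-out profile from a thin one on your site.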
Related Questions
-
Conditional Noindex for Dynamic Listing Pages?
Hi, We have dynamic listing pages that are sometimes populated and sometimes not populated. They are clinical trial results pages for disease types, some of which don't always have trials open. This means that sometimes the CMS produces a blank page -- pages that are then flagged as thin content. We're considering implementing a conditional noindex -- where the page is indexed only if there are results. However, I'm concerned that this will be confusing to Google and send a negative ranking signal. Any advice would be super helpful. Thanks!
Intermediate & Advanced SEO | yaelslater
-
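One way to sketch the conditional noindex from the question above, assuming the CMS can pass the result count into the template:

```python
def listing_robots_tag(trial_count: int) -> str:
    """Meta robots tag for a clinical-trial listing page.

    Populated listings stay indexable; empty ones are marked noindex
    so they are not treated as thin content, while "follow" keeps
    their internal links crawlable.
    """
    if trial_count > 0:
        return '<meta name="robots" content="index, follow">'
    return '<meta name="robots" content="noindex, follow">'
```

Google only picks the change up on recrawl, so a page will flip between states with some lag as trials open and close.
-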
Same page Anchor Links vs Internal Link (Cannibalisation)
Hey Mozzers, I have a very long article page that supports several of my sub-category pages. It has sub-headings that link out to the relevant pages. However, the article is very long, and to make it easier to find the relevant section I was debating adding in-page anchor links in a bullet list at the top of the page for quick navigation:
PAGE TITLE
Keyword 1
Keyword 2
etc.
<a name="Keyword1"></a> Keyword 1 Content
<a name="Keyword2"></a> Keyword 2 Content
Because of the way my predecessor wrote this article, its section headings are the same as the sub-categories they link out to and boost (not ideal, but an issue I will address later). What I wondered is whether having the in-page anchor would confuse the SERPs, because they would be linking with the same keyword. My worry is that by increasing the usability of the article I also confuse the SERPs: first I tell them that this section on my page talks about Keyword 1, then from within that article I tell them that a different page entirely is about the same keyword. Would linking like this confuse the SERPs, or are in-page anchor links looked upon and dealt with differently?
Intermediate & Advanced SEO | ATP
-
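For what it's worth, the name attribute used in the markup above is obsolete in HTML5; a sketch of the same jump list using id targets instead (the heading text and ids are placeholders):

```html
<!-- Quick-navigation list at the top of the article -->
<ul>
  <li><a href="#keyword-1">Keyword 1</a></li>
  <li><a href="#keyword-2">Keyword 2</a></li>
</ul>

<!-- Section targets: id replaces the obsolete name attribute -->
<h2 id="keyword-1">Keyword 1</h2>
<p>Keyword 1 content...</p>

<h2 id="keyword-2">Keyword 2</h2>
<p>Keyword 2 content...</p>
```
-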
Using Meta Header vs Robots.txt
Hey Mozzers, I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes. For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt (I hope that makes sense). I am able to identify problematic pages and edit the head, so I can add a meta robots noindex on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues. Thoughts?
Intermediate & Advanced SEO | evan89
-
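A sketch of the meta-tag approach for the faceting question above; the list of size tokens is hypothetical. Because the check runs in the template, it can match "small" as the whole final path segment, avoiding the substring problem that makes the robots.txt route awkward here:

```python
from urllib.parse import urlsplit

# Hypothetical facet values; fill in whatever sizes the site actually uses.
SIZE_FACETS = {"small", "medium", "large"}

def facet_robots_tag(url: str) -> str:
    """Noindex faceted URLs whose final path segment is a size filter."""
    last_segment = urlsplit(url).path.rsplit("/", 1)[-1]
    stem = last_segment[:-5] if last_segment.endswith(".html") else last_segment
    if stem.lower() in SIZE_FACETS:
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'
```

So /category/brand/small.html gets the noindex tag, while a product URL that merely contains "small" (e.g. small-widget.html) stays indexable.
-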
Robots.txt assistance
I want to block all the inner archive news pages of my website in robots.txt. We don't have the R&D capacity to set up rel=next/prev or to create a central page that all inner pages would have a canonical back to, so this is the solution. The first page, which I want indexed, reads:
http://www.xxxx.news/?p=1
All subsequent pages, which I want blocked because they don't contain any new content, read:
http://www.xxxx.news/?p=2
http://www.xxxx.news/?p=3
etc. There are currently 245 inner archive pages, and I would like to set it up so that future pages will automatically be blocked, since we are always writing new news pieces. Any advice about what code I should use for this? Thanks!
Intermediate & Advanced SEO | theLotter
-
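A robots.txt sketch for the pagination question above; it relies on the * and $ pattern extensions, which major crawlers like Googlebot support (the longest matching rule wins, so the more specific Allow overrides the Disallow for page 1 only):

```
User-agent: *
# Block every paginated archive URL...
Disallow: /?p=
# ...except the first page; the $ anchors the match to exactly ?p=1
Allow: /?p=1$
```

One caveat: blocked URLs can still appear in results URL-only if they pick up external links, which is exactly the situation where a meta "noindex, follow" on pages 2+ would be the cleaner tool.
-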
Google indexing "noindex" pages
A week ago my website expanded with a lot more pages. I included "noindex, follow" on a lot of these new pages, but then 4 days ago I saw the number of pages Google has indexed increase. Should I expect these pages to be properly noindexed in 2-3 weeks, and that this is just a delay? It seems odd to me that a few days after adding "noindex" to pages, Webmaster Tools shows an increase in indexing, i.e. that the pages were indexed. My website is relatively new, and these new pages are not pages Google frequently indexes.
Intermediate & Advanced SEO | khi5
-
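As a sanity check for the situation in the question above, it is worth confirming the noindex is actually present in the served HTML, and that the URLs are not also blocked in robots.txt (a blocked page is never crawled, so its noindex is never seen). A small stdlib-only sketch for checking a fetched page:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of every <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if attr_map.get("name", "").lower() == "robots":
                self.directives.append((attr_map.get("content") or "").lower())

def is_noindexed(html: str) -> bool:
    """True if any robots meta tag in the page contains a noindex directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)
```

Beyond that, some lag is normal: pages already in the index only drop out after Google recrawls them and sees the tag, which can take weeks on a new site with a low crawl rate.
-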
Customer Experience vs Search Result Optimisation
Yes, I know customer experience is king; however, I have a dilemma. My site has been live since June 2013 and we get good feedback on site design and easy-to-follow navigation; however, our rankings aren't as good as they could be. For example, the following 2 pages share very similar URLs, but the pages do 2 different jobs, and when you get to the site that is easy to see. My largest keyword, "Over 50 Life Insurance," becomes difficult to target as Google sees both pages and splits the results, so I think I must be losing ranking positions:
http://www.over50choices.co.uk/Funeral-Planning/Over-50-Life-Insurance.aspx
http://www.over50choices.co.uk/Funeral-Planning/Over-50-Life-Insurance/Compare-Over-50s-Life-Insurance.aspx
The first page explains the product(s) and the second is the Quote & Compare page, which generates the income. I am currently playing with meta tags, but as yet haven't found the right combination! Originally the second page's meta tags were focused on "compare over 50s life insurance," but Google still sees "over 50 life insurance" in this phrase, so the results get split. I also had internal anchor text supporting this. What do you think is the best strategy for optimising both pages? Thanks, Ash
Intermediate & Advanced SEO | AshShep1
-
Robots.txt: how to exclude sub-directories correctly?
Hello here, I am trying to figure out the correct way to tell SEs to crawl this:
http://www.mysite.com/directory/
But not this:
http://www.mysite.com/directory/sub-directory/
or this:
http://www.mysite.com/directory/sub-directory2/sub-directory/...
But given that I have thousands of sub-directories with almost infinite combinations, I can't write the definitions in a manageable way:
disallow: /directory/sub-directory/
disallow: /directory/sub-directory2/
disallow: /directory/sub-directory/sub-directory/
disallow: /directory/sub-directory2/subdirectory/
etc...
I would end up having thousands of definitions to disallow all the possible sub-directory combinations. So, is the following a correct, better and shorter way to define what I want above?
allow: /directory/$
disallow: /directory/*
Would the above work? Any thoughts are very welcome! Thank you in advance. Best, Fab.
Intermediate & Advanced SEO | fablau
-
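One way to check the proposed rules from the question above is to simulate the matching locally. Googlebot treats * as a wildcard and a trailing $ as an end anchor, and among matching rules the longest pattern wins, with Allow winning ties. A sketch of that logic (not a full implementation of the robots.txt spec):

```python
import re

def _matches(pattern: str, path: str) -> bool:
    """True if a robots.txt pattern matches the start of the path."""
    anchored = pattern.endswith("$")
    core = pattern[:-1] if anchored else pattern
    regex = "^" + ".*".join(re.escape(part) for part in core.split("*"))
    if anchored:
        regex += "$"
    return re.search(regex, path) is not None

def is_allowed(rules, path):
    """rules is a list of ("allow" | "disallow", pattern) pairs.

    Longest matching pattern wins; on a tie, allow beats disallow;
    with no matching rule at all, the path is allowed.
    """
    best_len, verdict = -1, True
    for kind, pattern in rules:
        if _matches(pattern, path):
            length = len(pattern)
            if length > best_len or (length == best_len and kind == "allow"):
                best_len, verdict = length, (kind == "allow")
    return verdict
```

Running the rules from the question (allow: /directory/$ plus disallow: /directory/*) shows /directory/ itself stays crawlable while every sub-directory is blocked; note it also blocks files like /directory/page.html, whereas a single Disallow: /directory/*/ would block only the deeper levels.
-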
Transactional vs Informative Search
I have a page that is ranking quite well (page 1) for the plural of a keyword, but it is only ranking on page 3 for the singular keyword. For more than one year I have been working on on-page and off-page optimization to improve the ranking for the singular term, without success. Google treats the two terms almost the same: when you search for term one, term two is also marked in bold, and the results are very similar. The big difference between the terms, in my opinion, is that one is more of an informational search and the other is more of a transactional search. Now I would be curious to know which factors Google could use to understand whether a search and a website are more transactional or informational, apart from mentioning "Buy now," "Shop," "Special offer," etc. Any ideas?
Intermediate & Advanced SEO | SimCaffe