Will blocking URLs in robots.txt void out any backlink benefits? - I'll explain...
-
Ok...
So I add tracking parameters to some of my social media campaigns but block those parameters via robots.txt. This helps avoid duplicate content issues (yes, I do also have correct canonical tags in place)... but my question is: does this cause me to miss out on any backlink magic coming my way from these articles, posts or links?
Example URL: www.mysite.com/subject/?tracking-info-goes-here-1234
- The canonical tag is: www.mysite.com/subject/
- I'm blocking anything with "?tracking-info-goes-here" via robots.txt
- The URL with the tracking info is, of course, NOT indexed in Google, but the version without the tracking parameters IS indexed.
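For anyone who wants to picture it, the setup described above would look roughly like this in robots.txt (the parameter string here is a stand-in for the real one):

```text
# robots.txt at www.mysite.com/robots.txt
User-agent: *
# Block crawling of any URL carrying the tracking parameter
Disallow: /*?tracking-info-goes-here
```

One caveat: Google honors the * wildcard in Disallow paths, but it isn't part of the original robots.txt standard, so other crawlers may treat that line as a literal prefix and ignore it.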
What are your thoughts?
- Should I nix the robots.txt stuff since I already have the canonical tag in place?
- Do you think I'm getting the backlink "juice" from all the links with the tracking parameter?
What would you do?
Why?
Are you sure?
-
Thanks Guys...
Yeah, I figure that's the right path to take based on what we know... But I love to hear others chime in so I can blame it all on you if something goes wrong - ha!
Another Note: Do you think this will cause some kind of unnatural anomaly when the robots.txt file is edited? All of a sudden these links will now be counted (we assume).
It's likely the answer is no, because Google still knows about the links... they just don't count them - but I thought I'd throw that out there anyway.
-
I agree with what Andrea wrote above - just one additional point: blocking a page via robots.txt doesn't prevent the search engine from indexing it. It only prevents the search engine from crawling the page and seeing its content. The page may very well still show up in the index - you'll just see a notice that your robots.txt file is preventing Google from crawling the page, so there's no cached copy, preview, or proper snippet. If your canonical tags are in place properly, remove the block on the parameters in your robots.txt, let the engines handle things the right way, and you won't have to worry about this question.
-
If you block a URL with robots.txt, link juice can't get passed along. If your canonicals are good, then ideally you wouldn't need the robots.txt block. It also throws away much of the value of those social media postings.
So, to your question: if you have the tracking parameter blocked via robots.txt, then no, I don't think you are getting the link juice.
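If you want to sanity-check which URLs a robots.txt rule actually blocks, Python's built-in urllib.robotparser is handy - a sketch, where the rule and URLs are stand-ins for the real ones (note that this parser does plain prefix matching on the Disallow path, unlike Google's wildcard support):

```python
from urllib import robotparser

# Stand-in rules mirroring the setup described in the question.
rules = """\
User-agent: *
Disallow: /subject/?tracking-info-goes-here
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# The clean (canonical) URL stays crawlable...
print(parser.can_fetch("*", "http://www.mysite.com/subject/"))
# ...but the tagged campaign URL is blocked, so crawlers never fetch it
# (and never see its content or canonical tag).
print(parser.can_fetch("*", "http://www.mysite.com/subject/?tracking-info-goes-here-1234"))
```

Running this prints True for the clean URL and False for the parameter URL, which is exactly the situation where the engines can't consolidate the link value for you.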
http://www.rickrduncan.com/robots-txt-file-explained
When I want link juice passed on but want to avoid duplicate content, I'm more a fan of the noindex, follow tags, plus canonicals where they make sense. But since you say your URLs with the parameters aren't being indexed, you must already be using those tags (or the canonicals) to make that happen, not just relying on robots.txt.
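For readers following along, the noindex, follow approach mentioned here is just a robots meta tag in the head of the parameter URLs - a sketch, with the URL as a stand-in:

```html
<!-- In the <head> of www.mysite.com/subject/?tracking-info-goes-here-1234 -->
<!-- Keep this version out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
<!-- Consolidate ranking signals onto the clean version -->
<link rel="canonical" href="http://www.mysite.com/subject/">
```

The catch, tying back to the thread: crawlers can only see these tags if the page is NOT blocked in robots.txt, so the meta tag approach and the robots.txt block work against each other.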
To your point of "are you sure":
http://www.evergreensearch.com/minimum-viable-seo-8-ways-to-get-startup-seo-right/
(I do like to cite sources - there's so many great articles out there!)