Block in robots.txt instead of using canonical?
-
When I use a canonical tag for pages that are variations of the same page, it basically means that I don't want Google to index this page. But at the same time, spiders will go ahead and crawl the page. Isn't this a waste of my crawl budget? Wouldn't it be better to just disallow the page in robots.txt and let Google focus on crawling the pages that I do want indexed?
In other words, why should I ever use rel=canonical as opposed to simply disallowing in robots.txt?
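For reference, here is roughly what the two options would look like (illustrative URLs, not my actual site). Blocking in robots.txt:

    User-agent: *
    Disallow: /profile/details/

versus a canonical tag in the head of each variation page:

    <link rel="canonical" href="https://www.example.com/profile/" />

The robots.txt rule stops compliant crawlers from fetching the page at all, while the canonical tag still requires a crawl but tells Google which version to index.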
-
With this info, I would go with robots.txt because, as you say, the crawl-budget savings outweigh any potential loss, given how the pages are used and the absence of links to them.
Thanks
-
Thanks Robert.
The pages that I'm talking about disallowing do not have rank or links. They are sub-pages of a profile page. If anything, the main page will be linked to, not the sub-pages.
Maybe I should have explained that I'm talking about a large site - around 400K pages, with more than 1,000 new pages created per week. That's why I am concerned about managing crawl budget. The pages that I'm referring to are not linked to anywhere on the site. Sure, Google can potentially get to them if someone decides to link to them on their own site, but this is unlikely and certainly won't happen on a large scale. So I'm not really concerned about losing PageRank on the main profile page if I disallow them. To be clear: we have many thousands of pages with content that we want to rank. The pages I'm talking about are not important in those terms.
So it's really a question of balance... if these pages (there are MANY of them) are included in the crawl (and in our sitemap), potentially it's a real waste of crawl budget. Doesn't this outweigh the minuscule, far-fetched potential loss?
I understand that Google designed rel=canonical for this scenario, but that does not mean that it's necessarily the best way to go considering the other options.
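For what it's worth, if we did go the robots.txt route, a single wildcard rule could cover all of these sub-pages at once (a hypothetical pattern - our real URL structure differs):

    User-agent: *
    Disallow: /profile/*/details

Google supports the * wildcard in robots.txt, so a rule like this would keep the main profile pages crawlable while excluding every sub-page beneath them.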
-
Thanks Takeshi.
Maybe I should have explained that I'm talking about a large site - around 400K pages, with more than 1,000 new pages created per week. That's why I am concerned about managing crawl budget. The pages that I'm referring to are not linked to anywhere on the site. Sure, Google can potentially get to them if someone decides to link to them on their own site, but this is unlikely (since it's a sub-page of the main profile page, which is where people would naturally link) and certainly won't happen on a large scale. So I'm not really concerned about link-juice evaporation. According to AJ Kohn here, it's not enough to see in Webmaster Tools that Google has indexed all pages on our site. There is also the issue of how often pages are being crawled, which is what we are trying to optimize for.
So it's really a question of balance... if these pages (there are MANY of them) are included in the crawl (and in our sitemap), potentially it's a real waste of crawl budget. Doesn't this outweigh the minuscule, far-fetched potential loss?
Would love to hear your thoughts...
-
I would go with the canonicals. If there are any links going to these duplicate pages, the canonical will prevent the "link juice evaporation" that happens when Google can see links pointing to a page but can't crawl it because of robots.txt. Best to let Google just crawl the page and see the canonical so that it understands that it is a duplicate page.
Having canonicals on all your pages is good practice anyway, as it can prevent inadvertent duplicate content from things like query parameters.
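For example, with a self-referencing canonical in place, a URL reached with tracking parameters still consolidates to the clean version (hypothetical URLs):

    Crawled URL:  https://www.example.com/profile/123?utm_source=newsletter
    In its head:  <link rel="canonical" href="https://www.example.com/profile/123" />

Google then treats the parameterized URL as a duplicate of the canonical one instead of indexing both.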
Crawl budget can be of some concern if you're talking about a massive number of pages, but start by taking a look at Google Webmaster Tools and comparing how many of your pages are being crawled vs. the total number of pages on your site. As long as that ratio isn't small, you should be good. You can also earn more crawl budget by building links and increasing your domain authority.
-
I don't disagree at all and I think AJ Kohn is a rock star. In SEO, I have learned over time that there are rarely absolutes like "always do this" or "never do that." I based my answer on how you posited the question.
If you read AJ's post you will note that the rel=canonical issue comes up in the comments from others, not in the body of his post. Yes, if the page is superfluous, like a cart page or a contact page, use robots.txt to block the crawl. But if you have a page with rank, links, etc. that help your canonical page, how are you helping yourself by forgoing rel=canonical?
I think his bigger point was that you want to be aware, and to understand, that the number of times you are crawled is at least partially governed by PageRank, which is governed by all those other things we discussed. If you understand that and keep the crawl focused on your better pages, you help yourself.
Does that clarify a bit?
Best
-
Hi, even if you use the robots.txt file to block these pages, Google can still pick up references to them from third-party websites and index the bare URLs. Such pages will not have a description snippet in the search results and will instead show text that reads:
A description of this result is not available because of this site's robots.txt.
So, to fully keep these pages out of Google's index, you can go in for the page-level meta robots noindex tag. One caveat: Google has to crawl a page to see the meta tag, so it only works on pages that are not also blocked in robots.txt - it's one method or the other, not both. By the way, the robots.txt file can definitely save you some crawl budget. I don't think you should be thinking much about crawl budget, though, as long as your website is super easy to crawl, with simple text-based internal links, fast servers, and so on.
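For reference, the page-level tag goes in the head section of each page:

    <meta name="robots" content="noindex">

Once Google recrawls a page and sees this, the page is dropped from the index even if other sites link to it.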
Those are my two cents, my friend.
Best regards,
Devanur Rafi
-
Thanks for the response, Robert.
I have read lots of SEO advice on maximizing your "crawl budget" - making sure your internal link system is built well to send the bots to the right pages. According to my research, since bots only spend a certain amount of time on your site when they are crawling, it is important to do whatever you can to ensure that they don't "waste time" on pages that are not important for SEO. Just as one example, see this post from AJ Kohn.
Do you disagree with this whole approach?
-
Yair
I think that the canonical is the better option. I am unsure about your use of the term "crawl budget," in that there is no fixed number of times a page or a site will be crawled versus a second, similar site, for example. I have a huge reference site that is crawled every couple of days, and I have small sites of ten pages that are crawled weekly or less. Crawl frequency depends on traffic and the behaviors of that traffic (which would include the number of inbound links, etc.) and on things like resubmitting your sitemap.
The canonical tag was created to clarify to the search engine which page you consider to be the relevant one. Go ahead and use it.
Best
Robert