"Equity sculpting" with internal nofollow links
-
I’ve been trying a couple of new site auditor services this week, and both have flagged the fact that I have some nofollow links to internal pages.
I see this subject has popped up from time to time in this community. I also found a 2013 Matt Cutts video on the subject:
https://searchenginewatch.com/sew/news/2298312/matt-cutts-you-dont-have-to-nofollow-internal-links
At a couple of SEO conferences I’ve attended this year, I was advised that nofollow on internal links can be useful, so as not to squander link juice on secondary (but necessary) pages. I suspect many websites have a lot of internal links in their footers and are sharing the love with pages that don’t really need to be boosted. These pages can still be indexed, just not given a helping hand to rank by strong pages. This “equity sculpting” (I made that up) seems to make sense to me, but am I missing something?
Examples of these secondary pages include login pages, site maps (human readable), policies – arguably even the general contact page.
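To make the markup concrete, this is the kind of footer being discussed — primary pages left as normal links, secondary pages nofollowed (all paths and labels below are hypothetical):

```html
<!-- Hypothetical footer: rel="nofollow" only on links to secondary pages -->
<footer>
  <a href="/products/">Products</a>                           <!-- primary: followed -->
  <a href="/login/" rel="nofollow">Login</a>                  <!-- secondary -->
  <a href="/sitemap-page/" rel="nofollow">Site map</a>        <!-- secondary -->
  <a href="/privacy-policy/" rel="nofollow">Privacy policy</a>
</footer>
```

Note that a nofollowed page can still be indexed if anything else links to it; the attribute only changes how that one link is treated.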
Thoughts?
Regards,
Warren -
Useful reference links. Many thanks, Mike.
-
Here's a bit more on the subject.
Matt Cutts PageRank Sculpting 2009
TheSEMPost 2015 - PageRank Sculpting
The SEOBlog PageRank Sculpting 2014
It just feels like every other year or so, this concept starts coming back up. The trouble is that, as much as it works in theory, in practice it doesn't. Personally, I think it's a better use of time and effort to look at your site navigation and see if it's user-friendly, intuitive, and natural in order to direct flow better, and also to work on link-building efforts to increase authority.
-
Thanks, Mike.
Just to be clear, I still want those non-primary internal pages (maybe not the human sitemap and login) to be indexed, so a robots.txt approach will not completely solve the problem. I just don't want to potentially squander link juice on secondary pages. Footers tend to contain quite a bulk of links, so there is a lot of dilution there. I had hoped that by halving my links, I'd be doubling the outbound link equity.
The first reference was useful, but only mentions my sculpting goal in the very last sentence without elaborating. The thing I found most interesting was the first comment from Mark Traphagen:
So, if this is true, there's absolutely no equity saving to be had from nofollow'ing internal links to my non-primary pages. But... is it true?! Any experiment results out there?
Finally, with regards to old versions of policies being published, I can't see how that would cause any legal problems. It's the version that is published that is important and, while I can set directives on cache expiry, nobody can be responsible for out-of-date information stored in a third-party cache (unless, of course, it was unlawful at the time of publishing).
-
Adding Nofollow to a handful of links on your site will not magically sculpt link equity in such a way as to create a noticeable improvement like that. If anything, you could just use robots.txt to remove those pages from being crawled. The bots don't necessarily need to index your login page, your human sitemap (if they already have their own), policies (which can change and cause legal issues if an older version is cached), and a few others.
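As a sketch of the robots.txt route mentioned above (the paths are hypothetical — adjust them to your own URLs); note that this blocks crawling rather than indexing, so a page that others link to can still appear in the index, just without a snippet:

```
# Hypothetical robots.txt
User-agent: *
Disallow: /login/
Disallow: /sitemap-page/
Disallow: /policies/
```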
And just a few months ago Gary Illyes stated that there's no good reason to nofollow internal links:
http://www.thesempost.com/google-dont-ever-nofollow-your-own-internal-links/
Related Questions
-
Disavow links and domain of SPAM links
Hi, I have a big problem. For the past month, my company website has been scraped by hackers. This is how they do it:
1. Hack unmonitored sites and/or sites that are still using old versions of WordPress or other out-of-the-box CMSes.
2. Create spam pages with links to my pages, plus plant trojan horses and scripts to automatically grab resources from my server. Some sites were directly uploaded with pages from my site.
3. Create pages with titles, keywords, and descriptions consisting of my company brand name.
4. Use the HTTP referrer to redirect Google search results to competitor sites.
What I have done so far:
1. Blocked the identified sites' IPs in my WAF. This prevented those hacked sites from grabbing resources from my site via scripts.
2. Reached out to webmasters and hosting companies to remove the affected sites. So far this has not been very effective, as many of the sites have no webmaster, and only a few hosting companies respond promptly; some don't even reply after a week.
The problem now: by the time I realized what was happening, there were already hundreds if not thousands of sites being used by the hacker. Literally tens of thousands of hacked or scripted pages with my company brand title, keywords, and description have been crawled and indexed by Google. I am routinely removing and disavowing every day, but there is just so much of it indexed now.
Questions:
1. What is the best way forward for me to resolve this?
2. Disavow links and domains: does disavowing a domain mean all the links from that domain are disavowed?
3. Can anyone recommend an SEO company which has dealt with such an issue before and successfully rectified it?
Note: SEAGM is a company-branded keyword.
Technical SEO | ahming7770 -
Does using data-href="" work more effectively than href="" rel="nofollow"?
I've been looking at some bigger enterprise sites and noticed some of them use HTML like this:
<a data-href="http://www.otherdomain.com/" class="nofollow" rel="nofollow" target="_blank">...</a>
instead of a regular href="". Does using data-href and some JavaScript help with shaping internal links, rather than just using a strict nofollow?
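For context on the question above: the usual idea behind this pattern is that no crawlable href exists at all — the destination URL lives in a data attribute, and a script wires up the click. A minimal sketch of how such markup might be activated (element names and the URL are hypothetical):

```html
<!-- Hypothetical sketch of the data-href pattern -->
<span class="nofollow" data-href="http://www.otherdomain.com/">Partner site</span>
<script>
  // Turn every [data-href] element into a clickable pseudo-link
  document.querySelectorAll('[data-href]').forEach(function (el) {
    el.style.cursor = 'pointer';
    el.addEventListener('click', function () {
      window.open(el.getAttribute('data-href'), '_blank');
    });
  });
</script>
```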
Technical SEO | JDatSB -
Duplicate content with "no results found" search result pages
We have a motorcycle classifieds section that lets users search for motorcycles for sale using various drop-down menus to pick year, make, type, model, trim, etc. These searches create URLs such as:
www.example.com/classifieds/search.php?vehicle_manufacturer=Triumph&vehicle_category=On-Off Road&vehicle_model=Tiger&vehicle_trim=800 XC ABS
We understand that all of these URL varieties are considered unique URLs by Google. The issue is that we are getting duplicate content errors on the pages that have no results, as they have no content to distinguish themselves from each other. URLs like:
www.example.com/classifieds/search.php?vehicle_manufacturer=Triumph&vehicle_category=Sportbike
and
www.example.com/classifieds/search.php?vehicle_manufacturer=Honda&vehicle_category=Streetbike
will each show a results page that says "0 results found". I'm wondering how we can distinguish these "unique" pages better? Some thoughts:
- make sure the <title> reflects what was searched
- add a heading such as "0 results found for Triumph On-Off Road Tiger 800 XC ABS"
Can anyone please help out and lend some ideas in solving this? Thank you.
Technical SEO | seoninjaz -
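One common way to handle the zero-results pages described above is exactly along the lines suggested: build a distinguishing <title> and heading from the query parameters, and mark empty result sets noindex so they drop out of duplicate-content reports. A hypothetical sketch of the generated markup:

```html
<!-- Hypothetical output for an empty search result page -->
<title>0 results found for Triumph Sportbike | Example Classifieds</title>
<meta name="robots" content="noindex, follow">
<!-- ...page body... -->
<h1>0 results found for Triumph Sportbike</h1>
<p>Try removing a filter, or browse all Triumph listings.</p>
```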
Can name="author" register as a link?
Hi all, We're seeing a very strange result in Google Webmaster Tools. In "Links to your site", there is a site which we had nothing to do with (i.e. we didn't design or build it) showing over 1,600 links to our site! I've checked the site several times now, and the only reference to us is in the rel="author" tag. Clearly the agency that did their design/SEO nicked our meta, forgetting to delete or change the author tag! There are literally no other references to us on this site, and there never have been (to our knowledge, at least), so I'm very puzzled as to why Google thinks there are 1,600+ links pointing to us. The only thing I can think of is that Google recognises name="author" content as a link... seems strange, though. Plus the content="" only contains our company name, not our URL. Can anybody shed any light on this for me? Thanks guys!
Technical SEO | RiceMedia -
Google is Showing Website as "Untitled"
My freelance designer made some changes to my website, and all of a sudden my homepage was showing the title I have in DMOZ. We thought maybe the NOODP tag was not correct, so we edited that a little, and now the site is showing as "Untitled". The website is http://www.chemistrystore.com/. Of course he didn't save an old copy that we can revert to; that is a practice that will end. I have no idea why the title and description we have set for the homepage are not showing in Google when they previously were. Another weird thing I noticed: when I do ( site:chemistrystore.com ) in Google, I get the https version of the site showing with the correct title and description. When I do ( site:www.chemistrystore.com ), the homepage doesn't seem to show up, although there are 4,000+ pages on the site; my guess is that if it is showing up, it is showing up as "Untitled". My question is: how can we get Google to start displaying the proper title and description again?
Technical SEO | slangdon -
What should I do with the "Login", "Register", and "My Trolley" links on every page?
My website Ommrudraksha has three links on every page:
1. Login
2. Register
3. My trolley
I do not want to give any weight to these links. Will they be counted when the page's links are calculated? Should I remove them as links and use buttons instead (styled to look like links)?
Technical SEO | Ommrudraksha -
Honeypot Captcha - rated as "cloaked content"?
Hi guys, in order to get rid of the very old-school captcha on our contact form at troteclaser.com, we would like to use a honeypot captcha. The idea is to add a field that is hidden from human visitors but likely to be filled in by spam bots. In this way we can sort out all those spam contact requests.
Technical SEO | Troteclaser
More details on "honeypot captchas":
http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx
Any idea if this single cloaked field will have negative SEO impacts? Or is there another alternative to keep out those spam bots? Greets from Austria,
Thomas -
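The honeypot idea Thomas describes above usually comes down to a trap field hidden with CSS rather than type="hidden" (which naive bots tend to skip), plus a server-side check that rejects any submission where the trap is filled. A minimal sketch, with made-up field names:

```html
<!-- Hypothetical honeypot field: invisible to humans, tempting to naive bots -->
<style>
  .hp-wrap { position: absolute; left: -9999px; }
</style>
<form action="/contact" method="post">
  <p class="hp-wrap" aria-hidden="true">
    <label for="website">Leave this field empty</label>
    <input type="text" id="website" name="website" autocomplete="off" tabindex="-1">
  </p>
  <p><input type="email" name="email" placeholder="Your email"></p>
  <!-- Server side: discard the submission if "website" is non-empty -->
  <button type="submit">Send</button>
</form>
```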
Does 301 redirect pass "freshness?"
Greetings! I work for an online retailer, and we recently launched a voting tool that allows customers to voice their opinion on whether or not we should carry a new item. It's been a huge success, and we've been generating thousands of comments. As a result, it's helped our SEO, and our products are showing up on the first page for some keywords without any external links pointing to these pages. Our plan is to sell a product if it does well during the voting period. Unfortunately, we're not able to process the sale on the voting page and need to redirect users to another page on our site. I understand that a 301 redirect transfers "link juice" to the new destination URL. But does it also transfer "freshness"? I ask because our new landing pages will not be updated as frequently as the voting pages.
Example of our voting page:
http://www.uncommongoods.com/voting/product/50012/infant-fortune-cookie-booties
Example of a redirected item page (where the sale can be processed):
http://www.uncommongoods.com/product/baby-tube-socks-set-of-4
Any help/comments would be appreciated. Thank you!
Technical SEO | znotes
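For reference, the voting-to-product redirect described above would look something like this on an Apache server (a sketch using mod_alias; the target path is hypothetical, since the question doesn't give the real product URL for that item):

```
# Hypothetical Apache config: 301 a finished voting page to its product page
Redirect 301 /voting/product/50012/infant-fortune-cookie-booties /product/infant-fortune-cookie-booties
```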