Robots.txt and redirected backlinks
-
Hey there,
since a client's global website has a very complex structure which led to big duplicate content problems, we decided to disallow crawler access in robots.txt and instead allow access to only a few relevant subdirectories. Indexing has improved since then, but I was wondering if we might have cut off link juice: several backlinks point to the disallowed root directory and are 301-redirected from there to the allowed directory. Could this cause any problems?
Example: a backlink points to example.com (disallowed in robots.txt), which is 301-redirected to example.com/uk/en (allowed in robots.txt). Would this cut off the link juice?
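For illustration, our robots.txt looks roughly like this (simplified; the real file allows a few more subdirectories, and note that Allow is an extension honoured by Google and Bing rather than part of the original robots.txt standard):

    User-agent: *
    # Block the whole site...
    Disallow: /
    # ...except the relevant country/language sections
    Allow: /uk/en/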
Thanks a lot for your thoughts on this.
Regards,
Jochen
-
A noindexed page can still accumulate and pass link equity, although opinions vary on whether some of that link juice "evaporates" along the way. I'm inclined to agree with Chris, though, that there's probably no need to noindex a page that redirects to a page that you do want indexed.
-
Hi Jochen,
It's an interesting situation and, to be honest, I don't know for sure how search engines will deal with that "link juice". It comes down to whether they process the robots.txt block or the .htaccess redirect first. If they check robots.txt first (which is my suspicion), they never fetch the blocked URL, so they never see the redirect that would pass the strength on.
I suppose you could test this by submitting the redirecting URL for indexing via Search Console and seeing whether it reports the redirect or says the page is blocked.
Interesting question aside, there's no real need to block access to a 301'd page; if anything, you want crawlers to reach the old URL so they can follow the redirect.
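If you wanted to keep the blanket block but still let crawlers reach the root's redirect, one option is to explicitly allow just the bare root URL. A sketch, relying on Google's support for the $ end-of-URL anchor (worth checking in Search Console's robots.txt Tester before going live):

    User-agent: *
    # Allow only the bare root URL so its 301 can be followed
    Allow: /$
    # Block everything else...
    Disallow: /
    # ...except the sections that should be crawled
    Allow: /uk/en/

Google applies the most specific (longest) matching rule, so Allow: /$ wins for the root URL itself while Disallow: / still covers everything else.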
Also, apologies if I'm just highlighting the obvious here, but it would be far better to clean up the site structure and remove that duplication rather than just masking it with robots.txt; the user experience is at least as important as the algorithms!
Along the same lines, cleaning up those pages is going to help your crawl budget immensely.
Related Questions
-
Disallowed "Search" results with robots.txt and Sessions dropped
Hi
Intermediate & Advanced SEO | Frankie-BTDublin
I've started working on our website and I've found millions of "Search" URLs which I don't think should be getting crawled and indexed (e.g. .../search/?q=brown&prefn1=brand&prefv1=C.P. COMPANY|AERIN|NIKE|Vintage Playing Cards|BIALETTI|EMMA PAKE|QUILTS OF DENMARK|JOHN ATKINSON|STANCE|ISABEL MARANT ÉTOILE|AMIRI|CLOON KEEN|SAMSONITE|MCQ|DANSE LENTE|GAYNOR|EZCARAY|ARGOSY|BIANCA|CRAFTHOUSE|ETON). I tried to disallow them in the robots.txt file, but our sessions dropped about 10% and our average position on Search Console dropped 4-5 positions over 1 week. It looks like over 50 million URLs were blocked, all of them like the example above, and none of them getting any traffic to the site. I've allowed them again, and we're starting to recover. We've been fixing problems with getting the site crawled properly (sitemaps weren't added correctly, products were blocked from spiders on category pages, canonical pages were blocked from crawlers in robots.txt) and I'm thinking Google were doing us a favour and using these pages to crawl the product pages, as it was the best/only way of accessing them. Should I be blocking these "Search" URLs, or is there a better way of going about it? I can't see any value in these pages except Google using them to crawl the site.
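(If the search pages do need blocking again later, one middle ground is to disallow only the faceted variants rather than every search URL; a sketch, assuming the prefn1 parameter is what marks the faceted versions:)

    User-agent: *
    # Block only faceted search results; plain /search/?q= pages stay crawlable
    Disallow: /search/?*prefn1=

-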
Block subdomain directory in robots.txt
Instead of blocking an entire subdomain (fr.sitegeek.com) with robots.txt, we would like to block one directory (fr.sitegeek.com/blog).
Intermediate & Advanced SEO | gamesecure
'fr.sitegeek.com/blog' and 'www.sitegeek.com/blog' contain the same articles in one language; only the labels are changed for the 'fr' version, and we suppose that the duplicate content causes problems for SEO. We would like 'www.sitegeek.com/blog' articles to be crawled and indexed, but not 'fr.sitegeek.com/blog'. So, please suggest how to block a single subdomain directory (fr.sitegeek.com/blog) with robots.txt. This applies only to the blog directory of the 'fr' version; all other directories and pages of the 'fr' version should still be crawled and indexed. Thanks,
Rajiv
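(Since robots.txt works per host, the file served at fr.sitegeek.com/robots.txt controls that subdomain on its own; a minimal sketch:)

    # robots.txt served at fr.sitegeek.com/robots.txt
    User-agent: *
    Disallow: /blog/

The robots.txt at www.sitegeek.com simply omits that line, so www.sitegeek.com/blog stays crawlable.
-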
How to remove certain backlinks completely?
Hello, I deleted certain backlinks manually a few months ago; they no longer exist. But even today, Google Webmaster Tools and the Moz backlink tool still show these backlinks. Why? What do I need to do to remove them completely? Thank you 🙂
Intermediate & Advanced SEO | Ivek99
-
Robots.txt Blocked Most Site URLs Because of Canonical
Had a bit of a "gotcha" in Magento. We had the Yoast Canonical Links extension, which worked well, but then we installed Mageworx SEO Suite, which broke canonical links. Unfortunately it started putting www.mysite.com/catalog/product/view/id/516/ as the canonical link, and all URLs matching /catalog/product/view/* are blocked in robots.txt. So unfortunately we told Google that the correct page is also a blocked page. The pages haven't been removed as far as I can see, but traffic has certainly dropped. At the same time we also made some site changes, grouping some pages and adding 301 redirects. We've resubmitted the sitemap and done a Fetch as Google. Any other ideas? Any idea how long it will take to become unblocked?
Intermediate & Advanced SEO | s_EOgi_Bear
-
Too many 301 redirects?
Hey, My company currently has one chief website with about 500-600 other domains that all feature the same material as the chief website. These domains have been around for about 5 years and have actually picked up some link traffic. I have all of these identical web pages using rel=canonical, but I was wondering whether I would be better served, for SEO purposes, to 301 redirect all of these sites to their respective pages on our chief website. If I add 500 301 redirects, will the major search engines consider this to be black-hat link building, even though the sites are related and technically already feature the same content? For example, the chief website is www.1099pro.com and I would 301 redirect the below sites to the chief site:
1099softwarepro.com
1099softwarepro.info
1099softwarepro.net
1099softwarepro.biz
1099softwareprofessionals.com
1099softwareprofessionals.info
...you get the point
Intermediate & Advanced SEO | Stew222
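(For reference, the domain-level 301 itself is simple; sketched as an .htaccess rule on one of the secondary domains, assuming Apache with mod_rewrite and paths that map one-to-one onto the chief site:)

    RewriteEngine On
    # Send every request on this domain to the same path on the chief site
    RewriteCond %{HTTP_HOST} ^(www\.)?1099softwarepro\.com$ [NC]
    RewriteRule ^(.*)$ http://www.1099pro.com/$1 [R=301,L]

-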
.htaccess 301 Redirect Help! Specific Redirects and Blanket Rule
Hi there, I have the following domains: OLD DOMAIN: domain1.co.uk NEW DOMAIN: domain2.co.uk I need to create a .htaccess file that 301 redirects specific, individual pages on domain1.co.uk to domain2.co.uk. I've searched for hours to try and find a solution, but I can't find anything that will do what I need. The pages on domain1.co.uk have all kinds of filenames and extensions, but they will be redirected to a WordPress website that has a clean folder structure. Some example URLs to be redirected from the old website:
http://www.domain1.co.uk/charitypage.php?charity=357
http://www.domain1.co.uk/adopt.php
http://www.domain1.co.uk/register/?type=2
These will need to be redirected to the following URL types on the new domain:
http://www.domain2.co.uk/charities/
http://www.domain2.co.uk/adopt/
http://www.domain2.co.uk/register/
I would also like a blanket/catch-all redirect from anything else on www.domain1.co.uk to the homepage of www.domain2.co.uk if there isn't a specific individual redirect in place. I'm literally tearing my hair out with this, so any help would be greatly appreciated! Thanks
Intermediate & Advanced SEO | Townpages
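(A sketch of the kind of .htaccess that could handle this on domain1.co.uk, assuming Apache 2.4 for the QSD flag; mod_alias Redirect can't match query strings, so mod_rewrite is needed, and the exact page mappings here are illustrative:)

    RewriteEngine On
    # Specific pages first; query strings need a RewriteCond
    RewriteCond %{QUERY_STRING} (^|&)charity=357(&|$)
    RewriteRule ^charitypage\.php$ http://www.domain2.co.uk/charities/ [R=301,L,QSD]
    RewriteRule ^adopt\.php$ http://www.domain2.co.uk/adopt/ [R=301,L]
    RewriteCond %{QUERY_STRING} (^|&)type=2(&|$)
    RewriteRule ^register/?$ http://www.domain2.co.uk/register/ [R=301,L,QSD]
    # Catch-all: anything not matched above goes to the new homepage
    RewriteRule ^ http://www.domain2.co.uk/ [R=301,L,QSD]

-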
Robots.txt: Can you put a /* wildcard in the middle of a URL?
We have noticed that Google is indexing the language/country directory versions of directories we have disallowed in our robots.txt. For example, Disallow: /images/ is blocked just fine. However, once you add our /en/uk/ directory in front of it, there are dozens of pages indexed. The question is: can I put a wildcard in the middle of the string, e.g. /en/*/images/, or do I need to list out every single country for every language in the robots file? Anyone know of any workarounds?
Intermediate & Advanced SEO | IHSwebsite
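(For reference, Google and Bing do honour a wildcard in the middle of a pattern; it's a non-standard extension, so minor crawlers may ignore it, but it avoids listing every country/language pair:)

    User-agent: *
    # The original rule
    Disallow: /images/
    # One pattern covering /en/uk/images/, /fr/ca/images/, and so on;
    # note it does not match /images/ itself, hence the rule above
    Disallow: /*/images/

-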
How long should a domain redirect take?
Hi, I know that this is a "How long is a piece of string?" type of question, but at what point should the ranking value of site A pass over to site B following a domain 301 redirect? I have shifted a domain over to a new URL, on the same hosting server and the same IP address. I haven't made any URL changes or any content changes other than changing the site logo to match the new domain name. Domain B is basically an exact clone of domain A. I have redirected domain A to domain B using the following line at the top of the .htaccess file:
Redirect 301 / http://www.newdomain.com/
I have submitted a sitemap for the new domain via Google Webmaster Tools. It looks like the original domain has been completely de-indexed by Google following the redirect, as all rankings have been dropped from the results and there are no results for a site:olddomain.com search. Surely the rankings should have switched over at this point? Any help would be much appreciated.
Intermediate & Advanced SEO | AdeLewis
Ade.