Moz was unable to crawl your site? Redirect Loop issue
-
Moz was unable to crawl my site on Jul 25, 2017. I am getting this message for the site: "unable to access your homepage due to a redirect loop."
The site is working fine and was last crawled successfully on July 22nd. I am not sure why this issue is coming up.
When I checked the website with a Chrome extension, it says: "The server has previously indicated this domain should always be accessed via HTTPS (HSTS protocol). Chrome has cached this internally, and did not connect to any server for this redirect. Chrome reports this redirect as a '307 Internal Redirect'; however, this probably would have been a '301 Permanent Redirect' originally. You can verify this by clearing your browser cache and visiting the original URL again."
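One way to double-check outside the browser (a minimal sketch on my part, not anything from Moz: it replays the request without Chrome's cached HSTS rule, so only redirects the server actually sends will show; swap in your own homepage URL):

# Minimal sketch: fetch the homepage outside the browser, so Chrome's internal
# HSTS "307 Internal Redirect" is bypassed and only real server redirects appear.
# Uses the third-party "requests" library (pip install requests).
import requests

# Placeholder URL - replace with the plain-HTTP address of your homepage.
response = requests.get("http://www.example.com/", timeout=10)

# response.history lists every redirect hop the server actually returned.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(response.status_code, response.url)

If that shows a single 301 from the HTTP URL to the HTTPS one, the 307 in Chrome is just the browser's own HSTS shortcut rather than something the server is sending.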
I am not sure if this is the actual issue. The site was migrated to HTTPS just 5 days ago, so maybe it will resolve automatically. Can anybody from the Moz team help me with this?
-
Hi, just checking if anyone has figured out the issue with this?
-
Yes, this is actually very confusing.
I don't know which metrics Moz uses when it crawls websites and generates the issues report. A few days back, when I posted this question, I also had this issue on my website, and now it has been removed automatically.
Nothing alarming shows up in Google Webmaster Tools either, and when I checked manually everything seemed fine, so I can't say much about this issue.
If you find a solution, please let me know so that I can work on it too.
Thanks anyway.
-
Hey Guys
I'm getting the same redirect loop error today for one of my client's sites. We have not changed anything on the site recently and it worked perfectly in Moz Pro Campaign for several weeks, so what's happened? The same thing happened to another client of mine a week ago. Their site crawled perfectly in Moz for weeks, but all of a sudden Moz could not crawl it because of a redirect loop issue.
The site is http://aprilrandlelaw.com/
FULL ERROR MESSAGE
We were unable to access your homepage due to a redirect loop, which prevented us from crawling the rest of your site. Your homepage is likely redirecting to itself. Because we can only crawl if we find unique pages, the redirect on your homepage is stopping us from crawling past that page. It is possible that other browsers and search engines are encountering this problem and aborting their sessions as well. We recommend eliminating any unnecessary, circular or indefinite redirects on your homepage. Also, make sure your site is not mandating cookies, which can cause circular redirects and make crawling more difficult. Typically errors like this should be investigated and fixed by the site webmaster.
-
Hi Rahul. You can run your site through Google's PageSpeed Insights tool and see the redirects there as well. There are also many other things reported by PageSpeed that could be fixed. I have found optimizing images and minification to be pretty simple, and PageSpeed will provide the optimized files for download.
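If it helps, here is a rough sketch of pulling the same report from the PageSpeed Insights v5 API instead of the web UI. The endpoint is Google's documented v5 URL, but the audit IDs and response keys below are my assumptions from Lighthouse, so verify them against a real response (and add &key=YOUR_API_KEY if Google asks for one):

# Rough sketch (assumed response keys): query the PageSpeed Insights v5 API and
# print a few Lighthouse audits covering redirects, images and minification.
import json
import urllib.request

api = ("https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
       "?url=https://kuzyklaw.com/&strategy=desktop")

with urllib.request.urlopen(api) as resp:
    data = json.load(resp)

# "lighthouseResult" / "audits" and the audit IDs below are assumptions to verify.
audits = data.get("lighthouseResult", {}).get("audits", {})
for audit_id in ("redirects", "uses-optimized-images",
                 "unminified-css", "unminified-javascript"):
    audit = audits.get(audit_id, {})
    print(audit_id, "-", audit.get("title"), "score:", audit.get("score"))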
Best!
-
Thanks, Chris, for your reply.
No, we didn't use a plugin.
We used a 301, but the redirect path in Chrome is showing a 302 on the www version and then a 301 to https://kuzyklaw.com.
Maybe this is why we are getting this issue. I will check with my developers to fix it. Thanks anyway.
-
Hi. Hopefully the Moz team will respond, but I noticed that if I type "http://www.kuzyklaw.com" I get a 301 to "https://www.kuzyklaw.com" and then another 301 to "https://kuzyklaw.com/".
One too many redirects, I think. I noticed you are on WordPress. Did you use a plugin for the HTTPS change?
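For reference, a minimal .htaccess sketch (my assumption: Apache with mod_rewrite, and https://kuzyklaw.com/ as the canonical version) that sends both the plain-HTTP and the www variants to the final URL in a single 301 instead of chaining two redirects:

RewriteEngine On
# Anything that is not HTTPS, or that still uses the www host, goes straight
# to the canonical non-www HTTPS URL in one 301 hop.
RewriteCond %{HTTPS} !on [OR]
RewriteCond %{HTTP_HOST} ^www\.kuzyklaw\.com$ [NC]
RewriteRule ^(.*)$ https://kuzyklaw.com/$1 [L,R=301]

With something like that in place, a redirect checker should report a single 301 for http://www.kuzyklaw.com instead of two hops.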
Related Questions
-
Moz spam score 16 for some pages - Never a manual penalty: Disavow needed?
Hi community, We have some top-hierarchy pages with a spam score of 16 as per Moz, due to backlinks with a very high spam score. I read that we could ignore this as long as we are not employing paid links and never got a manual penalty. Still, we want to give it a try by disavowing certain domains to check if this helps. Anyway, we are not going to lose any backlink score by rejecting these low-quality backlinks. Can we proceed? Thanks
White Hat / Black Hat SEO | | vtmoz0 -
Why isn't a 301 redirect removing old style URLs from Google's index?
I have two questions: 1 - We changed the URL structure of our site. Old URLs were in the format of kiwiforsale.com/used_fruit/yummy_kiwi. These URLs are 301 redirected to kiwiforsale.com/used-fruit/yummy-kiwi. We are getting duplicate content errors in Google Webmaster Tools. Why isn't the 301 redirect removing the old style URL out of Google's index? 2 - I tried to remove the old style URL at https://www.google.com/webmasters/tools/removals; however, I got the message that "We think the image or web page you're trying to remove hasn't been removed by the site owner. Before Google can remove it from our search results, the site owner needs to take down or update the content." Why are we getting this message? Doesn't the 301 redirect alert Google that the old style URL is toast and it's gone?
White Hat / Black Hat SEO | | CFSSEO0 -
Forcing Entire site to HTTPS
We have a WordPress site and hope to force everything to HTTPS. We changed the site name (in WordPress settings) to https://mydomain.com. In the .htaccess code from http://moz.com/blog/htaccess-file-snippets-for-seos ("Ensure we are using the HTTPS version of the site"): RewriteCond %{HTTPS} !on RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] - but some blogs (http://stackoverflow.com/questions/19168489/https-force-redirect-not-working-in-wordpress) say: RewriteCond %{HTTPS} off RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] Which one is right? 🙂 And are we missing anything?
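For what it's worth, the two rules quoted above do the same thing: %{HTTPS} is only ever "on" for requests that arrived over HTTPS, so "!on" and "off" match the same plain-HTTP traffic. A minimal sketch, assuming Apache terminates TLS itself (the commented variant is an assumption for proxy/CDN setups, not something from the question):

RewriteEngine On
# Either condition works; %{HTTPS} is "on" only when the request came in over HTTPS.
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Assumed variant for sites behind a proxy/CDN that terminates TLS upstream,
# where %{HTTPS} stays "off": check the X-Forwarded-Proto header instead.
# RewriteCond %{HTTP:X-Forwarded-Proto} !https
# RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]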
White Hat / Black Hat SEO | | joony0 -
Redirect from old domain to a new domain
Hi, assuming I have an old domain that I would like to redirect to the new domain because the old domain has good links pointing to it and has been ranking for its keywords: would that be a wise choice? And can I redirect my subdomain to my new one too? For example, website1.com/life > website2.com/life - how do I do so? Can I do that by hosting the old domain on my new domain's hosting, and would all those redirects include the subdomain redirect?
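A minimal .htaccess sketch of the usual approach, using the example names from the question and assuming the old domain is parked on a host running Apache (that hosting setup and the https:// scheme are my assumptions, not something stated above):

# Served for website1.com: forward every request, path included (e.g. /life),
# to the same path on the new domain with a 301.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?website1\.com$ [NC]
RewriteRule ^(.*)$ https://website2.com/$1 [L,R=301]

A real subdomain (say blog.website1.com, a hypothetical name) would additionally need its own DNS record pointing at the same host and a matching RewriteCond.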
White Hat / Black Hat SEO | | andzon0 -
Controlling crawl speed/delay through dynamic server-code and 503's
Lately I'm experiencing performance trouble caused by bot traffic. Although Googlebot is not the worst (it's mainly bingbot and ahrefsbot), they cause heavy server load from time to time. We run a lot of sites on one server, so heavy traffic on one site impacts other sites' performance. The problem is that 1) I want a centrally managed solution for all sites (per-site administration takes too much time), which 2) takes into account total server load instead of only one site's traffic and 3) controls overall bot traffic instead of controlling traffic for one bot. IMO user traffic should always be prioritized higher than bot traffic. I tried "Crawl-delay:" in robots.txt, but Googlebot doesn't support that. Although my custom CMS system has a solution to centrally manage robots.txt for all sites at once, it is read by bots per site and per bot, so it doesn't solve 2) and 3). I also tried controlling crawl speed through Google Webmaster Tools, which works, but again it only controls Googlebot (not other bots) and is administered per site. No solution to all three of my problems. Now I came up with a custom-coded solution to dynamically serve 503 HTTP status codes to a certain portion of the bot traffic. The traffic portion for each bot can be calculated dynamically at runtime from total server load at that moment. So if a bot makes too many requests within a certain period (or whatever other coded rule I invent), some requests will be answered with a 503 while others will get content and a 200. The remaining question is: will dynamically serving 503s have a negative impact on SEO? OK, it will delay indexing speed/latency, but slow server response times do in fact have a negative impact on ranking, which is even worse than indexing latency. I'm curious about your experts' opinions...
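A minimal sketch of how that idea could look (hypothetical names and thresholds, not the poster's actual code): WSGI middleware that answers a growing share of bot requests with a 503 once the one-minute load average passes a limit, while user traffic passes through untouched.

# Hypothetical sketch: shed part of the bot traffic with 503s when the server
# (Unix, since os.getloadavg is used) is under heavy load.
import os
import random

BOT_MARKERS = ("bingbot", "ahrefsbot", "googlebot")  # assumed user-agent substrings

class BotThrottle:
    def __init__(self, app, max_load=4.0):
        self.app = app
        self.max_load = max_load  # 1-minute load average above which bot traffic is shed

    def __call__(self, environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "").lower()
        load = os.getloadavg()[0]
        if load > self.max_load and any(marker in agent for marker in BOT_MARKERS):
            # The share of rejected bot requests grows with the overload.
            if random.random() < min(1.0, (load - self.max_load) / self.max_load):
                start_response("503 Service Unavailable",
                               [("Content-Type", "text/plain"), ("Retry-After", "120")])
                return [b"Temporarily overloaded, please retry later."]
        return self.app(environ, start_response)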
White Hat / Black Hat SEO | | internetwerkNU1 -
301 Redirect ASP.NET Help
Hey, we are redesigning the site and changing a lot of URLs to make them more SEO friendly, but some of the old URLs have PR 4-5. What is the best way to go about this? How do you do a 301 redirect for specific pages in ASP.NET? Or do you recommend something else? Thanks in advance
White Hat / Black Hat SEO | | Madz0 -
Link Farms and the Relationship between 2 Domains with a 301 Redirect
I have an interesting scenario: Domain A was worked on by a disreputable offshore SEO company. The owner of Domain A came to me for my assistance and an evaluation of how the offshore company was doing. I concluded that he should terminate the relationship immediately. One of the bad things they did was register Domain A with a LOT of link farms. I started working on a new site, and eventually we decided to go with Domain B (a better, but totally related, domain name to Domain A). I added a nice new site and had my client write clean, relevant information for it. We've done all legitimate, above-board, by-Google's-recommendation SEO for Domain B. I have a series of 301 redirects from Domain A to Domain B. Since April 24th, organic search results have plummeted. I see many incoming links via Webmaster Tools from the massive link farms, but those link farms have Domain A in their databases, not Domain B. My question: is Domain B inheriting the link juice from Domain A insofar as the incoming links are showing up in Webmaster Tools as directly related to Domain A? Should I sever the ties with Domain A altogether? Thanks.
White Hat / Black Hat SEO | | KateZDCA1 -
404checker.com / crawl errors
I noticed a few strange crawl errors in a Google Webmaster Tools account - further investigation showed they're pages that don't exist linked from here: http://404checker.com/404-checker-log Basically that means anyone can enter a URL into the website and it'll get linked from that page, temporarily at least. As there are hundreds of links of varying quality - at the moment they range from a well known car manufacturer to a university, porn and various organ enlargement websites - could that have a detrimental effect on any websites linked? They are all nofollow. Why would they choose to list these URLs on their website? It has some useful tools and information but I don't see the point in the log page. I have used it myself to check HTTP statuses but may look elsewhere from now on.
White Hat / Black Hat SEO | | Alex-Harford0