Incorrect crawl errors
-
A crawl of my website has indicated that there are some 5XX server errors:
Error Code 608: Page not Decodable as Specified Content Encoding
Error Code 803: Incomplete HTTP Response Received
Error Code 803: Incomplete HTTP Response Received
Error Code 608: Page not Decodable as Specified Content Encoding
Error Code 902: Network Errors Prevented Crawler from Contacting Server
The five pages in question are in fact perfectly working pages and are returning HTTP 200 codes. Is this a problem with the Moz crawler?
-
Thanks for this! I didn't think to check the server logs. I'll have them checked to make sure the server isn't blocking Moz from the crawl. We have thousands of URLs on our website and quite a strict security policy on the server, so I imagine Moz has probably been blocked.
Thanks,
Liam
-
Hi,
These error codes are custom Moz codes listing errors it encounters when crawling your site. It's quite possible that these pages load fine when you check them in a browser (and that Googlebot is able to crawl them as well).
You can find the full list of crawl errors here: https://mza.bundledseo.com/help/guides/search-overview/crawl-diagnostics/errors-in-crawl-reports. You could check these URLs with a tool like web-sniffer.net to inspect the responses, and review the configuration of your server.
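As a quick alternative to web-sniffer.net, the same check can be sketched with Python's standard library: fetch a page and report the pieces relevant to these crawl errors. The user agent string below is an assumption (a stand-in, not necessarily the crawler's real value), and the URL is whatever page you want to test.

```python
from urllib.request import Request, urlopen

def response_summary(url, user_agent="rogerbot"):
    """Fetch a URL and return the status code, the Content-Encoding
    header (if any), and the raw body length in bytes."""
    req = Request(url, headers={
        # Assumption: "rogerbot" stands in for the crawler's real UA string.
        "User-Agent": user_agent,
        "Accept-Encoding": "gzip, deflate",
    })
    with urlopen(req, timeout=10) as resp:
        body = resp.read()  # urllib does NOT auto-decompress, so this is the raw body
        return resp.status, resp.headers.get("Content-Encoding"), len(body)
```

Running this against a problem page shows whether the server really returns 200, which encoding it claims, and how many bytes actually arrive.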
-
608 errors: Home page not decodable as specified Content-Encoding
The server response headers indicated the response used gzip or deflate encoding, but our crawler could not understand the encoding used. To resolve 608 errors, fix your site's server so that it properly encodes the responses it sends.
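As a rough sketch of what "not decodable" means: a crawler trusts the Content-Encoding header and decompresses accordingly, so a header that advertises gzip over a body that isn't actually gzip fails to decode. The function below is illustrative, not Moz's actual code:

```python
import gzip
import zlib

def decode_body(body, content_encoding=None):
    """Decode a response body the way a crawler would, trusting the
    Content-Encoding header. A mismatch raises an error -- roughly the
    situation a 608 reports."""
    if content_encoding == "gzip":
        return gzip.decompress(body)
    if content_encoding == "deflate":
        return zlib.decompress(body)
    return body  # no encoding claimed, use the body as-is

# Properly encoded: header says gzip, body is gzip -> decodes fine.
decode_body(gzip.compress(b"<html>ok</html>"), "gzip")

# Misconfigured: header says gzip, body is plain text -> raises.
try:
    decode_body(b"<html>ok</html>", "gzip")
except OSError:
    pass  # gzip.decompress raises BadGzipFile (an OSError subclass) here
```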
803 errors: Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
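One common shape of an 803: the headers promise a Content-Length, but the connection closes before that many bytes arrive. A minimal check for that condition (illustrative, not the crawler's implementation):

```python
def is_incomplete(content_length, body):
    """Return True when the Content-Length header promised more bytes
    than were actually received before the connection closed."""
    if content_length is None:
        return False  # no promise was made, so we can't call it incomplete
    return len(body) < int(content_length)

# Headers advertised 1024 bytes but the server hung up after 10 bytes:
assert is_incomplete("1024", b"0123456789")
# All promised bytes arrived -- response is complete:
assert not is_incomplete("10", b"0123456789")
```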
902 errors: Unable to contact server
The crawler resolved an IP address from the host name but failed to connect at port 80 for that address. This error may occur when a site blocks Moz's IP address ranges. Please make sure you're not blocking AWS.
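A basic connectivity probe for 902-style failures can be written with just the standard library. Note that it needs to be run from outside your own network (ideally from an AWS host, since that is where the blocked ranges would live) — from inside, a firewall rule against external ranges won't show up:

```python
import socket

def can_connect(host, port=80, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds --
    the step that fails when a crawler reports a 902."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this succeeds from your own machine but the crawler still reports 902, the block is likely IP-based, which is why checking the server logs for denied ranges is the right next step.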
Without the actual URLs, it's impossible to guess what is happening in your specific case.
Hope this helps,
Dirk
-
Related Questions
-
Moz crawling doesn't show all of my pages
Hello, I'm trying to do an on-page SEO audit of my website. When using the SEO crawl, I see only one page, while this website has many more (200+). Is it an issue with the sitemap? The robots.txt? How can I check and correct this? My website is discoolver.com
Link Explorer | MK-Discoolver
-
Strange error in Moz report
I get the following warning about our domain name in the Link Explorer tool: "You entered the URL debtacademy.com which redirects to www.hugedomains.com/domain_profile.cfm?d=debtacademy&e=com. Click here to analyze www.hugedomains.com/domain_profile.cfm?d=debtacademy&e=com instead." Please advise how I can fix this.
Link Explorer | jeffreyjohnson
-
612: Page banned by error response for robots.txt
Hi all,
I ran a crawl on my site https://www.drbillsukala.com.au and received the following error: "612: Page banned by error response for robots.txt." Before anyone mentions it, yes, I have been through all the other threads, but they did not help me resolve this issue. I am able to view my robots.txt file in a browser: https://www.drbillsukala.com.au/robots.txt.
The permissions are set to 644 on the robots.txt file, so it should be accessible.
My Google Search Console does not show any issues with my robots.txt file.
I am running my site through the StackPath CDN, but I'm not inclined to think that's the culprit. One thing I did find odd: even though I entered my website with the https protocol (I double-checked), the Moz spreadsheet listed my site with the http protocol. I'd welcome any feedback you might have. Thanks in advance for your help.
Kind regards
Link Explorer | ME5OTU
-
Local listings aren't being crawled?
Hello, SEO brainiacs. Here is the problem I can't figure out: we have a lot of clients, and all of them have local listings, such as Insider Pages, Yahoo Local, Bing Local, Yelp, Yellow Pages, etc.
All of those local listings include the company's website link, but none of those links are found in reports such as Open Site Explorer or the Link Analysis section of Moz Pro. I do know that the Moz crawl considers only high-DA websites, but all of the local listing domains have DA scores above 80-90, so they can't be NOT crawled. I will say that some local listings do appear on a very few reports, but far from all of them. So what can the problem be?
Link Explorer | seomozinator
-
Why is Moz not crawling my backlinks?
Hi, my website www.dealwithautism.com is 3 months old and has been at DA 1 and PA 1 ever since, even though the site is actively developed with quality content (a couple of posts already have 1k+ Facebook likes acquired editorially; while that doesn't necessarily improve SERP rankings, it sure tells you the posts are engaging). In contrast, another site of mine, www.deckmymac.com, which is hardly ever managed and has no more than 15 posts and just 1 backlink, has DA 14. Running an on-page analysis on www.dealwithautism.com, I observed that Moz has not identified any backlinks or social signals (except G+). However, according to Webmaster Tools, I have 57 links, 51 of them to the root. Even Majestic is able to report 32+ backlinks. So what am I missing? Surely, at this stage my website doesn't deserve DA 1, or does it?
Link Explorer | DealWithAutism
-
Moz Crawl Canonicals and Duplicates
Hi all, I am using Moz Crawl to analyze some sites I have to optimize.
I keep seeing many of my pages detected as duplicate content even when they have rel=canonical applied. Example: www.spain-internship.com/zh-CN/blog-by-aaron
I have seen that on other sites as well. Of course I understand that Moz is not perfect, but is there a known issue, or am I doing something wrong with the canonicals? Regards,
Link Explorer | Eurasmus.com
-
How can I get a Moz crawl report of 404 errors on my site?
I have a Moz subscription, and I see dead links on my website that link externally. Is there a Moz crawl report which will show me these 404 errors and which pages of my site those 404 links are on?
Link Explorer | Marbanasin
-
Clicking "Filter" on the "Just Discovered" tab errors out
If you click the "Filter" button on the "Just Discovered" tab in OSE, the page returns the message "no links found." This is weird, because that message displays even when you didn't actually filter for anything but just clicked the button.
Link Explorer | ALLee