How do I fix an 803 Error?
-
I got an 803 error this week on the Moz crawl for one of my pages. The page loads normally in the browser. We use Cloudflare.
Is there anything I should do, or should I just wait a week and hope it disappears?
803 Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
-
Kristina from Moz's Help Team here. Here is the working link to our Crawl Errors resource guide if you still need it!
https://mza.seotoolninja.com/help/guides/moz-pro-overview/crawl-diagnostics/errors-in-crawl-reports
-
It would be great to read more about this issue here. I would love to debug/troubleshoot the 803 errors, but I have no idea where to start. One problem: it's not possible to adjust the crawl speed/delay of the Moz bot, so I can't tell whether the bot itself is the problem or not. Any suggestions on how to debug an 803 crawl error?
TIA,
Jörg
-
Hi Sha,
The first link with the complete list is not working. I would love to access it. Where can I find the link?
Thanks in advance, Michiel
-
Same here, I found an 803 error on an image. What should I do now? Can you please help?
Thanks
-
Hi,
Found an 803 error on an image. Does that mean I should compress or otherwise optimize the image, or is it a web server error?
Thank you,
-
So if it is a standard WordPress page, would the issue likely be with the WordPress code, or with my on-page content?
-
Hi Zippy-Bungle,
To understand first why the 803 error was reported:
When a page is requested, the web server first sends HTTP headers describing what it is about to deliver. You can see a complete list of these HTTP header fields here.
One of those headers is Content-Length, which tells the client how many bytes of response body the server is about to send. Say, for example, that Content-Length is 100 bytes but the server only sends 74 bytes before closing the connection. The 74 bytes may be perfectly valid HTML, but their length does not match what the header promised.
Since the web server only sent 74 bytes and the crawler expected 100, the crawler hits a closed TCP connection while it is still trying to read the bytes the server said it would send. So you get an 803 error.
Browsers are deliberately lenient about this kind of mismatch and will simply render whatever arrives, but Roger Mozbot (the Moz crawler, identified in your logs as rogerbot) is on a mission to show you any errors that might be occurring. So Roger is configured to detect and report such errors.
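If you want to see the mechanics for yourself, here is a minimal sketch using only Python's standard library (all names are illustrative, nothing here is part of Moz's tooling): a throwaway local server declares Content-Length: 100 but sends just 74 bytes, and a strict HTTP client notices the shortfall the way Roger does, where a lenient reader would simply use the 74 bytes it received.

```python
# Reproduce the Content-Length mismatch behind an 803-style error.
import http.client
import socket
import threading

BODY = b"x" * 74  # the body the server actually sends: 74 bytes


def buggy_server(sock):
    conn, _ = sock.accept()
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Length: 100\r\n"  # promises 100 bytes...
        b"\r\n" + BODY              # ...but sends only 74
    )
    conn.close()                    # then closes the connection early


sock = socket.socket()
sock.bind(("127.0.0.1", 0))  # any free local port
sock.listen(1)
port = sock.getsockname()[1]
threading.Thread(target=buggy_server, args=(sock,), daemon=True).start()

client = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
client.request("GET", "/")
resp = client.getresponse()
try:
    resp.read()  # a strict reader trusts Content-Length and waits for 100 bytes
    result = "complete"
except http.client.IncompleteRead as exc:
    # exc.partial holds the 74 bytes that actually arrived before the close
    result = f"incomplete: got {len(exc.partial)} of 100 bytes"
print(result)
```

Running this prints `incomplete: got 74 of 100 bytes`, which is exactly the condition the crawler reports as an 803.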
The degree to which an 803 error will adversely affect crawl efficiency for search engine bots such as Googlebot, Bingbot and others will vary, but the fundamental problem with all 8xx errors is that they result from violations of the underlying HTTP or HTTPS protocol. The crawler expects all responses it receives to conform to the HTTP protocol and will typically throw an exception when encountering a protocol-violating response.
Because 8xx errors generally indicate a badly-misconfigured site, fixing them should be a priority to ensure that the site can be crawled effectively. It is worth noting here that Bingbot is well known for being highly sensitive to technical errors.
So what makes the mismatch happen?
The problem could originate in the page code, the back-end application, or the web server itself. There are two broad sources:
- Crappy code (e.g. a CMS plugin or template that miscalculates Content-Length)
- Buggy or misconfigured server software
I'm afraid you will need to get a tech who understands this type of problem to work through each of these possibilities to isolate and resolve the root cause.
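As a starting point for that tech, here is a hedged debugging sketch (not a Moz tool, and plain HTTP only, so it won't work as-is against an HTTPS or Cloudflare-proxied URL, or against chunked responses): it fetches a page over a raw socket and compares the declared Content-Length with the bytes actually received before the server closes the connection. The host and path are placeholders for the affected page.

```python
# Compare a server's declared Content-Length with the bytes it actually sends.
import socket


def declared_vs_received(host, port=80, path="/"):
    """Return (declared_length, received_length) for a plain-HTTP GET."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(
            f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
            f"Connection: close\r\n\r\n".encode()
        )
        raw = b""
        while chunk := s.recv(4096):  # read until the server closes
            raw += chunk
    headers, _, body = raw.partition(b"\r\n\r\n")
    declared = None
    for line in headers.split(b"\r\n")[1:]:  # skip the status line
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            declared = int(value.strip())
    return declared, len(body)


# Example usage (placeholder host) -- a mismatch here reproduces the 803:
# declared, received = declared_vs_received("www.example.com")
# if declared is not None and declared != received:
#     print(f"Mismatch: server promised {declared}, sent {received}")
```

If the two numbers differ on repeated runs, the problem is reproducible outside of Roger, which rules out the crawler itself and points your tech at the server stack.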
The Moz Resource Guide on HTTP Errors in Crawl Reports is also worth a read in case Roger encounters any other infrequently seen errors.
Hope that helps,
Sha
Related Questions
-
Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
From what I can tell, SEO experts recommend setting blog category and tag pages (i.e. "http://site.com/blog/tag/some-product") as "noindex, follow" in order to keep the page quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent this?
Moz Pro | | NichGunn0 -
Crawl Diagnostics - 350 Critical errors? But I used rel-canonical links
Hello Mozzers, We launched a new website on Monday and had our first Moz crawl on 01/07/15, which came back with 350+ critical errors. The majority of these were for duplicate content. We had a situation like this for each gym class: GLOBAL YOGA CLASS (canonical link / master record) YOGA CLASS BROMLEY YOGA CLASS OXFORD YOGA CLASS GLASGOW etc. All of these local Yoga pages had the canonical link deployed. So why is this regarded as an error by Moz? Should I have added a robots NOINDEX instead? Would that help? Very scared our rankings are gonna get affected 😞 Ben
Moz Pro | | Bendall0 -
Weird 404 Errors
Hi All, Although my Moz error scans have been pretty clean for a while, a law firm site I manage recently cropped up with 80+ 404 errors since the last scan. I'm a little baffled as the url it shows being returned looks like this: http://www.yoursite.com/ http://www.yoursite.com/resource.html For some reason it seems to be initiating a query to call the root domain twice before the actual resource. I installed ModX Revolution 2.2.6-PL on the site in question, and am hoping a canonical plugin I just started using will take care of these. Has this happened to anyone else? What did you do to solve the issue? Thanks for your time and any tips!
Moz Pro | | G2W0 -
404: Error - MBP Ninja Affiliate
Hello, I use the plugin MBP Ninja Affiliate to redirect links. I ran Crawl Diagnostics and it reports a 404 error, but the link is working; it exists. Why does Crawl Diagnostics report a 404 error?
Moz Pro | | antoniojunior0 -
90% of the sites we design are in WordPress, and the report brings up "duplicate" content errors. I presume this is down to a canonical error?
We are looking at getting the Agency version of SEOmoz and are based in the UK. Could you please tell me the best way to correct this issue, as it appears to be a problem with all our clients' websites? An example would be www.fsgenergy.co.uk. Would you also be able to suggest the best SEO plugin to use with SEOmoz? Many thanks Paul
Moz Pro | | KloodLtd1 -
How to Fix the Errors with Duplicate Title or Content?
The latest Crawl Diagnostics run found 160 errors on my site.
The errors say that the same content or title is used on two different pages: on both my root domain (han-mark.com) and the www subdomain. Does it matter (with or without www)? How serious is that error? Do I need to fix all the errors (and the hundreds of warnings too)? What's the best practice? Is there any guide on how to do it, or tools for doing it the fast way? Viggo Joergensen
Moz Pro | | hanmark0 -
It won't let me print the second or third pages of site errors.
When I export the site errors page to PDF format, it won't let me print the second or any of the other pages containing content about my site. Does anyone know why?
Moz Pro | | ibex0 -
What causes Crawl Diagnostics Processing Errors in seomoz campaign?
I'm getting the following error when seomoz tries to spider my site: First Crawl in Progress! Processing Issues for 671 pages Started: Apr. 23rd, 2011
Here is the robots.txt data from the site (disallow ALL BOTS for image directories and JPEG files):
User-agent: *
Disallow: /stats/
Disallow: /images/
Disallow: /newspictures/
Disallow: /pdfs/
Disallow: /propbig/
Disallow: /propsmall/
Disallow: /*.jpg$
Any ideas on how to get around this would be appreciated 🙂
Moz Pro | | cmaddison0