Site crawl errors - download list of all URLs
-
Hi,
I've provided my client's developers with the PDF reports of crawl errors, but these seem to miss some URLs.
I see there are lots of CSV file download/email options.
Will the "email CSV" button send a report of everything, listing all the URLs that are missing from the PDFs? If not, will the more specific CSV reports?
It would be good if I could press one button and get all issues listed with all URLs.
It does look like this happens, but I just want to confirm the best way ASAP, since I need to provide these reports urgently. Any guidance much appreciated!
All Best,
Dan
-
You are welcome! I know the "manual" method often takes the longest, but in reality, it is often the most accurate. Hope this helps!
-
Thanks David!
-
I have tried both options before, and tend to find the CSV document the more reliable of the two. I have seen the same thing as you, where the PDF seems to leave out information. Unfortunately, a manual check would be required to make sure that all URLs are included.
I always have the team download the CSVs first, double-check them for errors, then email them from our company email address. It makes things a bit more personal that way, rather than having them emailed directly from analytics.
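Once the CSVs are downloaded, part of that double-check can be automated by diffing the URL sets. A minimal sketch, assuming each export is a CSV with a header row containing a URL column (the filenames and the column name here are hypothetical, not taken from Moz's actual export format):

```python
import csv

def urls_in_csv(path: str, column: str = "URL") -> set:
    """Collect the set of URLs listed in one crawl-report CSV.

    Assumes the file has a header row with a URL column; adjust
    `column` to match whatever header your export actually uses.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip() for row in csv.DictReader(f) if row.get(column)}

# Example usage (hypothetical filenames): compare the full CSV export
# against a list of the URLs that made it into the PDF report.
#   missing = sorted(urls_in_csv("crawl_errors_full.csv")
#                    - urls_in_csv("urls_from_pdf.csv"))
# `missing` then holds every URL the CSV has but the PDF left out.
```

Set difference makes the comparison order-independent, so it doesn't matter if the two reports list URLs in different orders.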
Related Questions
-
Moz Pro OnDemand Crawl fails on WordPress site
Hello, I just can't seem to understand why OnDemand Crawl fails on further attempts, crawling only 4 pages out of 68. I am using WordPress with the Divi theme, on a LiteSpeed server. Robots.txt allows rogerbot; I just can't seem to find the issue.
Moz Bar | ChrisSanClaire
-
Link Explorer Query Field Obscures Parts of Long URL
Hi, when using Link Explorer to query URLs, entering a long URL causes the field to obscure the full URL, even though the field width should be able to comfortably fit the whole URL. This started happening a few weeks ago. Hope this can be fixed soon. Thanks!
Moz Bar | JacobMojiwat
-
Different Errors Running 2 Crawls on Effectively the Same Setup
Our developers are moving away from utilising robots.txt files due to security risks, so we have been in the process of removing them from sites. However, we and our clients still want to run Moz crawl reports, as they can highlight useful information. The two sites in question sit on the same server with the same settings (in fact running on the same Magento install). We do not have robots.txt files present (they 404), and as per Chiaryn's response here https://mza.bundledseo.com/community/q/without-robots-txt-no-crawling this should work fine. However, for www.iconiclights.co.uk we got: 902 : Network errors prevented crawler from contacting server for page. While for www.valuelights.co.uk we got: 612 : Page banned by error response for robots.txt. These crawls were both run recently, and there was no robots.txt present; not to mention, they are on the same setup/server as mentioned. We have just tested this by uploading a blank robots.txt file to see if it changed anything, but we get exactly the same errors. I have had a look, but can't find anything that really matches this on here. Help would really be appreciated! Thanks!
Moz Bar | I-COM
-
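On the 612 error above: a robots.txt that 404s is normally treated by well-behaved crawlers as "allow everything", which is why a missing file should not block a crawl. A quick way to sanity-check what a given user agent is allowed to fetch is Python's standard urllib.robotparser. A minimal sketch; the Disallow rule here is a made-up example, not taken from either site:

```python
from urllib import robotparser

# Parse an in-memory robots.txt and ask what a specific crawler may fetch.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

print(rp.can_fetch("rogerbot", "http://www.valuelights.co.uk/"))        # -> True
print(rp.can_fetch("rogerbot", "http://www.valuelights.co.uk/admin/"))  # -> False
```

To test the live file instead, pass the robots.txt URL to RobotFileParser and call read() before can_fetch().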
Why RogerBot can't crawl site https://unplag.com
Hello, please help me to solve this problem. The On-Page Grader and Crawl Test are not working for the Unplag.com website. Both say that they can't access the URL. Yes, I've tried different variants like unplag.com and http://unplag.com. One more thing: RogerBot was disallowed in the robots.txt file. I deleted that rule from the file a week ago, so maybe the Moz index hasn't been renewed yet.
Moz Bar | Targeras
-
URL inaccessible for On-Page Grader
I am trying to use the On-Page Grader, however it is not working with my website. The URL is as follows: https://capbeast.com. I have been reading older posts to see whether HTTPS is now supported, but have not found anything. I know there is no robots.txt issue, as I am able to run the Crawl Test on our website fine. Is the issue on my end, in regards to configuration, or is it due to DDoS attacks? Any help would be appreciated. Thanks!
Moz Bar | MisterStitches
-
Moz Crawl Test Trying to Crawl Contact Form Submit Button Location?
Moz Crawl Test for some reason is trying to crawl a contact form widget's submit location. My guess is that the crawl cannot fill in the required fields. I believe this because it's only kicking back these errors on the pages that have a contact form widget:
http://crawfordspest.com/pest-control/[email protected] 1412553693 404 : Received 404 (Not Found) error response for page. Error attempting to request page; see title for details. 404
http://crawfordspest.com/tree-services/[email protected] 1412553693 404 : Received 404 (Not Found) error response for page. Error attempting to request page; see title for details. 404
http://crawfordspest.com/lawn-care/[email protected] 1412553693 404 : Received 404 (Not Found) error response for page. Error attempting to request page; see title for details. 404
http://crawfordspest.com/specialty-services/[email protected] 1412553693 404 : Received 404 (Not Found) error response for page. Error attempting to request page; see title for details. 404
Can you shed any insight into this? I'm a bit worried that I'll have to completely gut the contact form, which was one of the major features my client requested, or, in a worse scenario, make all fields not required, which would let so much spam in. I have never seen anything like this. I've learned from Moz that major errors like 404s damage Domain Authority greatly. I've fixed 404 issues on newly acquired clients' existing sites, tracked them through Moz, and watched the Domain Authority fly up once these errors were fixed, along with fixing what Google Webmaster Tools reports back. Let me know if you have any expertise on this matter.
Moz Bar | Funk-Creative-Media
-
Crawl Test cannot be seen on my PC. Using Windows 8.
I received and downloaded my Crawl Test. When I try to open it, my PC says "This app can't run on your PC. To find a version for your PC, check with the software publisher". I'm running Windows 8. Can I view my Crawl Test on my PC? Is there a workaround for this issue? Update: I can apparently open my Crawl Test and view it as an Excel spreadsheet, but when I download it and choose Save As, it saves as an MS-DOS Application. This is my very first Crawl Test, and I am not sure if I am doing everything right.
Moz Bar | jameskoby01
-
Ajax #! URL support?
Hi Moz, my site currently follows the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en. Since pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?escaped_fragment to cached versions of the Ajax-generated content. For example, if the bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access this page: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 In that case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine. However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you look to see if it is a #! and then spider the URL with it replaced by ?escaped_fragment; our server does the rest. If this is something Moz plans on supporting in the future, I would love to know. If there is other information, that would be great too. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin
Updates: I am editing my question because it won't let me respond to my own question. It says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago. Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 And when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 The server does the rest; it is simply a matter of telling Roger to recognize the #! format and replace it with ?escaped_fragment. Though I obviously do not know how Roger is coded, it is a simple string replacement. Thanks.
Moz Bar | oneactlife
-
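The string replacement Dustin describes can indeed be sketched in a few lines. One note: the Google AJAX crawling scheme linked above (since deprecated by Google) names the query parameter `_escaped_fragment_`, with underscores; this sketch follows the spec's naming rather than the shorthand used in the post:

```python
from urllib.parse import quote

def escaped_fragment_url(url: str) -> str:
    """Rewrite a hash-bang URL to its crawlable equivalent.

    Per Google's AJAX crawling scheme, a URL like
      http://example.com/#!/some/path
    maps to
      http://example.com/?_escaped_fragment_=/some/path
    URLs without a #! are returned unchanged.
    """
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no hash-bang, nothing to rewrite
    joiner = "&" if "?" in base else "?"
    return f"{base}{joiner}_escaped_fragment_={quote(fragment, safe='/')}"

print(escaped_fragment_url("http://www.discoverymap.com/#!/California/Map-of-Carmel/73"))
# -> http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
```

The `quote` call percent-encodes any characters in the fragment that are not safe in a query string, which the spec also requires.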