How much time should I wait between Crawl Tests?
-
Hello!
I ask because it has happened before (and again this morning) that, after running a crawl test and fixing my site based on the errors Moz's crawl test found, a re-crawl still reports the same errors, even though they're fixed.
Typically I re-crawl 6 hours later or the next day and find the same errors. I know they're fixed because a couple of days go by and Moz finally gets it right.
I had understood that the crawl test was an "on-demand" crawl of sorts, granted with a limit of 2 per day. But it seems that if you re-crawl your site within a day, it yields the same results? It's frustrating.
Is this correct?
Thank you!
-
Ahoy!
Erin here. I'm a lead on the Help Team here at Moz! Sorry for the confusion! Our Crawl Test data is actually cached for 48 hours, which is why you're seeing the same issues when you re-crawl 6 hours later or the same day after making fixes. To get the most accurate results, you'll want to wait at least 48 hours between Crawl Tests.
I hope this helps, and if you have any other questions, feel free to shoot us an email at [email protected]!
Yours til the Chocolate Chips,
Erin
-
Hi Erin, thank you for your response. OK, that makes sense.
I appreciate it!
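Since Crawl Test results are cached for 48 hours, the earliest useful re-crawl time is simply the last crawl's timestamp plus the cache window. Here's a minimal sketch of that arithmetic; the 48-hour figure comes from the answer above, while the function name and example timestamps are just illustrative.

```python
from datetime import datetime, timedelta

# Crawl Test data is cached for 48 hours, so re-running the test sooner
# just returns the cached report rather than fresh results.
CACHE_TTL = timedelta(hours=48)

def next_fresh_crawl(last_crawl: datetime) -> datetime:
    """Earliest time a new Crawl Test should return non-cached results."""
    return last_crawl + CACHE_TTL

# Example: a crawl run at 9:00 AM on Jan 1 won't show fresh data
# until 9:00 AM on Jan 3.
last_crawl = datetime(2024, 1, 1, 9, 0)
print(next_fresh_crawl(last_crawl))  # 2024-01-03 09:00:00
```

In practice this just means: make your fixes, note when you last ran the test, and hold off on the re-crawl until two full days have passed.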