Site Crawl Status code 430
-
Hello,
In the site crawl report we have a few pages that are status 430 - but that's not a valid HTTP status code. What does this mean / refer to?
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors
If I visit the URL from the report, I get a 404 response code. Is this a bug in the site crawl report?
Thanks,
Ian.
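For anyone who wants to double-check what a reported URL actually returns, here is a minimal Python sketch using only the standard library. The user-agent string is just an assumption for illustration; Moz's crawler may send different headers, and the response you see can differ from what rogerbot saw at crawl time.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def get_status(url, user_agent="rogerbot"):
    """Return the HTTP status code for a URL.

    urllib raises HTTPError for 4xx/5xx responses, so we catch it
    and return the code instead.
    """
    req = Request(url, headers={"User-Agent": user_agent})
    try:
        return urlopen(req, timeout=10).status
    except HTTPError as e:
        return e.code
```

Running this against the URLs flagged as 430 would show whether they currently return 404, as described above, or something else.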
-
Which, of course, you can't do in Shopify, since it doesn't let you edit robots.txt.
Maybe we should collectively push Shopify to implement this by default.
-
It's all in this help document:
https://mza.bundledseo.com/help/moz-procedures/crawlers/rogerbot
"Crawl Delay To Slow Down Rogerbot
We want to crawl your site as fast as we can, so we can complete a crawl in good time, without causing issues for your human visitors.
If you want to slow rogerbot down, you can use the Crawl Delay directive. The following directive would only allow rogerbot to access your site once every 10 seconds:
User-agent: rogerbot
Crawl-delay: 10"
So you'd put that rule in your site's robots.txt file.
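Putting that together, a complete robots.txt served at the root of the domain might look like this. The 10-second value is just the example from the help doc; tune it to your site, and note that Crawl-delay is a non-standard directive that not every crawler honours.

```text
# Slow rogerbot down to one request every 10 seconds
User-agent: rogerbot
Crawl-delay: 10

# Leave all other crawlers unrestricted
User-agent: *
Disallow:
```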
-
This is happening to a client of mine too. Is there a way to set my regular Moz Pro account to crawl the site slower?
-
This is a common issue with Shopify hosted stores, see this post:
It seems to be related to crawling speed. If a bot crawls your site too fast, you'll get 430s.
It may also be related to the proposed, 'additional' status code 430 documented here:
"430 Request Header Fields Too Large
This status code indicates that the server is unwilling to process the request because its header fields are too large. The request MAY be resubmitted after reducing the size of the request header fields."
I'd probably look at that Shopify thread and see if anything sounds familiar.
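If you're running your own checks against a Shopify-hosted store, a polite retry-with-backoff along these lines avoids hammering the rate limiter. This is a rough sketch under the thread's assumption that 430 means "slow down": `fetch` is any callable of your own that takes a URL and returns a status code.

```python
import time

def fetch_with_backoff(fetch, url, max_retries=4, base_delay=1.0):
    """Call fetch(url), backing off exponentially while it returns 430.

    430 is treated as Shopify's non-standard rate-limit signal,
    as suggested in the thread above.
    """
    status = None
    for attempt in range(max_retries):
        status = fetch(url)
        if status != 430:
            return status
        # Exponential backoff: base_delay, 2x, 4x, ... before retrying.
        time.sleep(base_delay * (2 ** attempt))
    return status
```

This also helps separate false positives from real errors: a URL that returns 430 once but 200 on a slower retry was almost certainly rate-limited, not broken.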
-
@Angler - yeah, thought the same - but why not log it as a 403 in the report? The site is hosted on Shopify, so I don't get access to the logs, unfortunately.
Was wondering if it was related to rate limiting, as in a few cases it's a false positive and the page loads fine.
Have emailed Eli - thanks,
Best.
Ian.
-
Hey Ian,
Thanks for reaching out to us!
Would you be able to contact us at [email protected] so that we can take a closer look at your Campaign?
Looking forward to hearing from you,
Eli