Moz & Xenu Link Sleuth unable to crawl a website (403 error)
-
It could be that I'm missing something really obvious, but we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none that appear to describe the same problem as this.)
Moz Result
Title: 403 Error
Meta Description: 403 Forbidden
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
Xenu Link Sleuth Result
Broken links, ordered by link:
error code: 403 (forbidden request), linked from page(s):
Thanks in advance!
-
Hey Liam,
Thanks for following up. Unfortunately, we use thousands of dynamic IPs through Amazon Web Services to run our crawler and the IP would change from crawl to crawl. We don't even have a set range for the IPs we use through AWS.
As for throttling, we don't have a set throttle. We try to space out the server hits enough to not bring down the server, but then hit the server as often as necessary in order to crawl the full site or crawl limit in a reasonable amount of time. We try to find a balance between hitting the site too hard and having extremely long crawl times. If the devs are worried about how often we hit the server, they can add a crawl delay of 10 to the robots.txt to throttle the crawler. We will respect that delay.
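For reference, the crawl delay Chiaryn mentions would be set in robots.txt roughly like this (a minimal sketch; `rogerbot` is the user-agent token Moz's crawler uses, and the 10-second value is the one suggested above):

```
User-agent: rogerbot
Crawl-delay: 10
```

Note that `Crawl-delay` is a non-standard directive; some crawlers (including Googlebot) ignore it, but this thread confirms Moz's crawler respects it.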
If the devs use Moz, as well, they would also be getting a 403 on their crawl because the server is blocking our user agent specifically. The server would give the same status code regardless of who has set up the campaign.
I'm sorry this information isn't more specific. Please let me know if you need any other assistance.
Chiaryn
-
Hi Chiaryn
The saga continues... This is the response my client got back from the developers. Please could you let me have the answers to their two questions?
Apparently, as part of their 'SAF' (?) protocols, if the IT director sees a big spike in 3rd-party products trawling the site, he will block them! They did say that they use Moz too. What they've asked me to get from Moz is:
- Moz IP address/range
- Level of throttling they will use
I'd question why they need these answers if THEY USE MOZ themselves, but if I go back with that I'll just be going around in circles. Any chance of letting me know the answer(s)?
Thanks in advance.
Liam
-
Awesome - thank you.
Kind Regards
Liam
-
Hey There,
The robots.txt shouldn't really affect 403s; you would actually get a "blocked by robots.txt" error if that was the cause. Your server is basically telling us that we are not authorized to access your site. I agree with Mat that we are most likely being blocked in the htaccess file. It may be that your server is flagging our crawler and Xenu's crawler as troll crawlers or something along those lines. I ran a test on your URL using a non-existent crawler, Rogerbot with a capital R, and got a 200 status code back but when I run the test with our real crawler, rogerbot with a lowercase r, I get the 403 error (http://screencast.com/t/Sv9cozvY2f01). This tells me that the server is specifically blocking our crawler, but not all crawlers in general.
I hope this helps. Let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
Hi Mat
Thanks for the reply - robots.txt file is as follows:
## The following are infinitely deep trees
User-agent: *
Disallow: /cgi-bin
Disallow: /cms/events
Disallow: /cms/latest
Disallow: /cms/cookieprivacy
Disallow: /cms/help
Disallow: /site/services/megamenu/
Disallow: /site/mobile/
I can't get access to the .htaccess file at present (we're not the developers). Anyone else have any thoughts? Weirdly, I can get Screaming Frog info back on the site :-/
-
403s are tricky to diagnose because they, by their very nature, don't tell you much. They're sort of the server equivalent of just shouting "NO!".
You say Moz & Xenu are receiving the 403. I assume the site loads properly in a browser?
I'd start by looking at the .htaccess file. Any odd deny statements in there? It could be that an IP range or user agent is blocked; some people like to block common crawlers (not calling Roger names there). Check the robots.txt whilst you're there, although that shouldn't really return a 403.
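For anyone digging through the .htaccess, a user-agent block there often looks something like this (purely illustrative; the actual rules on this server are unknown). Note that Apache's `SetEnvIf` is case-sensitive, which would explain a block that catches "rogerbot" but lets "Rogerbot" through:

```
SetEnvIf User-Agent "rogerbot" blocked_bot
SetEnvIf User-Agent "Xenu" blocked_bot
<Limit GET POST HEAD>
    Order Allow,Deny
    Allow from all
    Deny from env=blocked_bot
</Limit>
```

If you find something like this, whitelisting or removing the relevant line should clear the 403 for the crawler without touching anything else.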