Crawl Diagnostics 403 on home page...
-
In Crawl Diagnostics it says oursite.com/ has a 403. It doesn't say what's causing it, but it mentions there is no robots.txt. There is a robots.txt file, and I see no problems with it. How can I find out more about this error?
-
Hi Dana,
Thanks for writing in. A robots.txt file would not cause a 403 error. That type of error comes from the way the server responds to our crawler: essentially, the site's server is telling our crawler that it is not allowed to access the site. Here is a resource that explains the 403 HTTP status code pretty thoroughly: http://pcsupport.about.com/od/findbyerrormessage/a/403error.htm
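If you want to see this behavior for yourself, here is a minimal sketch using only Python's standard library (the blocked user-agent string and the toy server are made up for illustration; this is not how any particular host is actually configured). It shows how a server can return 403 to one crawler's User-Agent while serving 200 to everyone else:

```python
import http.server
import threading
import urllib.error
import urllib.request

class UABlockingHandler(http.server.BaseHTTPRequestHandler):
    """Toy server: sends 403 Forbidden to a blocked crawler, 200 to everyone else."""
    BLOCKED_AGENTS = ("rogerbot",)  # hypothetical block list

    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if any(bot in ua for bot in self.BLOCKED_AGENTS):
            self.send_response(403)
        else:
            self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

def fetch_status(url, user_agent):
    """Return the HTTP status code the server sends for this User-Agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

server = http.server.HTTPServer(("127.0.0.1", 0), UABlockingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

browser_status = fetch_status(url, "Mozilla/5.0")   # 200: ordinary visitor
crawler_status = fetch_status(url, "rogerbot/1.2")  # 403: blocked crawler
print(browser_status, crawler_status)
server.shutdown()
```

This is the same symptom described above: the site looks fine in a browser, but a specific crawler gets turned away at the door.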
I looked at both of the campaigns on your account and I am not seeing a 403 error for either site, though I do see a couple of 404 Page Not Found errors on one of the campaigns, which is a different issue.
If you are still seeing the 403 error on one of your crawls, you will need to have your webmaster update the server to allow rogerbot to access the site.
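One common way to do that, if the server is Apache, is to let the crawler's User-Agent through while keeping any existing blocks in place. This is only a sketch assuming Apache 2.2-style `Order`/`Allow`/`Deny` syntax in `.htaccess`; the IP range shown is hypothetical:

```apache
# Hypothetical Apache 2.2-style rules: keep an existing IP block,
# but explicitly allow requests whose User-Agent matches rogerbot.
SetEnvIfNoCase User-Agent "rogerbot" allowed_bot
Order Allow,Deny
Allow from all
Deny from 203.0.113.0/24
Allow from env=allowed_bot
```

Your webmaster should adapt this to the server's actual configuration rather than pasting it verbatim.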
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
-
Okay, so I couldn't find this thread and started a new one. Sorry...
... The problem persists.
RECAP
I have two blocks in my .htaccess; both are for amazonaws.com.
I have gone over our server block logs and see only Amazon addresses and bot names.
I did a Fetch as Google in Webmaster Tools, and fetch it did. Success!
Why isn't this crawler able to access the site? Many other bots are crawling right now.
Why can I use the SEOmoz On-Page feature to crawl a single page when the automatic crawler won't access the site? I just took a break from typing this to try On-Page on our robots.txt, and it worked fine. I used the keyword "Disallow" and it gave me a C. =0)
... now if we could just crawl the rest of the site...
Any help on this would be greatly appreciated.
-
I think I do. Just a few minutes ago I went through a 403 problem reported by another site that was trying to access an HTML file for verification. Apparently they were connecting from an IP that's blocked by our .htaccess. I removed the blocks, told them to try again, and it worked with no problem. I see that SEOmoz has only crawled one page. Off to see if I can trigger a re-crawl now...
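For context, the kind of .htaccess IP block described above typically looks something like this. The address ranges here are hypothetical, and this assumes Apache 2.2-style directives; any crawler hosted inside a denied range (for example, on AWS) would receive a 403:

```apache
# Return 403 Forbidden to requests from these (hypothetical) address ranges.
Order Allow,Deny
Allow from all
Deny from 203.0.113.0/24
Deny from 198.51.100.0/24
```

Removing or narrowing a rule like this is exactly the fix that made the verification fetch succeed.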
-
Hmm... not sure why this is happening. Maybe add this to the top of your robots.txt and see if it's fixed by next week; it certainly won't hurt anything:
User-agent: *
Allow: /
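As a quick sanity check (a sketch using only Python's standard library), you can feed a robots.txt into `urllib.robotparser` and ask whether a given user agent may fetch a URL. A blanket Allow like the one suggested above permits every crawler to fetch every path:

```python
from urllib import robotparser

# Parse the suggested robots.txt directly from a list of lines.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /",
])

# Every crawler may fetch every path under this policy.
print(rp.can_fetch("rogerbot", "/"))          # True
print(rp.can_fetch("rogerbot", "/any/page"))  # True
```

If this reports True but the crawler still gets a 403, the problem is at the server level (permissions or IP/User-Agent blocks), not in robots.txt.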
-
No problem. Looking at my Google WM Tools, crawl stats don't show any errors.
Thanks
User-Agent: *
Disallow: /*?zenid=
Disallow: /editors/
Disallow: /email/
Disallow: /googlecheckout/
Disallow: /includes/
Disallow: /js/
Disallow: /manuals/
-
Oh, it's only in SEOmoz's Crawl Diagnostics that you're seeing this error. That explains why robots.txt could be affecting it. I misread this earlier and thought you were finding the 403 on your own in-browser.
Can you paste the robots.txt file in here so we can see it? I would imagine it has everything to do with it, now that I've correctly read your post. My apologies.
-
Apache.
-
A 403 is a Forbidden code, usually pertaining to security and permissions.
Are you running your server in an Apache or IIS environment? Robots.txt shouldn't affect a site's visibility to the public; it only talks to site crawlers.