Sorry, but that URL is inaccessible?
-
Hi,
I am trying to grade some pages and keywords using the "On-page grader" tool but for each URL that I try, the tool returns me a "Sorry, but that URL is inaccessible". The thing is that I have already used previously and without any problem some of these URLs. In fact, I have just realized that, while the same URL (for example: www.lacasadelaaldea.com) works in the "On-page optimization tool", it doesn't in the "On-page grader" right now.
I looked to see whether anyone else has experienced the same issue and found some other threads talking about it... so I checked with my hosting provider that there is no firewall or anything else causing this problem, but they can't find anything. How do you make the call to the server? What could be happening?
Thanks in advance,
Juan
-
Wow, I feel a little bit embarrassed to share with you what really happened...
I was going to take a screenshot, and then I realized that I had been writing the URL where the keywords should go, and vice versa, so it was all my fault. I just got back from my summer vacation yesterday, so I guess I am a little rusty and my mind is still adjusting.
Sorry for the inconvenience! And thanks a lot to both of you anyway.
-
Hi Juan
We are not seeing any issues with the on-page grader. Can you share a screenshot of the error message?
-
Thanks for your quick reply Patrick. I hadn't thought about that so I'm going to check it!
-
Hi there
I would review your robots.txt and make sure that you're not blocking crawlers from reaching important information that may be contained in some of the extensions or directories you have blocked. You seem to have quite a few, and since I don't know how your resources are set up (and I'm only skimming), that would be an area to look. This could also apply to the notifications about blocked JS and CSS that Google has sent out recently.
Hope this helps! Good luck!
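For illustration only, since I don't know your directory layout (the paths below are hypothetical), a robots.txt that keeps a directory blocked while explicitly allowing JS and CSS files might look like:

```
User-agent: *
# Hypothetical blocked directory
Disallow: /private/
# Let crawlers fetch scripts and stylesheets anywhere on the site
Allow: /*.js$
Allow: /*.css$
```

Note that wildcard (`*`) and end-of-string (`$`) matching are honored by Googlebot, but support isn't guaranteed for every crawler.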
Related Questions
-
How do I disallow crawl on a directory when it's a prefix to my site's URL?
I am trying to disallow our media repository (hosted elsewhere, but it appears as a directory on our site) from being crawled by robots, but it is not a subdirectory of the site; it's a prefix. So I need to disallow: mediabank.mywebsite.org, not: mysite.org/mediabank. What would I need to put in my robots.txt and/or the other host's robots.txt to make this happen? Thanks!
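A note beyond what the question states: robots.txt rules are scoped to the host that serves the file, so nothing placed in mysite.org/robots.txt can block mediabank.mywebsite.org. Blocking the subdomain would need a robots.txt served at that host's own root, along these lines:

```
# Served at http://mediabank.mywebsite.org/robots.txt
User-agent: *
Disallow: /
```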
Moz Bar | Simon-Plan
-
On-Page Grader URL is inaccessible
Hi everybody. I'm trying to use the On-page Grader for https://www.upscaledinnerclub.com and get "Sorry, but that URL is inaccessible." The robots.txt is empty, and another thread on Moz mentioned a DNS check - that's all good. So I can't figure out why this is happening. I am also trying the same for another website, https://www.regexseo.com - same story. The common thing is that they are both on Google App Engine, and at first I thought that was the problem. But then I checked this one: https://www.logitinc.com/ and it's working, even though that website is on GAE as well. None of these websites have a robots.txt or any differences in setup or settings. Any thoughts?
Moz Bar | DmitriiK
-
MOZ Page Grader - Sorry, but that URL is inaccessible?
Hi, I am trying to use the Moz Page Grader on this page - https://www.respuestasparaelhombre.com/datos-sobre-disfunción-eréctil-preguntas - but it is saying "Sorry, but that URL is inaccessible." Any ideas? Thanks
Moz Bar | Jason_Marsh123
-
URLS appearing twice in Moz crawl
I have asked this question before and got a Moz response, to which I replied, but no reply after that. Hi, we have noticed in our Moz crawl that URLs are appearing twice, i.e. URLs like this: http://www.recyclingbins.co.uk/about/ and www.recyclingbins.co.uk/about/ I thought it may be a possible rel=canonical issue, as I can find the URLs but no linking URLs to the pages. Does anyone have any ideas? Thank you, Jon. I did the crawl test and they were not there.
Moz Bar | imrubbish
-
Moz Crawler URL paramaters & duplicate content
Hi all, this is my first post on Moz Q&A 🙂 Questions: Does the Moz Crawler take into account rel="canonical" for search results pages with sorting / filtering URL parameters? How much time does it take for an issue to disappear from the issues list after it's been corrected? Does it come up in the next weekly report? I'm asking because the crawler is reporting 50k+ pages crawled, when in reality this number should be closer to 1000. All pages with query parameters have the correct canonical tag pointing to the root URL, so I'm wondering whether I need to noindex the other pages for the crawler to report correct data: Original (canonical URL): DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas Filter active URL: DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas&booking_date=&booking_days=1&booking_persons=1&priceFilter%5B%5D=0%2C500&includedPriceFilter%5B%5D=drinks-soft Also, if noindex is the only solution, will it impact the ranking of the pages involved? Note: Google and Bing are semi-successful in reporting index page count, each reporting around 2.5k result pages when using the site:DOMAIN.com query. The rel canonical tag was missing for a short period of time about 4 weeks ago, but since fixing the issue these pages still haven't been deindexed. Appreciate any suggestions regarding Moz Crawler & Google / Bing index count!
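For reference, the setup the question describes, with each filtered variant pointing its canonical at the base search URL, would look something like this in the page head (using the thread's own example URL); the noindex line is the optional addition being asked about:

```html
<!-- Emitted on the filtered/parameterized variant of the search page -->
<link rel="canonical" href="https://DOMAIN.COM/charters/search/mx/BS?search_location=cabo-san-lucas">
<!-- Optional, and worth weighing carefully: noindex combined with a
     canonical sends mixed signals, since a noindexed page's signals
     may not consolidate to the canonical target -->
<meta name="robots" content="noindex, follow">
```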
Moz Bar | Vukan_Simic
-
Ajax #! URL support?
Hi Moz, My site is currently following the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en Basically, since pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?escaped_fragment to cached versions of the Ajax-generated content. For example, if the bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access this page: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 in which case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine. However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you look to see if it is a #! and then try to spider the URL with ?escaped_fragment substituted. Our server does the rest. If this is something Moz plans on supporting in the future, I would love to know. If there is other information, that would be great. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin
Updates: I am editing my question because it won't let me respond to my own question. It says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago? Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 And when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 The server does the rest; it is simply a matter of telling Roger to recognize the #! format and replace it with ?escaped_fragment. I obviously do not know how Roger is coded, but it is a simple string replacement. Thanks.
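The string replacement described above really is mechanical; a minimal sketch in Python follows. One caveat: Google's (now-deprecated) AJAX crawling scheme names the parameter `_escaped_fragment_`, with surrounding underscores, though the URLs in this thread write it without them; the sketch follows the spec form.

```python
from urllib.parse import quote

def escaped_fragment_url(url):
    """Map an AJAX-crawling '#!' URL to its '_escaped_fragment_' form.

    Under Google's AJAX crawling scheme, a crawler replaces '#!fragment'
    with '?_escaped_fragment_=fragment' (percent-encoding the fragment)
    and fetches that URL; the server then returns the cached HTML.
    """
    if "#!" not in url:
        return url  # nothing to rewrite
    base, fragment = url.split("#!", 1)
    # Append with '&' if the base URL already carries a query string
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="/")
```

Applied to the thread's example, `http://www.discoverymap.com/#!/California/Map-of-Carmel/73` becomes `http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73`.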
Moz Bar | oneactlife