Unsolved Ooops. Our crawlers are unable to access that URL
-
Hello,
I have entered my site faroush.com but I got an error:
"Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct"
What is the problem? -
I'm encountering the same problem with my website CFMS Bill Status. It seems that my main website is totally inaccessible to web crawlers. I've investigated all the likely causes, such as server configuration, robots.txt restrictions, and security measures, but still haven't found a clue.
-
Have you tried the steps I suggested earlier, like checking your settings?
-
Make sure your website can be seen by everyone and isn't blocked by any security settings. Try opening your website from different devices and networks to see if it works. Also, check if your website's settings are stopping search engines from seeing it. Look for any rules that might be blocking search engines in a file called robots.txt. If you find any, make sure they're not stopping search engines from looking at your site.
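The robots.txt part of this checklist is easy to script. Here is a minimal sketch using Python's standard-library robotparser; the rules, the example.com URLs, and the rogerbot user agent (the name Moz's crawler goes by) are illustrative, so substitute your site's live /robots.txt and the pages being reported.

```python
from urllib.robotparser import RobotFileParser

def is_crawlable(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots_txt allows user_agent to fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# Illustrative rules: everything blocked except /public/.
# Paste your own site's /robots.txt here and test the flagged pages.
robots = """\
User-agent: *
Allow: /public/
Disallow: /
"""

for page in ("https://example.com/", "https://example.com/public/index.html"):
    verdict = "allowed" if is_crawlable(robots, "rogerbot", page) else "blocked"
    print(f"{page}: {verdict}")
```

If a page you expect crawlers to reach comes back "blocked" here, the robots.txt rules are the culprit.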
-
I am getting the same error on my website Apne TV.
It's been 7 days and I keep getting the same error again and again.
Related Questions
-
Unsolved Does Moz Pro include Moz Local?
My client has bought about six Moz Local accounts and is pleased with the results. We have not yet used your Moz Pro program. The client might be interested in switching to Moz Pro if those Moz Local accounts can be folded into it. Please let me know as soon as possible. Thanks!
Moz Pro | gallowaywebteam -
Unsolved Old Account Data
I've just got our account live again after the old payment card expired. Now that I'm back in the account, I don't see my old setup or any of the sites/keywords I had previously set up. Can you help?
Moz Pro | Paul_Coupe -
Identify Page Not Found Visits
Hello everyone! I have always known just enough about Google Analytics and SEO to be dangerous, but it was never a focus for me. I am now working on a project where my knowledge is limited. The scenario: the domain I am looking at serves a 404 error but keeps the URL, I guess for tracking purposes. At the same time, there is a page "Page_Not_Found" that has elevated visits. I am not sure how to tell where the visits to the PNF page are coming from, since the Previous Page is mostly identified as "(entrance)". Is the PNF page correlated to the process of serving an error page without changing the URL? Ideally, I want to identify and reduce the 404 visits. I hope I have provided clear enough information; happy to provide more as needed.
Intermediate & Advanced SEO | HankHoffmeier -
Unsolved How does Moz compile the "Important pages on your site are returning a 4xx error!" report?
I have over 200 links in this report (mostly from a staging site). I have deleted that staging site and I cannot find the reference to the other links. So my question is, where is Moz finding these links?
Moz Pro | nomad_blogger -
Paid API Access
We've been using your API on a regular basis over the past few months, but lately we started having problems accessing the data it provides. I received an empty object when I tried fetching info using the following URL: string sUrlToScan = "http://lsapi.seomoz.com/linkscape/url-metrics/"+sHost+"?AccessID=member-d73bd20330&Expires=1355756373&Signature=....";
(Please note: I removed the Signature field for security reasons.)
I would really appreciate it if you could let me know why the service has stopped working. Is it because the 'Expires' value is no longer valid? Could you please provide me with up-to-date AccessID + Expires + Signature values so I can continue using your API? Thanks!
Moz Pro | Sorezki -
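One likely culprit in the question above is the Expires parameter: it is a Unix timestamp, and 1355756373 is in the past, after which signed requests stop working. As a rough sketch of generating a fresh signed URL (this follows the HMAC-SHA1-of-"AccessID newline Expires" scheme the Linkscape API historically documented; the secret key below is a placeholder), you could do:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def moz_signature(access_id: str, secret_key: str, expires: int) -> str:
    # Signature = URL-encoded base64(HMAC-SHA1(secret, "<AccessID>\n<Expires>"))
    message = f"{access_id}\n{expires}".encode("utf-8")
    digest = hmac.new(secret_key.encode("utf-8"), message, hashlib.sha1).digest()
    return quote(base64.b64encode(digest))

# Expires must be a future Unix timestamp; once it passes, requests fail.
expires = int(time.time()) + 300
access_id = "member-d73bd20330"   # from the question above
secret_key = "YOUR-SECRET-KEY"    # placeholder: your own secret key
url = (
    "http://lsapi.seomoz.com/linkscape/url-metrics/example.com"
    f"?AccessID={access_id}&Expires={expires}"
    f"&Signature={moz_signature(access_id, secret_key, expires)}"
)
```

The signature must be regenerated whenever Expires changes, so a URL with a hard-coded Expires value from months ago will always fail.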
How to remove URLs from crawl diagnostics blocked by robots.txt
I suddenly have a huge jump in the number of errors in crawl diagnostics, and it all seems to be down to a load of URLs that should be blocked by robots.txt. These have never appeared before; how do I remove them or stop them from appearing again?
Moz Pro | SimonBond -
Dynamic URL pages in Crawl Diagnostics
The crawl diagnostic has found errors for pages that do not exist within the site. These pages do not appear in the SERPs and are seemingly dynamic URL pages. Most of the URLs are formatted like http://mysite.com/keyword,%20_keyword_,%20key_word_/, which appear to be dynamic URLs for potential search phrases within the site. The other popular variety has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter, which is only generated by a filter utility on the site. These pages account for about 90% of the 401 errors, duplicate page content/title, overly-dynamic URL, missing meta description tag, etc., and many of the same pages appear under multiple errors/warnings/notices categories. So why are these pages being pulled into the crawl test, and how do I stop it so I get a better analysis of my site via SEOmoz?
Moz Pro | Visually -
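If those filter-generated permutations shouldn't be crawled at all, one common approach (a sketch only; the patterns are illustrative and assume the crawler honors * wildcards, as googlebot and Moz's rogerbot do) is to disallow them in robots.txt:

```text
User-agent: *
# Illustrative patterns; adjust to the site's real URL structure
Disallow: /*?sort=
Disallow: /*,%20
```

Whether this is appropriate depends on whether any of those URLs should ever be indexed; blocking them in robots.txt hides them from compliant crawlers but does not remove the pages themselves.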
Why is blocking the SEOmoz crawler considered a red "error?"
Please see attached image... Y3Vay.png
Moz Pro | vkernel