Robots.txt - Allow and Disallow. Can they be the same?
-
Hi All,
I need some help on the following:
Are the following commands the same?
User-agent: *
Disallow:
or
User-agent: *
Allow: /
I'm a bit confused. I take it that the first one allows all the bots but the second one blocks all the bots.
Is that correct?
Many thanks,
Aidan
-
Hi Aidan
I'm having a similar problem on a site I'm working on. The On-Page rank checker says it "can't reach the page". I've checked everything obvious (at least I think I have!)
May I ask how you eventually resolved it?
Thanks Aidan
-
Hi
You can use this tool to make sure the crawler can see your files:
http://pro.seomoz.org/tools/crawl-test
but you have to wait for the report to arrive by email.
When you say:
"get the following msg when I try to run On Page Analysis:"
is this the tool you mean?
http://pro.seomoz.org/tools/on-page-keyword-optimization/new
To check the website you can also use this:
http://www.opensiteexplorer.org
Ciao
Maurizio
-
Hi,
Thanks for the clarification. So the Robots.txt isn't blocking anything.
Do you know why, then, I cannot use SEOmoz On-Page Analysis, and why Xenu and Screaming Frog only return 3 URLs?
I get the following msg when I try to run On Page Analysis:
"Oops! We were unable to reach the page you requested for your report. Please try again later."
Would there be something else blocking me? GWMT Parameters maybe?
-
It's a pleasure.
But I don't understand the problem. If the site has this robots.txt:
User-agent: *
Allow: /
then every crawler, SEOmoz included, can see and index all the files on the website.
Maybe the problem is something else?
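If robots.txt is wide open but a tool still can't reach the page, the block may be coming from somewhere else, such as a meta robots tag in the page's head (or an X-Robots-Tag HTTP header). A minimal diagnostic sketch using Python's standard library; the sample HTML here is hypothetical, not taken from the site in question:

```python
from html.parser import HTMLParser

class MetaRobotsFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))

# A sample page that blocks indexing despite an open robots.txt.
html = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
finder = MetaRobotsFinder()
finder.feed(html)
print(finder.directives)  # ['noindex, nofollow']
```

Running this against the HTML of the affected pages would show whether a noindex directive, rather than robots.txt, is what the crawlers are tripping over.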
Ciao
-
Thanks Maurizio,
I need to do some analysis on this site. Is there a way to set my SEO tools (Screaming Frog, SEOmoz) to ignore the robots.txt so I can do a good site audit?
Thanks again for the answers. Much appreciated
Aidan
-
Hi Aidan
User-agent: *
Disallow:
and
User-agent: *
Allow: /
are the same.
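You can verify the equivalence yourself with Python's standard-library robots.txt parser. A quick sketch (the example.com URL is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

def allows_everything(robots_txt):
    """Parse a robots.txt body and test a representative URL against '*'."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch("*", "http://example.com/any/page.html")

# "Disallow:" with an empty value blocks nothing...
disallow_nothing = "User-agent: *\nDisallow:"
# ...and "Allow: /" explicitly allows everything.
allow_all = "User-agent: *\nAllow: /"

print(allows_everything(disallow_nothing))  # True
print(allows_everything(allow_all))         # True
```

Both files parse to "crawl anything", so neither would explain a crawler being blocked.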
Ciao
Maurizio
-
Hi Maurizio,
The reason I asked is because I am working on a site and its robots.txt is:
User-agent: *
Allow: /
Why would they have this?
I can't use On-Page Analysis, and Screaming Frog only returns 3 URLs.
Thanks again,
Aidan
-
Hi
1st example:
User-agent: *
Disallow:
All user-agents can index your files.
2nd example:
User-agent: *
Disallow: /
No user-agent can index your files.
More examples here:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
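The difference between the two examples comes down to the value after `Disallow:`. A small sketch with Python's standard-library parser makes it concrete (example.com is a placeholder URL):

```python
from urllib.robotparser import RobotFileParser

def can_crawl(robots_txt, url):
    """Return whether the '*' user-agent may fetch url under this robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch("*", url)

open_rules = "User-agent: *\nDisallow:"      # empty value: nothing is blocked
closed_rules = "User-agent: *\nDisallow: /"  # root path: everything is blocked

print(can_crawl(open_rules, "http://example.com/page.html"))    # True
print(can_crawl(closed_rules, "http://example.com/page.html"))  # False
```

One character (`/`) flips the file from allowing everything to blocking everything, which is why it's worth double-checking.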
Ciao
Maurizio