Robots User-agent Query
-
Am I correct in saying that the Allow/Disallow rules are applied only to MSNBOT_Mobile?
Mobile robots.txt file:
User-agent: Googlebot-Mobile
User-agent: YahooSeeker/M1A1-R2D2
User-agent: MSNBOT_Mobile
Allow: /
Disallow: /1
Disallow: /2/
Disallow: /3
Disallow: /4/
-
Hi Thomas
Unless I'm mistaken, if you list multiple user agents before a set of rules, all of those user agents are subject to the rules.
So what you have is a list of three user agents, each allowed everything except the four specific paths you disallow.
In the end, the rules apply to all three.
Don
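One way to sanity-check this grouping behavior is with Python's built-in robots.txt parser. A minimal sketch (note one caveat: `urllib.robotparser` applies rules in file order with first match winning, unlike Google's longest-match rule, so the Disallow lines are listed before the blanket Allow here):

```python
import urllib.robotparser

# The single rule group shared by all three user agents (Disallow lines
# first, because this parser uses first-match rather than longest-match).
robots_lines = [
    "User-agent: Googlebot-Mobile",
    "User-agent: YahooSeeker/M1A1-R2D2",
    "User-agent: MSNBOT_Mobile",
    "Disallow: /1",
    "Disallow: /2/",
    "Disallow: /3",
    "Disallow: /4/",
    "Allow: /",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_lines)

# Every agent named in the group gets identical answers.
for agent in ("Googlebot-Mobile", "MSNBOT_Mobile"):
    print(agent, rp.can_fetch(agent, "/1"), rp.can_fetch(agent, "/other"))
    # → agent-name False True for both agents
```

Both agents are refused "/1" and allowed "/other", confirming that the rules belong to the whole group, not just the last-listed agent.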
Related Questions
-
I have two robots.txt pages for www and non-www version. Will that be a problem?
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
Technical SEO | ramb0
Site Redesign: 302 Query
Hi there, We'll be redesigning our website www.example.com and as such want to 302 users from www.example.com and all other pages to a new URL www.example.com/landingpage while we go through the redesign. The new landing page will have copy and a sign-up form on it, and once the redesign is completed, we plan on removing the 302 and sending all traffic back to the original URL www.example.com. I'd just like to check that a 302 is the most relevant option here? Obviously, once the redesign is completed, we'll 301 any old URLs to their new locations.
Technical SEO | Hemblem0
Website Migration Query
We are going to migrate our site but we cannot do this gradually, so before we complete the whole migration, we were thinking of launching the new site on a sub-domain and gradually redirect traffic to the sub-domain, starting with 10%, moving up steadily so that we then migrate to the new site within four/five weeks. The new site will have a new URL structure on the same domain, with a complete re-design and the IP address will be changing as well, even though the server geographical location will remain the same. a) Should we noindex the new sub-domain while the new site is on trial? b) Are there any other issues we should look out for? Thanks in Advance 🙂
Technical SEO | seoec0
Robots.txt Download vs Cache
We made an update to the robots.txt file this morning after the initial download of the robots.txt file. I then submitted the page through Fetch as Googlebot to get the changes in asap. The cache timestamp on the page now shows Sep 27, 2013 15:35:28 GMT. I believe that would put the cache timestamp at about 6 hours ago. However, the Blocked URLs tab in Google WMT shows the robots.txt last downloaded 14 hours ago, and therefore it's showing the old file. This leads me to believe that for the robots.txt the cache date and the download time are independent. Is there any way to get Google to recognize the new file other than waiting this out?
Technical SEO | Rich_A0
A query about internal linking. Have I got this right?
Hi Guys, I think this sounds like a right noobie question, but I am amongst friends so here goes. Our website is an ecommerce site selling magazines. For certain magazines, for example Vogue, we sell the UK version, plus USA, Italian, Spanish, French etc; there are basically 13 different Vogue magazines on our site. The more niche ones attract some good long-tail traffic. However, the UK version is competitive and so requires some extra oomph to get us a half-decent rank. When you search "vogue magazine subscription", for example, it's our Italian Vogue which is listed first. When I looked into this, I found that we had linked out from our UK Vogue to Italian Vogue. Could this have given the Italian Vogue a marginal boost, as it had the additional internal links? What I have now done is add to some, not all, of the variations something along the lines of "you will find the UK Vogue magazine here", where "UK Vogue Magazine" is the anchor text. Is this the right thing to do? Will this identify that the UK Vogue page is the higher-priority, or more important, page? I was also going to add to a category page a "Top 10 Womens magazines" section, and link to Vogue from there. Am I barking up the right tree? Thanks Guys Paul
Technical SEO | TheUniqueSEO0
How long does it take for traffic to bounce back from an accidental robots.txt disallow of root?
We accidentally uploaded a robots.txt disallow of root for all agents last Tuesday and did not catch the error until yesterday, so 6 days total of exposure. Organic traffic is down 20%. Google has since indexed the correct version of the robots.txt file. However, we're still seeing awful titles/descriptions in the SERPs and traffic is not coming back. GWT shows that not many pages were actually removed from the index, but we're still seeing drastic rankings decreases. Anyone been through this? Any sort of timeline for a recovery? Much appreciated!
Technical SEO | bheard0
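As an aside, the one-character difference behind this kind of outage can be demonstrated with Python's built-in parser; a small sketch (the user agent and path here are just placeholders):

```python
import urllib.robotparser

def blocks_everything(lines):
    """Return True if the given robots.txt lines refuse an ordinary page."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(lines)
    return not rp.can_fetch("Googlebot", "/any/page.html")

# "Disallow: /" blocks the whole site; an empty "Disallow:" blocks nothing.
print(blocks_everything(["User-agent: *", "Disallow: /"]))  # → True
print(blocks_everything(["User-agent: *", "Disallow:"]))    # → False
```

A check like this in a deploy pipeline can catch an accidental disallow-of-root before it ships.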
Rel canonical with index follow on query string URLs
Hi guys, Quick question regarding the rel canonical tag. I have lots of links pointing at me with query strings, and I previously used some code to determine whether query strings were in the URL and, if they were, not to index that page. If there were no query strings, then the page would be indexed and followed. I assume I can now use the rel canonical tag on each of these pages so the value goes to the proper URL minus any query string. However, do I need to have the rel canonical tag above the index, follow tag on the page? So the URL is site.com/page.html?ref=ABC, meta robots is "index, follow", and rel canonical is "site.com/page.html". Does the order of the meta robots and canonical tags matter? Thanks in advance!
Technical SEO | panini0
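For what it's worth, "the proper URL minus any query string" is mechanical to compute; a hypothetical sketch in Python (`canonical_url` is a made-up helper name, and the example URL is the one from the question):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Drop the query string and fragment, keeping scheme, host, and path."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

print(canonical_url("http://site.com/page.html?ref=ABC"))
# http://site.com/page.html
```

The same stripped value is what would go into the rel canonical tag on each query-string variant of the page.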
Query String Redirection
In PHP, I'm wanting to store a session variable based upon a link that's clicked. I'm wanting to avoid query strings on pages that have content. My current workaround is to have a link with query strings to a PHP file that does nothing but snag the variables via $_GET, store them into $_SESSION, and then redirect. For example, consider this script, which I have set up to force to a mobile version, accessed via something like a href="forcemobile.php?url=(the current filename)":

<?php
session_start();

// Location of vertstudios file on your localhost. Include trailing slash.
$loc = "http://localhost/web/vertstudios/";

// If the GET variable is not defined, this page is being accessed directly.
// In that case, force to the 404 page. Same case for if the mobile session
// variable is not defined.
if (!(isset($_GET["url"]) && isset($_SESSION["mobile"]))) {
    header("Location: http://www.vertstudios.com/404.php");
    exit();
}

// Snag the URL.
$url = $_GET["url"];

// Set the mobile session to true, and redirect to the specified URL.
$_SESSION["mobile"] = true;
header("Location: " . $loc . $url);
?>

Will this circumvent the issue caused by using query strings?
Technical SEO | JoeQuery0