Robots User-agent Query
-
Am I correct in saying that the Allow/Disallow rules below are applied only to MSNBOT_Mobile?
Mobile robots.txt file:
User-agent: Googlebot-Mobile
User-agent: YahooSeeker/M1A1-R2D2
User-agent: MSNBOT_Mobile
Allow: /
Disallow: /1
Disallow: /2/
Disallow: /3
Disallow: /4/
-
Hi Thomas,
Unless I'm mistaken, if you list multiple user agents before a set of rules, all of those user agents are subject to the rules.
So what you have is a group of three user agents, each allowed everything except the four disallowed paths.
In the end, the rules apply to all three.
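If that's not what you intended, here's a minimal sketch (reusing your paths above) of how to make the rules apply only to MSNBOT_Mobile: give it its own group, since a blank line separates groups and an empty Disallow allows everything.
User-agent: Googlebot-Mobile
User-agent: YahooSeeker/M1A1-R2D2
Disallow:

User-agent: MSNBOT_Mobile
Allow: /
Disallow: /1
Disallow: /2/
Disallow: /3
Disallow: /4/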
Don
Related Questions
-
Ranking penalty for "accordion" content -- hidden prior to user interaction
Will content inside an "accordion" module be ranked as non-hidden content? Is there an official guide by Google and other search engines addressing this? Example of an accordion element: https://v4-alpha.getbootstrap.com/components/collapse/#accordion-example Will all elements in the example above be seen and treated equally by search engines?
Technical SEO | houlihanlokey1
-
Adding multi-language sitemaps to robots.txt
I am working on a revamped multi-language site that has moved to Magento. Each language runs off the core code, so there are no sub-directories per language. The developer has created sitemaps, which have been uploaded to their respective GWT accounts. They have placed the sitemaps in new directories such as: /sitemap/uk/sitemap.xml /sitemap/de/sitemap.xml I want to add the sitemaps to robots.txt but can't figure out how to do it. Also, should they have placed the sitemaps in a single location, with the file name identifying each language: /sitemap/uk-sitemap.xml /sitemap/de-sitemap.xml What is the cleanest way of handling these sitemaps, and can/should I get them into robots.txt?
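(For reference, a minimal sketch assuming the developer's paths above and a hypothetical example.com domain: robots.txt accepts multiple Sitemap directives, each with an absolute URL, so the entries could look like this.)
Sitemap: http://www.example.com/sitemap/uk/sitemap.xml
Sitemap: http://www.example.com/sitemap/de/sitemap.xml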
Technical SEO | MickEdwards0
-
How to Remove Old Comment Page Query String URLs
I used to use a comments program on my website that created comment pages in the form of http://www.example.com/web-page.htm?comm_page=2. When I switched to a new comments program, I worried that these old comment URLs would be considered duplicate content. I created a 301 redirect that, for example, would redirect http://www.example.com/web-page.htm?comm_page=2 to http://www.example.com/web-page.htm, and disallowed them in robots.txt, which I later learned was not the thing to do. I have since removed the URLs from being disallowed in robots.txt. However, many months later, these comment page URLs keep appearing in Google's index from time to time. I use the "Remove URLs" tool in Google Webmaster Tools to remove the URLs from Google's index, but more URLs appear a few days later. How can I get rid of these URLs for good? Thanks!
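(For reference, a minimal sketch of such a 301 rule, assuming an Apache server with mod_rewrite — the question doesn't say which server is used. The condition matches any URL carrying a comm_page parameter, and the trailing ? in the substitution drops the query string from the redirect target.)
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)comm_page=
RewriteRule ^(.*)$ /$1? [R=301,L]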
Technical SEO | MrFrost0
-
Do I have my robots.txt file set up properly?
Hi, I'm just doing some SEO on my site and I am not sure if I have my robots file set up correctly. I use Joomla and my website is www.in2town.co.uk. Here is my robots file; does this look correct to you?
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /modules/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
Many thanks!
Technical SEO | ClaireH-184886
-
Canonical solution for query strings?
Greetings, The hotel company where I'm employed uses query strings in its URLs to track customers. The query strings are integrated into our property management system, and they help identify who we need to pay commissions to, so they aren't going anywhere. While I understand that session variables could have been a better solution, I sort of inherited this problem. The issue I'm running into is that Webmaster Tools picks up these query strings as actual URLs. So, for instance, www.url.com/index.php?P_SOURCE=WBFQ seems like a duplicate page of my root, and likewise for all my other pages that use our booking widget. So, is there a canonical solution to this issue, or would 301/302s be the only solution? Also, we may have 10 different but specific query strings to put into our URLs. Would the 301/302 approach cause any server issues for, say, 10 pages? So 10 pages x 10 access codes = a lot of redirects. Thanks in advance, Cyril
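(For reference, a minimal sketch of the canonical approach, reusing the example URL from the question: each page reachable with a tracking query string would carry a canonical tag in its <head> pointing at the clean URL, so every P_SOURCE variant consolidates to one page without breaking the tracking.)
<link rel="canonical" href="http://www.url.com/" />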
Technical SEO | Nola5040
-
Wordpress Robots.txt Sitemap submission?
Alright, my question comes directly from this article by SEOmoz: http://www.seomoz.org/learn-seo/r... Yes, I have submitted the sitemap to Google's and Bing's webmaster tools, and I want to add the location of our site's sitemaps. Does that mean I erase everything in the robots.txt right now and replace it with:
User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml
I ask because Wordpress comes with some default disallows like wp-admin, trackback, and plugins. I have also read this, but was wondering if it is the correct way to add a sitemap to a Wordpress robots.txt: http://www.seomoz.org/q/removing-robots-txt-on-wordpress-site-problem I am using Multisite with the Yoast plugin, so I have more than one sitemap.xml to submit. Do I erase everything in robots.txt and replace it with something like the following? That doesn't sound right to me.
User-agent: *
Disallow:
Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml
Technical SEO | joony2008
-
Confused about robots.txt
There is a lot of conflicting and/or unclear information about robots.txt out there. Somehow, I can't make out the best way to use robots.txt even after visiting the official robots.txt website. For example, I have the following format for my robots file:
User-agent: *
Disallow: javascript.js
Disallow: /images/
Disallow: /embedconfig
Disallow: /playerconfig
Disallow: /spotlightmedia
Disallow: /EventVideos
Disallow: /playEpisode
Allow: /
Sitemap: http://www.example.tv/sitemapindex.xml
Sitemap: http://www.example.tv/sitemapindex-videos.xml
Sitemap: http://www.example.tv/news-sitemap.xml
Is this correct and/or recommended? If so, then how come I see a list of over 200 or so links blocked by robots when I'm checking Google Webmaster Tools? Help someone, anyone! Can't seem to understand this robotic business! Regards,
Technical SEO | Netpace0
-
Using robots.txt to deal with duplicate content
I have 2 sites with duplicate content issues. One is a WordPress blog; the other is a store (Pinnacle Cart). I cannot edit the canonical tag on either site. In this case, should I use robots.txt to eliminate the duplicate content?
Technical SEO | bhsiao0