Robots.txt Showing in SERP Results
-
Currently doing a technical audit for a website, and when I search "site:website.com -www", the only result is website.com/robots.txt.
I was wondering if anyone else has come across this before, or what it may mean from a technical audit standpoint.
Thank you!
-
Nonsense. Run this search: https://www.google.com.au/search?q=inurl%3Arobots.txt&pws=0
Some of the first results I see are robots.txt files from major, perfectly healthy sites.
I refuse to believe that "something is seriously wrong" with any of them.
-
It's quite common (if admittedly a bit odd) for Google to index robots.txt files. Check out all of these robots.txt files:
https://www.google.com/search?q=inurl%3Arobots.txt&pws=0&gl=us
So it's nothing to be alarmed by. Your particular query, "site:website.com -www", only shows pages indexed without the "www", so it just means that nearly all of your indexed pages begin with www. The exception, of course, is the robots.txt file.
The bigger question for me is, why does Google cache robots.txt files? Oh well.
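For what it's worth, if a robots.txt file sitting in the index really bothers you, it can be kept out with an X-Robots-Tag response header, since you obviously can't put a noindex meta tag inside a text file. A minimal sketch for Apache, assuming mod_headers is enabled:

```
<Files "robots.txt">
  Header set X-Robots-Tag "noindex"
</Files>
```

Google still crawls and obeys the file; the header only keeps the file itself out of the search results.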
-
The robots.txt file should not show up. Sounds like there is something seriously wrong.
-
Did you also search for site:www.website.com? Are they blocking the site? What's in the actual robots file?
Also, does this happen when you search for the site in Bing?
Related Questions
-
Robots User-agent Query
Am I correct in saying that the allow/disallow is only applied to MSNBOT_Mobile? The mobile robots file:
User-agent: Googlebot-Mobile
User-agent: YahooSeeker/M1A1-R2D2
User-agent: MSNBOT_Mobile
Allow: /
Disallow: /1
Disallow: /2/
Disallow: /3
Disallow: /4/
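The grouping behavior can be demonstrated with a small parser sketch (a hypothetical helper, not part of any library). Consecutive User-agent lines form a single group, and the Allow/Disallow block that follows applies to every agent in that group — so here it governs all three bots, not just MSNBOT_Mobile:

```python
def agents_for_groups(robots_txt):
    """Parse robots.txt text into (user-agents, rules) groups.

    Consecutive User-agent lines share the rule block that follows,
    so one Allow/Disallow set can govern several bots at once.
    """
    groups = []              # completed (agents, rules) pairs
    agents, rules = [], []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments/blanks
        if not line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if rules:        # a User-agent line after rules starts a new group
                groups.append((agents, rules))
                agents, rules = [], []
            agents.append(value)
        elif field in ("allow", "disallow"):
            rules.append((field, value))
    if agents:
        groups.append((agents, rules))
    return groups

mobile = """\
User-agent: Googlebot-Mobile
User-agent: YahooSeeker/M1A1-R2D2
User-agent: MSNBOT_Mobile
Allow: /
Disallow: /1
Disallow: /2/
Disallow: /3
Disallow: /4/
"""
groups = agents_for_groups(mobile)
# One group, three agents: the rules apply to all three bots.
```

So the answer is no: the block applies to all three listed user-agents, while any bot not listed falls through to whatever default group (if any) exists elsewhere in the file.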
Technical SEO | ThomasHarvey
-
Old URLs Appearing in SERPs
Thirteen months ago we removed a large number of non-corporate URLs from our web server. We created 301 redirects, and in some cases we simply removed the content because there was no place to redirect to. Unfortunately, all these pages still appear in Google's SERPs (not Bing's), both the 301'd pages and the pages we removed without redirecting. When you click on the pages in the SERPs that have been redirected, you do get redirected, so we have ruled out any problems with the 301s. We have already resubmitted our XML sitemap, and when we run a crawl using Screaming Frog we do not see any of these old pages being linked to on our domain. We have a few different approaches we're considering to get Google to remove these pages from the SERPs and would welcome your input:
1. Remove the 301 redirect entirely so that visits to those pages return a 404 (much easier) or a 410 (would require some setup/configuration via WordPress). This of course means that anyone visiting those URLs won't be forwarded along, but Google may not drop those redirects from the SERPs otherwise.
2. Request that Google temporarily block those pages (done via GWMT), which lasts for 90 days.
3. Update robots.txt to block access to the redirecting directories.
Thank you. Rosemary
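On option 1: if the site sits on Apache, the 410 doesn't have to go through WordPress at all — mod_alias can return it directly from the server config or .htaccess (the paths here are hypothetical examples, not the actual URLs):

```
# Return "410 Gone" for removed content (Apache mod_alias)
Redirect gone /old-page
RedirectMatch gone ^/old-directory/
```

A 410 is an explicit "this is gone on purpose" signal, which Google tends to act on at least as quickly as a 404.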
Technical SEO | RosemaryB
-
Schema showing up in product listing?
I have added schema to my product page as follows: Example product description ..... and so forth. However, the schema markup is visible on the product page itself. How do I keep the schema available for the SERP snippet without it showing on the product page? My client doesn't want it to affect the site design. Thanks!
Technical SEO | DWallace
-
Date Appearing in SERPs
Anyone have an idea why dates might be appearing in search results for a small number of my pages and not others? There is no date on any of the pages themselves and nothing in the back-end CMS indicating dates should appear. Some of the dates (presumably from when the snippets were originally created) are over a year old and make all the information look terribly out of date. Any ideas?
Technical SEO | BrettCollins
-
Duplicate Content for our Advertising Sites Showing in Search Results
Hello, my company has a couple of different sites (Magento stores) for organic, AdWords, and adCenter purposes. They are mirror sites of each other except for phone number, contact form, etc. Here is our organic site: http://www.oxygenconcnetratorstore.com/ And here are the AdWords and adCenter sites, respectively:
http://www.oxygenconcnetratorstore.com/portable/
http://www.oxygenconcnetratorstore.com/oxygen/
The problem is, both the AdWords and adCenter stores appear in the Google SERPs when you put in the exact URL. I have a "noindex,nofollow" tag on both of the advertising sites, but they are still showing in search results. I feel we are getting hurt for basically having three sites of duplicate content. Is there a reason why the sites would be showing in search results even with the noindex,nofollow tags? Any help would be awesome. Thanks.
Technical SEO | chuck-layton
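One thing worth verifying is that the noindex tag is actually present in the HTML that Googlebot receives. A quick sketch using Python's standard-library parser (the markup string here is made up for illustration; in practice you'd feed it the fetched page source):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                self.directives.append(attrs.get("content", "").lower())

# Hypothetical page source, standing in for one of the advertising pages.
html = '<head><meta name="robots" content="noindex, nofollow"></head>'
finder = RobotsMetaFinder()
finder.feed(html)
noindexed = any("noindex" in d for d in finder.directives)
```

Also note a common gotcha: if robots.txt blocks the advertising directories, Googlebot can never fetch those pages to see the noindex tag at all, which is one frequent reason "noindexed" pages linger in the index.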
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number: ?v=1.1... 1.2... 1.3... etc. And the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
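For reference, the robots.txt rule itself would be trivial, whether or not blocking JS turns out to be wise:

```
User-agent: *
Disallow: /js/
```

Keep in mind that if Googlebot can't crawl the scripts, it may also struggle to render and evaluate the pages that depend on them.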
Robots.txt for subdomain
Hi there Mozzers! I have a subdomain with duplicate content and I'd like to remove these pages from the mighty Google index. The problem is that the website is built in Drupal and this subdomain does not have its own robots.txt. So I want to ask you how to disallow and noindex this subdomain. Is it possible to add this to the root robots.txt:
User-agent: *
Disallow: /subdomain.root.nl/
User-agent: Googlebot
Noindex: /subdomain.root.nl/
Thank you in advance! Partouter
Technical SEO | Partouter
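One catch: a robots.txt only applies to the host it is served from, and Disallow takes a URL path rather than a hostname, so "Disallow: /subdomain.root.nl/" in the root file won't match anything on the subdomain. ("Noindex:" also isn't an officially supported robots.txt directive.) If the subdomain is served by the same Apache instance, one workaround is to rewrite its robots.txt request to a separate file — a sketch assuming mod_rewrite, with a hypothetical filename:

```
RewriteCond %{HTTP_HOST} ^subdomain\.root\.nl$ [NC]
RewriteRule ^robots\.txt$ /robots-subdomain.txt [L]
```

Then /robots-subdomain.txt can disallow everything for that host only.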
Self-Cannibalization showing again and again
Hi! We were just doing the on-page work for http://www.wogwear.com/christian-bracelets.html. While checking it in the On-Page tool, it keeps showing "Avoid Keyword Self-Cannibalization". It's an eCommerce website, and all the products on that page had titles in h2. I thought that was creating the problem, so I tried changing them to h3 and h4, but still no good. What am I missing?
Technical SEO | qubesys