Can I rely on just robots.txt?
-
We have a test version of a client's website on a separate server before it goes onto the live server.
Some code from the test site has somehow managed to get Google to index the test site, which isn't great!
Would simply adding a robots.txt file to the root of the test site, blocking everything, be good enough, or will I also have to put noindex and nofollow meta tags on every page of the test site?
-
You can do the inbound link check right here using SEOMoz's Open Site Explorer tool. It will find links pointing to the dev site whether it's on a subdomain, in a subfolder, or on a separate site.
Good luck!
Paul
-
That's a great help, cheers.
Where's the best place to do an inbound link check?
-
You're actually up against a bit of a sticky wicket here, SS. You do need the noindex, nofollow meta tags on each page, as Irving mentions.
HOWEVER! If you also add a robots.txt directive blocking the site, the search crawlers won't crawl your pages, so they'll never see the noindex meta tag and won't know to remove the incorrectly indexed pages from their index.
My recommendation is for a belt & suspenders approach.
- Implement the meta noindex, nofollow tags throughout the dev site (the exact tag is shown after this list), but do NOT immediately implement the robots.txt exclusion. Wait a day or two until the pages get recrawled and the bots discover the noindex meta tags.
- Use the Remove URL tools in both Google and Bing Webmaster Tools to request removal of all the dev pages you know have been indexed.
- Then add the exclusion directive to the robots.txt file (also shown below) to keep the crawlers out from then on, leaving the noindex, nofollow tags in place.
- Check back in the SERPs periodically to confirm that no other dev pages have been indexed. If they have, submit another manual removal request.
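For reference, the actual snippets involved in steps 1 and 3 are standard and look like this. The meta tag goes in the <head> of every dev page:

<meta name="robots" content="noindex, nofollow">

And once the pages have been recrawled and dropped, the site-wide block to add to the dev site's robots.txt is simply:

User-agent: *
Disallow: /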
Does that make sense?
Paul
P.S. As a last measure, run an inbound link check on the dev pages that got indexed to find out which external pages are linking to them. Get those inbound links removed ASAP so the search engines aren't getting any signals to index the dev site. A final option would be to simply password-protect the directory the dev site is in. A little less convenient, but guaranteed to keep the crawlers out.
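If you do go the password-protection route and the dev box happens to run Apache (just an assumption on my part, adjust for whatever server you're actually on), a minimal basic-auth setup in the dev site's root looks something like this, with the .htpasswd path and username as placeholders:

# .htaccess in the dev site's document root
AuthType Basic
AuthName "Dev site - authorised users only"
# placeholder path; keep the password file outside the web root
AuthUserFile /home/example/.htpasswd
Require valid-user

Then create the password file once on the server with something like: htpasswd -c /home/example/.htpasswd devuser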
-
Cheers, I thought as much.
-
You cannot rely on robots.txt alone; you need to add the meta noindex tag to the pages as well to ensure they will not get indexed.