Will disallowing URLs in the robots.txt file stop those URLs from being indexed by Google?
-
I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google.
Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS.
They would like to add the following to our robots.txt file:
Disallow: /catalog/product/gallery/
QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove them from the index?
We don't want these pages to be found.
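For context, a Disallow line only takes effect as part of a user-agent group, so the complete robots.txt entry would look something like the sketch below (the path is the one our developer suggested; whether it matches every gallery URL is an assumption that would need checking):

User-agent: *
Disallow: /catalog/product/gallery/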
-
That's why I mentioned: "eventually". But thanks for the added information. Hopefully it's clear now for the original poster.
-
Looking at this video - https://www.youtube.com/watch?v=KBdEwpRQRD0&feature=youtu.be - Matt Cutts advises using the noindex tag on every individual page. However, this is very time consuming if you're dealing with a large volume of pages.
The other option he recommends is to use the robots.txt file as well as the URL removal tool in GWMT. Although this is the second-choice option, it does seem easier for us to implement than the noindex tag.
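For reference, the noindex tag Matt Cutts is talking about is a single line in each page's head section, roughly:

<meta name="robots" content="noindex">

If the gallery pages are all generated from one module or template (my assumption, based on what our developer said), adding it to that template would cover every gallery URL at once rather than page by page.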
-
Hi,
Yes, if you put any URL in the robots.txt it will not be shown in the search results after some time, even if your pages were already indexed. When you disallow URLs in the robots.txt, Google will stop crawling those pages and will eventually stop indexing them.
-
Hi Nico
Great response, thanks.
This is certainly something I'm taking into consideration, and I will ask my developer about it.
-
Thanks Thomas.
I'm now finding out from my developer whether we are able to noindex these pages with the meta robots tag.
If that isn't possible, it's likely that we'll add them to the robots.txt as you did.
Either way, I think it will be progress to some degree.
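Another option worth asking the developer about (an assumption on my part, since I don't know how these module-generated URLs are served): if a meta tag can't be added to the page output, the same signal can be sent as an HTTP response header on those URLs, for example:

X-Robots-Tag: noindex

Google treats this header like the meta robots tag, and it also works for non-HTML files such as the images themselves, but only as long as the URLs stay crawlable, i.e. not blocked in robots.txt.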
-
I don't think Martijn's statement is quite correct, as I have had a different experience in an accidental experiment. Crawling is not the same as indexing. Google will put pages it cannot crawl into the index ... and they will stay there unless removed somehow. They will probably only show up for specific searches, though.
Completely agree. I have done the same for a website I am doing work with: ideally we would noindex with meta robots, however that isn't possible. So instead we added the URLs to the robots.txt. The number of indexed pages has dropped, yet when you do an exact search it just says the description can't be reached.
So I was happy with the results, as they're now not ranking for the terms they were.
-
I don't think Martijn's statement is quite correct, as I have had a different experience in an accidental experiment. Crawling is not the same as indexing. Google will put pages it cannot crawl into the index ... and they will stay there unless removed somehow. They will probably only show up for specific searches, though.
In September 2015 I catapulted a website from roughly 3,000 to 130,000 indexed pages. About 127,000 were essentially canonicalised duplicates (yes, it did make sense) that were also blocked by robots.txt, but they were put into the index nonetheless. The problem was a dynamically generated parameter, always different, always blocked by robots.
The title was equal to the link text; the description became "A description for this result is not available because of this site's robots.txt – learn more." (If Google cannot crawl a URL, it will usually take the title from links pointing to that URL.) There was no sign of them disappearing. In fact, Google was happy to add more and more to its index ...
At the start of December 2015 I removed the robots.txt block, so Google could now read the canonicals or noindex on the URLs ... yet the pages only began dropping out in March 2016, slowly and in bunches of a few thousand, probably due to the very low relevancy and crawl budget assigned to them. Right now there are still about 24,000 pages in the index.
So my answer would be: No, disallowing crawling in the robots.txt will NOT remove a page from the index. For that you need to noindex the pages (which sometimes also works if done in robots.txt, I've heard). Disallowing URLs in the robots.txt will very likely drop pages to the end of useful results, though, as Andy described. (I don't know if this has any influence on the general evaluation of the site as a whole; I'd guess not.)
Regards
Nico
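To make Nico's point about December 2015 concrete: the combination sketched below is the one that does not work the way people expect, because the Disallow stops Google from fetching the page, so the noindex is never seen (paths reused from the question at the top of this thread).

In robots.txt:
User-agent: *
Disallow: /catalog/product/gallery/

In the page's head:
<meta name="robots" content="noindex">

If you want the noindex (or a canonical) to be honoured, the URLs have to remain crawlable until they have actually dropped out of the index; only then does blocking them make sense.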
-
Thanks Martijn. This is what I was assuming would happen. However, I got a confusing message from my developer, which said the following:
"won't remove the URL's from the index but it will mean that they will only show up for very specific searches that customers are extremely unlikely to use. It will also increase Asgard's crawl budget as Google and Bing won't try to crawl these URLs. Would you be happy with this solution?"
I would tend to still agree with your statement though.
-
Yes, they will be, eventually. As you disallow Google from crawling the URLs, it will probably start hiding the descriptions for some of these image pages soon, as it can't crawl them anymore. Then at some point it will stop looking at them at all.
Related Questions
-
Block session id URLs with robots.txt
Intermediate & Advanced SEO | Mat_C
Hi, I would like to block all URLs with the parameter '?filter=' from being crawled by including them in the robots.txt. Which directive should I use:
User-agent: *
Disallow: ?filter=
or
User-agent: *
Disallow: /?filter=
In other words, is the forward slash at the beginning of the disallow directive necessary? Thanks!
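A side note, not part of the original question: the robots.txt spec expects each Disallow value to be a path starting with /, so a rule without the slash may be ignored by strict parsers, while Disallow: /?filter= only matches the parameter on the site root. A hedged sketch that applies on any path would be:

User-agent: *
Disallow: /*?filter=

The /*? prefix lets the rule apply to any page, though it only catches URLs where filter is the first parameter in the query string.
-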
eCommerce Replatforming URLs
Intermediate & Advanced SEO | BoatOutfitters
We are in the process of re-platforming our eCommerce site to Magento 2. For the most part, the majority of site content will remain the same. Unfortunately, on our current platform we have been inconsistent with the use of .html as a URL suffix. As a result, our category and product pages are half and half:
/stainless-steel-hardware.html
&
/stainless-steel-hardware
We are considering taking the opportunity to clean up and standardize our URLs (drop the .html from all URLs on the new site and 301 redirect these to the same URL without the .html). Our concern is that many of the .html pages are good categories with strong PageRank, and I've read many articles about PageRank loss from 301 redirects. We are debating internally whether it really makes sense to take an SEO hit for something as seemingly small as dropping the .html from the URL. It would be a no-brainer if we were taking the opportunity to change to more SEO-friendly, natural-language URLs. However, our current URLs appear acceptable, with the exception of the inconsistent suffix. Thanks in advance for any insight on how you would approach this!
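For anyone picturing the mechanics, the 301 in question is just an HTTP-level redirect from the old URL to the new one, roughly (paths taken from the example above, response trimmed to the relevant header):

GET /stainless-steel-hardware.html HTTP/1.1

HTTP/1.1 301 Moved Permanently
Location: /stainless-steel-hardware

How it is configured depends on the platform, so the snippet above is only an illustration of the behaviour, not an implementation.
-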
Syndicated content with meta robots 'noindex, nofollow': safe?
Intermediate & Advanced SEO | Fabio80
Hello, I manage, with a dedicated team, the development of a big news portal with thousands of unique articles. To expand our audiences, we syndicate content to a number of partner websites. They can publish some of our articles, as long as (1) they put a rel=canonical in their duplicated article, pointing to our original article, OR (2) they put a meta robots 'noindex, follow' in their duplicated article plus a dofollow link to our original article. A new prospect that wants to partner with us would like to follow a different path: republish the articles with a meta robots 'noindex, nofollow' in each duplicated article plus a dofollow link to our original article. This is because he doesn't want to pass PageRank/link authority to our website (as it is not explicitly included in the contract). In terms of visibility we'd have some advantages with this partnership (even without link authority to our site), so I would accept. My question is: considering that the partner website is much more authoritative than ours, could this approach damage the ranking of our articles in some way? I know that the duplicated articles published on the partner website wouldn't be indexed (because of the meta robots noindex, nofollow). But the Google crawler could still reach them. And, since they have no rel=canonical and the link to our original article wouldn't be followed, I don't know if this may cause confusion about the original source of the articles. In your opinion, is this approach safe from an SEO point of view? Do we have to take some measures to protect our content? Hope I explained myself well; any help would be very appreciated. Thank you,
Fab
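Just to make the three set-ups in this question easier to compare, on the partner's copy of an article they would look roughly like this (the href value is a placeholder, not a real URL):

Option 1: <link rel="canonical" href="https://www.example.com/original-article">
Option 2: <meta name="robots" content="noindex, follow"> plus a normal followed link to the original article
Proposed: <meta name="robots" content="noindex, nofollow"> plus a link to the original article, which Google would then be told not to follow
-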
Is a 1:1 301 redirect required on indexed URLs when restructuring URLs, even if the new URL is canonicalized?
Intermediate & Advanced SEO | HB17
Hello folks, we are restructuring some URLs which form a fair chunk of the content of the domain. This content is auto-generated rather than manually created, unlike other parts of the website. The same content is currently accessible from two URLs:
/used-books/autobiography-a-long-walk-to-freedom-isbn
/autobiography/used-books/a-long-walk-to-freedom-isbn
URL 1 uses URL 2 as the canonical URL, and it has worked all right, since Moz does not show the two as duplicates of each other. Google has also indexed the canonical URL, although there are still a few 'URL 1s' which were indexed before the canonical was implemented. The updated URL structure will look something like this:
/used-books/autobiography-a-long-walk-to-freedom-author-name-isbn
/autobiography/used-books/a-long-walk-to-freedom-author-name-isbn
It would be great to have just a single URL, but a business requirement prevents us from having only the canonical URL, even with the new structure. Since we will still have two URLs to access the same content, we were wondering whether we will need to do a 1:1 301 redirect on the current URLs, or whether, since there will be a canonical URL (/autobiography/used-books/a-long-walk-to-freedom-author-name-isbn), we won't need to worry about doing the 1:1 redirect on the indexed content. Please note that the content will still be accessible from the OLD URLs (unless 301ed, of course). If it is advisable to do a 1:1 301 redirect, this is what we intend to do:
/used-books/autobiography-a-long-walk-to-freedom-isbn 301 to /used-books/autobiography-a-long-walk-to-freedom-author-name-isbn
/autobiography/used-books/a-long-walk-to-freedom-isbn 301 to /autobiography/used-books/a-long-walk-to-freedom-author-name-isbn
Any advice/suggestions would be greatly appreciated. Thank you.
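For readers unfamiliar with the set-up described here: the canonical from URL 1 to URL 2 is expressed as a single link element in the head of URL 1, along the lines of the sketch below (the domain is a placeholder; the path is the one from the question):

<link rel="canonical" href="https://www.example.com/autobiography/used-books/a-long-walk-to-freedom-author-name-isbn">
-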
How to get a site out of Google's Sandbox
Intermediate & Advanced SEO | AllieMc
Hi, I am working on a website that is ranking well in Bing for the domain name / exact URL search but appears nowhere in Google or Yahoo. I have done the site: search in Google and it is indexed, so I am presuming it is in the sandbox. The website was originally developed in India and I do not know whether it had some history of bad backlinks. The website itself is well optimised, and I have checked all pages in Moz, getting a grade A. Webmaster Tools is not showing any manual actions. I was wondering what I could do next?
-
Google Indexed Old Backups Help!
Intermediate & Advanced SEO | alrockn
I have the bad habit of renaming an HTML page sitting on my server before uploading a new version. I usually do this after a major change. So after the upload, on my server there would be "product.html" as well as "product050714.html". I just stumbled on the fact that Google has been indexing these backups. Can I just delete them and produce a 404?
-
Will blocking Google and search engines from indexing images hurt SEO?
Intermediate & Advanced SEO | James77
Hi, we have a bit of a problem where, on a website we are managing, there are thousands of "dynamically" re-sized images. These are stressing out the server, as on any page there could be up to 100 dynamically re-sized images. Google alone is indexing 50,000 pages a day, so multiply that by the number of images and it is a huge drag on the server. I was wondering if it may be an idea to block robots (in robots.txt) from crawling all the images in the image folder, to reduce the server load until we have a proper fix in place. We don't get any real value from having our website images in "Google Images", so I am wondering if this could be a safe way of reducing server load? Are there any other potential SEO issues this could cause? Thanks
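A minimal robots.txt sketch for what is being asked here (the /images/ path is an assumption and would need to match the folder that actually serves the dynamically resized files):

User-agent: *
Disallow: /images/

As discussed earlier in this thread, this stops the crawling (and therefore the server load), but it is not a reliable way to get already-indexed URLs removed from the index.
-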
Blocking Dynamic URLs with Robots.txt
Intermediate & Advanced SEO | AndrewY
Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it results in a lot of URL variations of the same page being crawled by Google. For example, a standard category page:
www.mysite.com/widgets.html
...which uses a "Price" layered navigation sidebar to filter products based on price, also produces the following URLs, which link to the same page:
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them. Question: Is this a wise thing to do? Or does Google take layered navigation links into account by default, so I don't need to worry? To implement, I was going to do the following in robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL with a '?' or '=' from being indexed. Is there a better way to do this, or is this a good solution? Thank you!
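Not part of the original question, but as a narrower sketch of the same idea: if only the price filter is producing the duplicates, the rule can target that parameter instead of every '?' and '=' on the site (parameter name taken from the examples above):

User-agent: *
Disallow: /*?price=

The broad /*? and /*= patterns would also block any other legitimate dynamic URLs the site might have, which is worth checking before rolling them out.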