Can URLs blocked with robots.txt hurt your site?
-
We have about 20 testing environments that contain duplicates of our indexed content. All of these environments are blocked by robots.txt, and they appear in Google's index as "blocked by robots.txt". Can they still count against us or hurt us?
I know the best practice for permanently removing these would be to use the noindex tag, but I'm wondering whether they can still hurt us if we leave them the way they are.
-
I'm 90% sure they won't. First of all, check whether Google has indexed them; if not, your robots.txt should do the job. However, I would reinforce that by making sure those URLs are out of your sitemap file, and by making sure your robots.txt disallows apply to ALL user agents (*), not just Googlebot, for example.
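For illustration, a minimal robots.txt sketch along those lines (the directory names below are placeholders, not your actual testing paths):
# Placeholder paths: one Disallow per testing environment, applied to every crawler
User-agent: *
Disallow: /testing-env-1/
Disallow: /testing-env-2/
Because the rules sit under User-agent: *, every well-behaved crawler (Bing, Yandex, and so on) gets the same block, not just Googlebot.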
Google's duplicate-content policies are tough, but Google will always respect simple directives such as robots.txt.
I had a case in the past where a customer had a dedicated IP and Google somehow found it, so you could see both the domain's pages and the IP's pages, all identical. We simply added an .htaccess rule to redirect the IP requests to the domain, and even though the situation had been like that for a long time, it doesn't seem to have affected them. In theory Google penalizes duplicate content, but not in cases like this; it comes down to how the crawler behaves.
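A rule along those lines (a sketch only, assuming Apache with mod_rewrite; the IP address and domain below are placeholders) typically looks like this:
RewriteEngine On
# If the request arrived on the bare IP rather than the domain, 301 it to the domain
RewriteCond %{HTTP_HOST} ^203\.0\.113\.10$
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
That way every URL requested via the IP returns a permanent redirect to the same path on the canonical domain.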
Regards!
-
I've seen people say that in "rare" cases, URLs blocked by robots.txt will be shown as search results, but I can't imagine that happening when they're duplicates of your content.
Robots.txt tells a search engine not to crawl a directory, but if another resource links to it, the engine may know the page exists, just not what its content is. They won't know whether it's noindexed because they don't crawl it, but if they know it exists, they could occasionally return it. The duplicate content on your live site will be the better result, so that better result will be returned, and your test sites should not be.
As far as hurting your site: no way, unless a page WAS allowed, is a duplicate, is now NOT allowed, and hasn't been recrawled. Even in that case, I can't imagine it would hurt you much. I wouldn't worry about it.
(Also, noindex doesn't matter on these pages, at least to Google. Google will see the robots.txt block first and will not crawl the page. Until they crawl the page, it doesn't matter whether it has one word or 300 directives; they'll never see any of it. So noindex really wouldn't help unless a page had already slipped through.)
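(If pages have already slipped through, one approach, sketched here on the assumption of an Apache server with mod_headers, is to lift the robots.txt block temporarily and return a noindex header from the test environments so Google can recrawl those URLs and drop them:
# Sketch only: placed in an .htaccess inside the testing environment's root,
# this sends a noindex (and nofollow) signal with every response from that directory
Header set X-Robots-Tag "noindex, nofollow"
Once the pages fall out of the index, the robots.txt block can go back in place.)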
-
I don't believe they are going to hurt you. The message is more of a warning that, if you were trying to have these pages indexed, they can't currently be accessed. When you don't want them indexed, as in this case, I don't believe you are suffering because of it.
Related Questions
-
Can anyone help me diagnose an indexing/sitemap issue on a large e-commerce site?
Hey guys. Wondering if someone can help diagnose a problem for me. Here's our site: https://www.flagandbanner.com/ We have a fairly large e-commerce site, roughly 23,000 URLs according to crawls using both Moz and Screaming Frog. I have created an XML sitemap (using SF) and uploaded it to Webmaster Tools. WMT is only showing about 2,500 URLs indexed. Further, WMT is showing that Google is indexing only about half (approx. 11,000) of the URLs. Finally (to add even more confusion), a site: search on Google shows only about 5,400 URLs found. The numbers are all over the place! Here's the robots.txt file:
User-agent: *
Allow: /
Disallow: /aspnet_client/
Disallow: /httperrors/
Disallow: /HTTPErrors/
Disallow: /temp/
Disallow: /test/
Disallow: /i_i_email_friend_request
Disallow: /i_i_narrow_your_search
Disallow: /shopping_cart
Disallow: /add_product_to_favorites
Disallow: /email_friend_request
Disallow: /searchformaction
Disallow: /search_keyword
Disallow: /page=
Disallow: /hid=
Disallow: /fab/*
Sitemap: https://www.flagandbanner.com/images/sitemap.xml
Anyone have any thoughts as to what our problems are? Mike
Intermediate & Advanced SEO | | webrocket -
Can I use duplicate content in different US cities without hurting SEO?
So, I have major concerns with this plan. My company has hundreds of facilities located all over the country, and each facility has its own website. We have a third-party company working to build a content strategy for us. What they came up with is to create a bank of content specific to each service line; if/when any facility offers that service, they upload the content for that service line to that facility's website. So in theory, you might have 10-12 websites, all in different cities, with the same content for a service. They claim: "Google is smart, it knows the content is all from the same company, and because it's in different local markets, it will still rank." My contention is that duplicate content is duplicate content, and unless it is localized, Google is going to prioritize one page of it and the rest will get very little exposure in the rankings no matter where you are. I could be wrong, but I want to be sure we aren't shooting ourselves in the foot with this strategy, because it is a major undertaking and too important to go off in the wrong direction. SEO experts, your help is genuinely appreciated!
Intermediate & Advanced SEO | | MJTrevens1 -
Robots.txt - Do I block bots from crawling the non-www version if I use www.site.com?
My site is set up at http://www.site.com and I have the non-www version redirected to www in the .htaccess file. My question is: what should my robots.txt file look like for the non-www site? Do you block robots from crawling it like this, or do you leave it blank?
User-agent: *
Disallow: /
Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
Intermediate & Advanced SEO | | morg454540 -
Pages getting into Google Index, blocked by Robots.txt??
Hi all, so yesterday we set out to remove URLs that got into the Google index but were not supposed to be there, due to faceted navigation. We searched for the URLs by using this in Google Search:
site:www.sekretza.com inurl:price=
site:www.sekretza.com inurl:artists=
That brings up a list of "duplicate" pages, and they show the usual: "A description for this result is not available because of this site's robots.txt – learn more." So we removed them all, and Google removed them all, every single one. This morning I did a check and found that more are creeping in. If I take one of the suspected dupes to the robots.txt tester, Google tells me it's blocked, and yet it's appearing in their index. I'm confused as to why a path that is blocked is able to get into the index. I'm thinking of lifting the robots.txt block so that Google can see that these pages also have a meta NOINDEX,FOLLOW tag on them, but surely that will waste my crawl budget on unnecessary pages? Any ideas? Thanks.
Intermediate & Advanced SEO | | bjs2010 -
301's, Mixed-Case URLs, and Site Migration Disaster
Hello Moz Community, after placing trust in a developer to build and migrate our site, the site launched 9 weeks ago and has been one disaster after another. Sadly, after 16 months of development we are building again, this time leveled up and doing it in-house with our own people. I have one topic I need advice on, and that is 301s. Here's the deal: the newbie developer used a mixed-case URL structure, so what should have been /example-url became /Example-Url on all URLs. Awesome, right? It was a duplicate-content nightmare upon launch (among other things). We are rebuilding now. My question is this: do we bite the bullet and 301 all URLs to a proper lower-case URL structure? We've already lost a lot of link equity from 301ing the site the first time around. Our homepage had been a PR 4 for the last 5 years; now we are a PR 3. That is a substantial loss. For our primary keywords, we were on the first page for the big ones for the last decade. Now we are just barely cleaving to the second page, and many are on the third page. I am afraid that if we 301 all the URLs again, a 15% reduction in link equity per page is really going to hurt us, again. However, keeping the mixed-case URL structure is also a whammy. Building a brand new site, again, it seems like we should do it correctly and right all the previous wrongs. But on the other hand, another PR demotion and we'll be in line at the soup kitchen. What would you do?
Intermediate & Advanced SEO | | yogitrout10 -
What happens when I redirect an entire site to an established page on another site?
Hi there, I have a website which is dedicated to selling ONE product (in different forms) for my main brand site. It is branded similarly, targets similar keywords, and gets some traffic which converts to leads. Additionally, the auxiliary site has a Google Rank 2 in its own right. I am thinking of consolidating this auxiliary site into the specific product page on my main site. The reason I am considering this is to give a "boost" to the main product page on our main site, which has many core keywords sitting with SERP rankings between 11 and 20 (so not in the first 10). Because the auxiliary site gets traffic and leads in its own right, I don't want this to be to the detriment of my leads overall. The question is: if I 301 redirect the entire domain of my auxiliary site to the equivalent product on my main site, am I likely to see a large "boost" to that product page? (i.e., will I likely see my rankings rise significantly from the 11-20 range?)
Intermediate & Advanced SEO | | love-seo-goodness0 -
Googlebot Can't Access My Sites After I Repair My Robots File
Hello Mozzers, a colleague and I have been collectively managing about 12 brands for the past several months, and we have recently received a number of messages in the sites' Webmaster Tools telling us that 'Googlebot was not able to access our site due to some errors with our robots.txt file'. My colleague and I, in turn, created new robots.txt files with the intention of preventing the spider from crawling our 'cgi-bin' directory, as follows:
User-agent: *
Disallow: /cgi-bin/
After creating the robots.txt and manually re-submitting it in Webmaster Tools (and receiving the green checkbox), I received the same message about Googlebot not being able to access the site, the only difference being that this time it was for a different site that I manage. I repeated the process and everything, aesthetically, looked correct; however, I continued receiving these messages for each of the other sites I manage on a daily basis for roughly a 10-day period. Do any of you know why I may be receiving this error? Is it not possible for me to block Googlebot from crawling the 'cgi-bin'? Any and all advice/insight is very much welcome; I hope I'm being descriptive enough!
Intermediate & Advanced SEO | | NiallSmith1 -
Disallow my store in robots.txt?
Should I disallow my store directory in robots.txt? Here is the URL: https://www.stdtime.com/store/ Here are my reasons for suggesting this: SEOMOZ finds crawl "errors" in there that I don't care about; I don't think I care if the search engines index those pages; I only have one product, and it is not an impulse buy; and my product has a 60-day sales cycle, so price is less important than features.
Intermediate & Advanced SEO | | raywhite0