Development/Test Ecommerce Website Mistakenly Indexed
-
My question is - relatively speaking, how damaging to SEO is it to have BOTH your development/testing site and your live version indexed/crawled by Google and appearing in the SERPs?
We just launched about a month ago, and made a change to the robots.txt on the development site without noticing ... which led to it being indexed too. So now the ecommerce website is duplicated in Google ... each copy under different URLs of course (and on different servers, DNS, etc.)
We'll fix it right away ... and block crawlers from the development site. But again, my general question is: what is the general damage to SEO ... if any ... created by this kind of mistake? My feeling is nothing significant.
-
No my friend, no! I'm saying we'll point the existing staging/testing environment to the production version and stop using it as staging, instead of closing it completely like I mentioned earlier. And we'll launch a fresh instance for the staging/testing use case.
This will help us transfer the majority of the link juice of the already indexed staging/testing instance.
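As a rough sketch of that "redirect everything" idea (all hostnames below are placeholders, not the actual sites), a small helper that maps each staging URL to its production counterpart could be used to generate the redirect rules:

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_target(staging_url, production_host="www.example.com"):
    """Map a staging URL to its production twin, preserving path and query string."""
    parts = urlsplit(staging_url)
    # Keep path and query, swap the host, drop any fragment (never sent to the server anyway).
    return urlunsplit(("https", production_host, parts.path, parts.query, ""))

# A staging product URL maps onto the same path in production:
print(redirect_target("https://staging1.example.com/shop/leather-chairs?color=brown"))
# → https://www.example.com/shop/leather-chairs?color=brown
```

The point of a per-URL map like this is that every indexed staging page redirects to its exact production equivalent, rather than everything collapsing onto the homepage.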
-
Why would you want to 301 a staging/dev environment to a production site? Unless you plan on making live changes to the production server (not safe), you'd want to keep them separate. Especially for eCommerce it would be important to have different environments to test and QA before pushing a change live. Making any change that impacts a number of pages could damage your ability to generate revenue from the site. You don't take down the development/testing site, because that's your safe environment to test changes before pushing updates to production.
I'm not sure I follow your recommendation. Am I missing a critical point?
-
Hi Eric,
Well, that's a valid point: bots might have considered your staging instances the main website, and that could end up giving you nothing but a facepalm.
The solution you suggested is similar to the one I suggested, in that we get no benefit from the existing instance, whether by removing it or by putting noindex everywhere.
My bad! I assumed your staging/testing instance(s) got indexed only recently and are not very powerful from a domain & page authority perspective. In fact, being a developer, I should have considered the worst case from the start.
Thanks for pointing out the worst case, Eric, i.e. when your staging/testing instances are decently old and you don't want to lose their SEO value while fixing this issue. Here's my proposed solution for it: don't remove the instance, and don't put a noindex everywhere either. The better solution is to establish a 301 redirect bridge from your staging/testing instance to your original website. In this case, ~90% of the link juice that your staging/testing instances have earned will get passed. Make sure each and every URL of the staging/testing instance properly 301 redirects to its counterpart on the original instance.
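If the staging box happens to run Apache with mod_rewrite, the whole bridge can be a one-rule sketch (hostnames are placeholders; adapt to your actual domains and server):

```apache
# .htaccess on the staging host: permanently (301) redirect every URL
# to the identical path on the production domain. Apache carries the
# query string across automatically.
RewriteEngine On
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

Afterwards, spot-check a handful of deep URLs with `curl -I` to confirm they answer `301` and the `Location` header points at the matching production path.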
Hope this helps!
-
It could hurt you in the long run (Google may decide the dev site is more relevant than your live site), but this is an easy fix: noindex your dev site. Just slap a site-wide noindex meta tag across all the pages, and when you're ready to move that code to the production site, remove that tag.
Disallowing the site in the robots.txt file will help, but that's a soft request. The most reliable way to keep the dev site from being indexed is the noindex tag. Since it sounds like you want to QA in a live environment, that approach prevents search engines from indexing the site while still letting you test in a production-like scenario.
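A minimal version of that site-wide tag, to go in the `<head>` of every dev-site template and be stripped before the code ships to production:

```html
<!-- Dev/staging only: keep this page out of every search index. -->
<meta name="robots" content="noindex, nofollow">
```

If the templates are hard to touch, the server-level equivalent is sending an `X-Robots-Tag: noindex` HTTP header from the dev host. Either way, the pages must stay crawlable (i.e. not blocked in robots.txt) so the bots can actually see the directive.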
-
Hey,
I recently faced the same issue when our staging instances got indexed accidentally and left us exposed to a duplicate content penalty (well, that's not cool). After a decent bit of research, I took the following steps and got rid of the issue:
- I removed my staging instances, i.e. staging1.mysite.com, staging2.mysite.com, and so on. Removing the instances helps you deindex already indexed pages faster than just blocking the whole website in robots.txt.
- I relaunched the staging instances under slightly different names, like new-staging1.mysite.com and new-staging2.mysite.com, and disallowed bots on these instances from day zero to avoid this mess again.
This helped me fix the issue quickly. Hope this helps!
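For reference, the day-zero blanket block on the new staging hostnames is just a two-line robots.txt served at the root of each instance (hostnames as in the example above):

```
User-agent: *
Disallow: /
```

Note that robots.txt blocks crawling, not indexing of URLs Google already knows about, which is exactly why the old instances were removed first rather than merely blocked.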