Should I let Google crawl my production server if the site is still under development?
-
I am building out a brand-new site. It's built on WordPress, so I've been tinkering with the themes and plugins on the production server. To my surprise, less than a week after installing WordPress, I already have pages in the index.
I've seen advice in this forum about blocking search bots from dev servers to prevent duplicate content, but this is my production server, so that seems like a bad idea.
Any advice on the best way to proceed? Block or no block? Or something else? (I know how to block, so I'm not looking for instructions).
- We're around 3 months from officially launching (possibly less).
- We'll start putting real content on the site sometime in June, even though we won't be launching yet.
- We should have a development environment ready in the next couple of weeks.
Thanks!
-
Thank you for the detailed response, Paul. I'll get cracking on your suggestions.
I was mostly worried that if I blocked Google now, it would be mad at me later. You've given me a way to deal with the bot concerns.
I'm less concerned that anyone will find these pages; I only knew they were in the index because one of my monitoring services alerted me that Google was crawling the site.
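For anyone reading later who doesn't have a monitoring service: you can spot crawl activity yourself by scanning the server access log for bot user agents. A rough sketch in Python (the log path and bot list are assumptions to adjust for your own host):

```python
# Rough sketch: count search-bot hits in a combined-format access log.
# LOG_PATH and BOT_TOKENS are assumptions - adjust them for your own host.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
BOT_TOKENS = ("Googlebot", "bingbot", "DuckDuckBot")

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        for bot in BOT_TOKENS:
            if bot in line:  # the user-agent string appears verbatim in each log line
                hits[bot] += 1

for bot, count in hits.most_common():
    print(f"{bot}: {count} requests")
```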
-
Thanks for the confirmation, Dan! Looks like you're up & working early on a Sunday morning.
-
In my opinion, no, you definitely should NOT allow the production server to be indexed while it's in this state. For all intents and purposes it IS your dev server at the moment, and the last thing you want is for the search crawlers to think that what's there will be representative of the quality of your site when it's finished.
My recommendation:
- get the current site out of the SERPs. (Use the WordPress setting under Settings -> Reading and check the "Discourage search engines from indexing this site" box. DON'T add a Disallow rule to robots.txt until the pages have all dropped out of the SERPs; if you block crawling too soon, the bots can never see the noindex and the stale pages will linger in the index.)
- when the dev site goes into operation, make _certain_ right from the start that it cannot be crawled (vastly better than trying to fix the problem after it gets accidentally indexed).
- as soon as you have time, build a proper front page and a few content pages on the production site that indicate what the full site will be about, and get some strong, basic, well-written content on there that will remain after go-live. (Keep ALL the rest of the prod site's pages out of the SERPs with meta noindex tags; the sketch after this list shows a quick way to verify they're in place.)
- once you have the new, stable, basic content up on prod, allow the SEs to start indexing it.
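For reference, the "Discourage search engines" box works by emitting a robots meta tag on every page, and it's worth verifying that the directive is actually being served before you rely on it. Below is a minimal sketch in Python (assuming the third-party requests library; the URL list is a placeholder) that checks each page for noindex in either the meta tag or the X-Robots-Tag header:

```python
# Minimal sketch: confirm that pages meant to stay out of the SERPs
# actually serve a noindex directive. The URLs below are placeholders.
import re
import requests

PAGES_TO_CHECK = [
    "https://www.example.com/",              # hypothetical URLs -
    "https://www.example.com/sample-page/",  # replace with your own pages
]

# Matches a robots meta tag whose content includes "noindex".
META_NOINDEX = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

for url in PAGES_TO_CHECK:
    resp = requests.get(url, timeout=10)
    # Some setups send the directive as an HTTP header instead, so check both.
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    meta_noindex = bool(META_NOINDEX.search(resp.text))
    status = "noindex OK" if (header_noindex or meta_noindex) else "MISSING noindex"
    print(f"{url}: {status} (HTTP {resp.status_code})")
```

Once every page reports noindex and the old URLs have dropped out of the SERPs, it's safe to add the robots.txt Disallow for the sections you want kept off-limits.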
This gets the messy stuff out of the SERPs before it can pollute the index (or give you a bad reputation with any actual visitors, who shouldn't be seeing your tinkering). By getting some real content up as soon as possible, even on a very basic template, you'll start giving the SEs an idea of the quality that's to come. Wouldn't hurt to start building a few backlinks once the basic content is up on prod - e.g. links from its new social profiles etc.
This way, when the full site goes live, you'll already have some quality visibility in the engines, so it will be quicker to get the rest of the new site crawled and indexed.
Does that make sense?
Paul
P.S. If at all appropriate, use the basic prod content to show visitors why/how they should connect with you on social media, and offer them a chance to sign up for your newsletter so they're notified when the full site goes live. (It's never too early to start collecting those subscribers!)