New server update + wrong robots.txt = lost SERP rankings
-
Over the weekend, we updated our store to a new server. Before the switch, we had a robots.txt file on the new server that disallowed its contents from being indexed (we didn't want duplicate pages from both old and new servers).
When we finally made the switch, we somehow forgot to remove that robots.txt file, so the new pages weren't indexed. We quickly put our good robots.txt in place, and we submitted a request for a re-crawl of the site.
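For context, a staging server is typically blocked with a disallow-all robots.txt, and the corrected live file either allows everything or blocks only specific paths. Roughly (illustrative only, not our exact files):

```
# Staging server (blocks all crawlers):
User-agent: *
Disallow: /

# Live server (an empty Disallow allows everything):
User-agent: *
Disallow:
```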
The problem is that many of our search rankings have changed. We were ranking #2 for some keywords, and now we're not showing up at all. Is there anything we can do? Google Webmaster Tools says the next crawl could take weeks! Any suggestions would be much appreciated.
-
Dr. Pete,
I just ran across one of your webinars yesterday, and you brought up some great ideas. You earned a few points in my book.
Too often, SEOs see changes in the rankings and react to counteract them. Most of the time these bounces are actually a GOOD sign: it means Google saw your changes and is adjusting to them. If your changes were positive, you should see positive results. I have rarely seen a case where a user made a positive change and got a negative result from Google. Patience is a virtue.
-
Thanks everyone for the help! Fortunately we remedied the problem almost immediately, so it only took about a day to get our rankings back. I think the sitemap and fixed robots.txt were the most important factors.
-
I agree: let Google re-index first, then re-evaluate the situation.
-
I hate to say it, but @inhouseninja is right - there's not a lot you can do, and over-reacting could be very dangerous. In other words - don't make a ton of changes just to offset this - Google will re-index.
A few minor cues that are safe:
(1) Re-submit your XML sitemap
(2) Build a few new links (authoritative ones, especially)
(3) Hit social media with your new URLs
All 3 are at least nudges to re-index. They aren't magic bullets, but you need to get Google's attention.
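Before nudging Google, it's also worth double-checking that the live robots.txt really does allow crawling now. A minimal sketch using Python's standard `urllib.robotparser` (the domain and paths here are just placeholders):

```python
import urllib.robotparser

def is_crawlable(robots_txt: str, url: str, agent: str = "Googlebot") -> bool:
    """Return True if the given robots.txt body allows `agent` to fetch `url`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

# Staging-style rules (block everything) vs. corrected live rules (allow everything).
staging = "User-agent: *\nDisallow: /"
live = "User-agent: *\nDisallow:"

print(is_crawlable(staging, "https://example.com/products/"))  # False
print(is_crawlable(live, "https://example.com/products/"))     # True
```

In practice you'd fetch the live file from `/robots.txt` and test a handful of your most important URLs against it before resubmitting the sitemap.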
-
Remain calm. You should be just fine. It just takes time for Google to digest the new robots.txt. I would be concerned if things didn't change in 3-4 weeks. Adopt a rule to not freak out on Google until you've given the problem 14 days to resolve. Sometimes Google moves things around and this is natural.
If you want Google to crawl your site faster, build some links and do some social media. That will encourage Google to speed it up.
-
If this is all that happened the next crawl should fix it. Just sit tight and they should bounce up again in a week or so.
-
That does not sound fun at all... So you just changed the server, a complete copy?
My first question: other than the server, did anything else change? Copy or URLs?
My second question: is the old server still up and live to the internet?