New server update + wrong robots.txt = lost SERP rankings
-
Over the weekend, we moved our store to a new server. Before the switch, we had a robots.txt file on the new server that blocked crawlers from its contents (we didn't want duplicate pages indexed from both the old and new servers).
When we finally made the switch, we somehow forgot to remove that robots.txt file, so the new pages weren't being crawled. We quickly put our good robots.txt in place and submitted a request for a re-crawl of the site.
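For context, the blocking file was the standard disallow-everything staging setup, and the fix is the permissive version. These are sketches of the two shapes, not our exact files:

```
# Staging robots.txt: blocks all compliant crawlers from the whole site
User-agent: *
Disallow: /
```

versus:

```
# Production robots.txt: an empty Disallow value blocks nothing
User-agent: *
Disallow:
```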
The problem is that many of our search rankings have changed. We were ranking #2 for some keywords, and now we're not showing up at all. Is there anything we can do? Google Webmaster Tools says that the next crawl could take weeks! Any suggestions will be much appreciated.
-
Dr. Pete,
I just ran across one of your webinars yesterday, and you brought up some great ideas. You earned a few points in my book.
Too often, SEOs see changes in the rankings and react to counteract the change. Most of the time these bounces are actually a GOOD sign: it means Google saw your changes and is adjusting to them. If your changes were positive, you should see positive results. I have rarely seen a case where a user made a positive change and got a negative result from Google. Patience is a virtue.
-
Thanks everyone for the help! Fortunately we remedied the problem almost immediately, so it only took about a day to get our rankings back. I think the sitemap and fixed robots.txt were the most important factors.
-
I agree: let Google re-index first, then re-evaluate the situation.
-
I hate to say it, but @inhouseninja is right: there's not a lot you can do, and overreacting could be very dangerous. In other words, don't make a ton of changes just to offset this; Google will re-index.
A few minor moves that are safe:
(1) Re-submit your XML sitemap
(2) Build a few new links (authoritative ones, especially)
(3) Hit social media with your new URLs
All three are at least nudges toward re-indexing. They aren't magic bullets, but you need to get Google's attention.
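As a rough sketch of (1): besides resubmitting through the Webmaster Tools UI, Google accepts sitemap submissions via its ping endpoint. Here is a minimal helper that just builds the ping URL (the sitemap location is a hypothetical example; fetch the resulting URL with any HTTP client to submit):

```python
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url):
    # Build the Google sitemap-ping URL; requesting it asks
    # Google to re-fetch the sitemap.
    return "http://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

# Hypothetical sitemap location:
print(sitemap_ping_url("http://www.example.com/sitemap.xml"))
```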
-
Remain calm. You should be just fine; it just takes time for Google to digest the new robots.txt. I would only be concerned if things hadn't changed in 3-4 weeks. Adopt a rule not to freak out at Google until you've given the problem 14 days to resolve. Sometimes Google moves things around, and this is natural.
If you want Google to crawl your site faster, build some links and do some social media. That will encourage Google to speed it up.
-
If this is all that happened, the next crawl should fix it. Just sit tight, and the rankings should bounce back up in a week or so.
-
That does not sound fun at all... So you just changed the server: a complete copy?
My first question would be: other than the server, did anything else change, in the copy or the URLs?
My second question would be: is the other server still up and live on the internet?
Related Questions
-
Moving site to new domain without access to redirect from old to new. How can I do this with as little loss to SERP results as possible?
I've been hired to build a new site for a customer. They were duped by some shady characters at goglupe.com (if you can reach them, tell them they are rats; the phone is disconnected, and the address is a comedy club on Mission in SF). Glupe owns the domain name and would not transfer it or give FTP access prior to dropping off the face of the earth. The customer doesn't want to chase after them with lawyers, so we are moving on: new domain, new site, with much of the same content as the previous site. All I have access to is the old WordPress site. I plan to build the new site, then remove all pages/posts from the old site. Is there anything I can do to salvage the current page 1 ranking? Obviously, the new domain will take some time to get back there. Just hoping to avoid any pitfalls or penalties if I can. If I had complete access, I would follow all the standard guidelines. But I don't. Any thoughts? Thanks! Chris
Intermediate & Advanced SEO | c_estep_tcbguy0
-
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence on what is better to use for pages with thin content that are nonetheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages, which are pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, one has Google's crawling focus on just the important pages that are indexed, and that may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique," "get customer reviews and comments," etc., but the above question is the important one here.
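For readers weighing the two, the mechanics differ in where the directive lives and what Google still sees. A sketch of each option, with hypothetical paths:

```
<!-- Option A: "noindex, follow" meta tag on each thin page.
     Google still crawls the page, but keeps it out of the index
     while continuing to follow the links on it. -->
<meta name="robots" content="noindex, follow">
```

versus:

```
# Option B: robots.txt block. Saves crawl budget, but Google never
# sees the pages (or the links on them), and blocked URLs can still
# appear as URL-only listings.
User-agent: *
Disallow: /products/
```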
Intermediate & Advanced SEO | khi50
-
Weird Google SERPs after New Domain Transfer 301
Hi, I have some very weird results in the SERPs. We did not do a complete 301 of the entire domain, but rather of individual pages. We did the transfer back on the 10th of June, and I was checking to see if there were any results on the old domain for pages that were transferred to the new domain via 301. There were, but... now I have the following occurring in the search results:
Title of Page (links to the old domain!!)
www.oldomain.com › ... › Figures & Sculptures › Tall Sculptures (these last 2 breadcrumbs link to the NEW domain??!!)
Bla bla bla (meta description from the new domain)
I know it's Monday, but this one has got me quite concerned! Any insight appreciated. Am I going nuts?
Intermediate & Advanced SEO | bjs2010
-
Robots.txt: how to exclude sub-directories correctly?
Hello here, I am trying to figure out the correct way to tell search engines to crawl this: http://www.mysite.com/directory/ But not this: http://www.mysite.com/directory/sub-directory/ or this: http://www.mysite.com/directory/sub-directory2/sub-directory/... But since I have thousands of sub-directories with almost infinite combinations, I can't write the definitions in a manageable way: disallow: /directory/sub-directory/ disallow: /directory/sub-directory2/ disallow: /directory/sub-directory/sub-directory/ disallow: /directory/sub-directory2/subdirectory/ etc... I would end up with thousands of definitions to disallow all the possible sub-directory combinations. So, is the following a correct, better, and shorter way to define what I want above: allow: /directory/$ disallow: /directory/* Would the above work? Any thoughts are very welcome! Thank you in advance. Best, Fab.
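Since Python's standard urllib.robotparser does not implement the * and $ wildcard extensions, here is a small sketch of Google's documented matching rules (the longest matching pattern wins; a tie goes to Allow) that can be used to sanity-check the proposed pair of rules. It is an illustration of the semantics, not a replacement for Google's own robots.txt tester:

```python
import re

def rule_matches(pattern, path):
    # Translate robots.txt wildcards into a regex: '*' matches any
    # character sequence, and a trailing '$' anchors the end of the path.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.match(regex, path) is not None

def is_allowed(rules, path):
    # rules: list of ("allow" | "disallow", pattern) tuples.
    # Per Google's documented semantics, the longest matching pattern
    # wins; if lengths tie, the allow rule wins.
    best_len, allowed = -1, True  # no matching rule means allowed
    for verdict, pattern in rules:
        if rule_matches(pattern, path):
            if len(pattern) > best_len or (
                len(pattern) == best_len and verdict == "allow"
            ):
                best_len, allowed = len(pattern), verdict == "allow"
    return allowed

rules = [("allow", "/directory/$"), ("disallow", "/directory/*")]
print(is_allowed(rules, "/directory/"))                # the directory itself
print(is_allowed(rules, "/directory/sub-directory/"))  # a nested sub-directory
```

Under these semantics the proposed pair behaves as intended for the URLs above, though confirming in Webmaster Tools' robots.txt tester is still worthwhile.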
Intermediate & Advanced SEO | fablau1
-
Where are we going wrong?
I've been going over all the top ranking factors, running my site through Moz analytics and page graders, and researching the heck out of this, and I'm trying to figure out where we're going wrong. The site is www.imageworkscreative.com, and we're being outranked by newer sites. We need to rank for terms like web design va, custom web design, web design firm, etc. We publish blog updates on average once a week and promote them via social media and a few syndication services. We were using a link builder who wasn't following our instructions regarding competitor backlinks; I'm pretty sure her work ended up hurting more than it helped. I'd like to hire a consultant or high-quality link builder to help get things going, for us and for our clients. Any recommendations would be much appreciated, as well as any advice in terms of overall issues that need to be resolved. Thanks!
Intermediate & Advanced SEO | ScottImageWorks0
-
If I disallow an unfriendly URL via robots.txt, will its friendly counterpart still be indexed?
Our not-so-lovely CMS loves to render pages regardless of the URL structure, just as long as the page name itself is correct. For example, it will render the following as the same page: example.com/123.html example.com/dumb/123.html example.com/really/dumb/duplicative/URL/123.html To help combat this, we are creating mod_rewrite rules with friendly URLs, so all of the above would simply render as example.com/123. I understand robots.txt respects the wildcard (*), so I was considering adding this to our robots.txt: Disallow: */123.html If I move forward, will this block all of the potential permutations of the directories preceding 123.html, yet not block our friendly example.com/123? Oh, and yes, we do use the canonical tag religiously; we're just mucking with the robots.txt as an added safety net.
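One hedged way to write it, for crawlers that honor the wildcard extensions, is with the rules starting with a slash as the robots.txt convention expects (the $ anchors are optional but keep the match from spilling onto longer filenames). This is a sketch using the hypothetical paths from the question:

```
User-agent: *
# The unfriendly URL at the root
Disallow: /123.html$
# The same page nested under any directory depth
Disallow: /*/123.html$
# The rewritten /123 has no ".html", so neither pattern touches it
```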
Intermediate & Advanced SEO | mrwestern0
-
Block an entire subdomain with robots.txt?
Is it possible to block an entire subdomain with robots.txt? I write for a blog that has its root domain as well as a subdomain pointing to the exact same IP. Getting rid of the subdomain is not an option, so I'd like to explore other options to avoid duplicate content. Any ideas?
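robots.txt is fetched per host, so the subdomain can serve its own file even when both hosts share a docroot. One common approach, sketched here assuming Apache with mod_rewrite and hypothetical hostnames:

```
# .htaccess in the shared docroot: serve a different robots file
# when the request arrives on the subdomain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^blog\.example\.com$ [NC]
RewriteRule ^robots\.txt$ robots-blog.txt [L]
```

where robots-blog.txt contains the blocking rules:

```
User-agent: *
Disallow: /
```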
Intermediate & Advanced SEO | kylesuss12
-
Ranking problems
Hi All, My site has been live for a year now. I'm getting tons of traffic (Alexa 54k) and business is good. The only problem is that I have 0 PageRank... I have checked the site's structure again and again to see if there is anything wrong with it, but everything seems to be OK. Google just added sitelinks for the site (megamoneygames), which looks very nice. For example, none of my competitors have sitelinks, but they all have a PageRank of 4 while I have 0. In addition, for some reason the site's age (in days) shows 0, although it has been live for a year now... Do you have any idea what is going on? Do I have errors on the site? Thanks
Intermediate & Advanced SEO | Pariplay0