Google Indexing - what did I miss??
-
Hello, all SEOers~
I relaunched my website about 3 weeks ago, and in order to preserve as much SEO value as possible I set up 301 redirects, an XML sitemap, and so on to minimize any losses.
But about a week after the relaunch, my team somehow made a mistake and removed all the 301 redirects. Now my old URLs are all gone from Google's index and my new site isn't getting indexed at all. My traffic and rankings are gone too....OMG
I checked Google Webmaster Tools, but it didn't show any special message other than Googlebot finding an increase in 404 errors, which is obvious.
I also used "Fetch as Googlebot" in Webmaster Tools to improve the chances of getting indexed, but it doesn't seem to be doing much.
I am re-doing the 301 redirects today, but I'm not sure it will help at this point.
Any advice or opinions?? Thanks in advance~!
-
Thanks for your kind advice.
I will try to follow your suggestions~ thanks
-
Hi there,
Complete your 301 redirects, but do it on a one-to-one basis - one old URL to one new URL. DO NOT redirect all your URLs to your home page. (After you've done that, verify that each one really is a 301 redirect and not some other type, like a 302.)
a) The most beneficial approach is to 301 redirect as much as possible in a structural way: the old categories to the new categories, and so on. Don't worry, there's no limit on how many 301 redirects you can use; just don't chain them through intermediary redirects, like: old URL -> 301 -> intermediary URL -> 301 -> final active URL. Go directly from the old URL to the new, final, active URL in one step if possible (see the redirect sketch after this list).
b) Check your Webmaster Tools account for old sitemaps. If there are any, delete them and submit new ones that contain only the new URLs (a minimal example is below).
c) Do the same for the robots.txt file. (If you don't have a robots.txt file, create one and place it in the root of your domain, e.g. www.example.com/robots.txt - there's a sample below.)
d) If possible, use all your "Fetch as Googlebot" instances and submit those URLs for crawling, but spend them on the main node pages of your website (e.g. main categories). Don't waste this feature on final product pages - Googlebot will go link by link from the categories and re-discover all your URLs on its own.
e) Be patient; the PageRank and the old traffic flow won't return overnight. It can take up to 3 months for Googlebot to re-discover and re-index all the pages of your website (I know that's a long time, but it usually happens a lot sooner).
f) Keep a close eye on your Webmaster Tools account and make sure you solve any problems that appear in due time.
g) Scan your entire new website with a crawler to make sure you don't have broken links - it's important. If you find any, fix them immediately (a rough do-it-yourself checker is sketched below).
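To illustrate a), here's what a one-to-one redirect map might look like in an Apache .htaccess file - the paths here are made-up placeholders, and nginx, IIS, etc. have their own equivalent syntax:

    # Hypothetical one-to-one 301 redirects: each old URL points
    # straight at its final new URL, with no intermediate hops.
    Redirect 301 /old-category/          /new-category/
    Redirect 301 /old-category/widget-1  /new-category/widget-1
    Redirect 301 /old-about.html         /about/

You can confirm each one returns a 301 (not a 302) by checking the response headers with curl -I or in your browser's developer tools.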
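For b), the new sitemap is just an XML file listing only the live, final URLs - something along these lines (example.com and the paths are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/new-category/</loc>
      </url>
      <url>
        <loc>https://www.example.com/new-category/widget-1</loc>
      </url>
    </urlset>

Make sure none of the old URLs sneak back in, then resubmit it in Webmaster Tools.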
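For c), a minimal robots.txt that blocks nothing and points crawlers at the new sitemap can be as simple as this (the URL is a placeholder):

    User-agent: *
    Disallow:

    Sitemap: https://www.example.com/sitemap.xml

An empty Disallow line means nothing is blocked - double-check you aren't accidentally disallowing the new sections of the site.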
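For g), any crawler tool will do, but here's a rough do-it-yourself sketch in Python (assuming the third-party "requests" package, and that your new sitemap lists every page; the sitemap URL is a placeholder):

    # Minimal broken-link check against the URLs in an XML sitemap.
    import xml.etree.ElementTree as ET
    import requests

    SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def iter_sitemap_urls(sitemap_url):
        """Yield every <loc> URL from a standard XML sitemap."""
        root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
        for loc in root.findall(".//sm:loc", NS):
            yield loc.text.strip()

    for url in iter_sitemap_urls(SITEMAP_URL):
        # allow_redirects=False so a 301/302 is reported, not silently followed
        status = requests.head(url, allow_redirects=False, timeout=10).status_code
        if status >= 400:
            print("BROKEN", status, url)
        elif status in (301, 302):
            print("REDIRECT", status, url)

This only checks pages you already know about; a full crawler (Xenu, Screaming Frog, etc.) will also catch broken internal links inside the pages themselves.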
I hope it helps.