Google still listing old domain
-
Hi
We moved to a new domain back in March 2014, redirected most pages with a 301, and submitted a change of address request through Google Webmaster Tools. A couple of pages were left as 302 redirects because they had rubbish links pointing to them and we had previously had a penalty.
Google was still indexing the old domain and our rankings hadn't recovered. Last month we took away the 302 redirects and just did a blanket 301 from the old domain to the new, on the thinking that, as the penalty had been lifted from the old domain, there was no harm in sending everything to the new domain.
Again, we submitted the change of address in Webmaster Tools as the option was available to us, but it's been a couple of weeks now and the old domain is still indexed.
Am I missing something? I realise that the rankings may not have recovered partly due to the disavowing/disregarding of several links, but I am concerned this may be contributing to the problem.
-
Hi
I now have a robots.txt for the old site, and I created a sitemap by replacing the current domain with the old one and uploaded it.
Weirdly, when I search for the non-www version of the old domain, the number of pages indexed has increased!
According to WMT, the crawl was postponed because robots.txt was inaccessible; however, I've checked that it returns status 200, and the robots.txt Tester says it's successful even though it never updates the timestamp.
-
Hi Marie
Many thanks for your response,
I've just looked in Webmaster Tools at the old domain and the option to change domains is there again, but I also noticed, when looking at the crawl errors, a message along the lines of "crawl postponed as robots.txt was inaccessible".
At the moment it's just a blanket redirect at IIS level so following your advice I'll re-establish the old site's robots.txt and a sitemap and see if Google crawls the 301's to the new domain.
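For reference, the blanket redirect at IIS level is essentially a single rewrite rule. A sketch, assuming the URL Rewrite module is installed; "www.new-domain.example" is a placeholder for the actual new domain:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Permanently redirect every request on the old host
             to the same path on the new domain -->
        <rule name="Old domain to new" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^(www\.)?fhr-net\.co\.uk$" />
          </conditions>
          <action type="Redirect" url="http://www.new-domain.example/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Because the condition matches on the host header, requests for robots.txt and the sitemap on the old domain get swept up in the redirect too, which is why they need to be carved out if Google is to fetch them there.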
In some ways I'm glad I haven't missed anything, but it would be nice if just the new domain were indexed after all this time!
Thanks again
-
This is odd. The pages all seem to redirect from the old site to the new, so why is Google still indexing those old pages?
I can't see the robots.txt on the old site as it redirects, but is it possible that the robots.txt on fhr-net.co.uk is blocking Google? If this is the case, then Google probably wouldn't be able to see the old site and recognize the redirects.
It may also help to add a sitemap for the old site and also to ask Google to fetch and render the old site's pages and then submit them to the index. This should cause the 301's to be seen and processed by Google.
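To spot-check that the old URLs really do return clean 301s straight to the new domain (no 302s, no broken final page), a small script helps. A sketch using only the standard library; the hop data and the "www.new-domain.example" host are hypothetical — in practice you would collect the chain with `curl -sIL` or `urllib`:

```python
from urllib.parse import urlparse

def check_chain(hops, new_host):
    """Validate a redirect chain.

    hops: list of (status_code, url) tuples in the order followed,
    ending with the final response. Returns "ok" or a description
    of the first problem found.
    """
    *redirects, (final_status, final_url) = hops
    if final_status != 200:
        return "final URL did not return 200"
    if urlparse(final_url).netloc != new_host:
        return "chain does not end on the new domain"
    bad = [url for status, url in redirects if status != 301]
    if bad:
        return f"non-301 hop(s): {bad}"
    return "ok"

# Example: one permanent hop from the old domain to the new one.
hops = [
    (301, "http://www.fhr-net.co.uk/some-page"),
    (200, "http://www.new-domain.example/some-page"),
]
print(check_chain(hops, "www.new-domain.example"))  # ok
```

Running this over the old sitemap's URL list would quickly surface any leftover 302s or chains that never reach the new domain.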
-
Even after all this time, there are still over 700 pages indexed on our old domain, even though we have submitted the change of address twice in Webmaster Tools, the second time being about six months ago, if not longer.
old domain is www.fhr-net.co.uk
Any advice would be appreciated
-
No worries,
I appreciate you taking the time to answer my question
-
I think that I'm so used to answering questions about penalized sites that I assumed that you had moved domains because of a penalty. My apologies!
Sounds like you've got the right idea.
-
Thanks for responses,
One week on since submitting the second change of address in GWT, we've seen the number of pages indexed for the old domain drop from over 1,300 to around 700 this week, which is something.
Regarding the redirect debate, it's an interesting read; thanks for sending it. Isn't the situation the same as for a site that didn't have a penalty, in that you should be monitoring your backlink profile and removing or disavowing links outside the guidelines, whilst carrying out activities that will naturally build decent links and thereby redress the balance?
-
This doesn't answer your question, but I just wanted to point out that the 301 or 302 redirects are not a good idea. Even if you got the penalty lifted, there can still be unnatural links there that can harm you in the eyes of the Penguin algorithm. A 301 will redirect those bad links to the new site. A 302, if left in place long enough, will do the same.
Here's an article I wrote today that goes into greater detail:
-
Oh, it may be that it's the other way around with canonical URLs. At least according to Google (here: https://support.google.com/webmasters/answer/6033086?hl=en):
- _Each destination URL should have a self-referencing rel="canonical" meta tag._
-
Hmm... certainly someone with more experience than me would have a more elegant solution, but I would still try to do this by establishing the canonical URL, because you don't want to delist: https://support.google.com/webmasters/answer/139066#6
If you can configure your server, you can use rel="canonical" HTTP headers to indicate the canonical URL for HTML documents and other files such as PDFs. Say your site makes the same PDF available via different URLs (for example, for tracking purposes), like this:
_http://www.example.com/downloads/white-paper.pdf_
_http://www.example.com/downloads/partner-1/white-paper.pdf_
_http://www.example.com/downloads/partner-2/white-paper.pdf_
_http://www.example.com/downloads/partner-3/white-paper.pdf_
In this case, you can use a rel="canonical" HTTP header to specify to Google the canonical URL for the PDF file, as follows:
Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"
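On Apache, for instance, that header could be emitted with a snippet like this. A sketch, not the only way to do it: it assumes mod_headers is enabled, and the URL is Google's example from the quote above:

```apache
# Attach a canonical Link header to every response for this PDF
<Files "white-paper.pdf">
  Header add Link "<http://www.example.com/downloads/white-paper.pdf>; rel=\"canonical\""
</Files>
```

On IIS the equivalent would be a custom HTTP response header or an outbound rewrite rule.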
-
Hi there
The old pages don't exist any more to add the canonical to; they're 301s from the old domain to the new, but over 1,000 pages show up for site:www.fhr-net.co.uk
-
Got it, you must have tried adding the canonical URL meta tags already, right? If not, check out: http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
"...in late 2009, Google announced support for cross-domain use of rel=canonical. This is typically for syndicated content, when you’re concerned about duplication and only want one version of the content to be eligible for ranking...
...First off, Google may choose to ignore cross-domain use of rel=canonical if the pages seem too different or it appears manipulative. The ideal use of cross-domain rel=canonical would be a situation where multiple sites owned by the same entity share content, and that content is useful to the users of each individual site. In that case, you probably wouldn’t want to use 301-redirects (it could confuse users and harm the individual brands), but you may want to avoid duplicate content issues and control which property Google displays in search results. I would not typically use rel=canonical cross-domain just to consolidate PageRank..."
-
Thanks for your reply,
It's not that I want to de-list the old domain, as I would rather people reach the site via that domain than not at all. My concern is that, for whatever reason, the transfer hasn't completed after such a long time, and we're not, for instance, getting the full benefit of links to the old domain being passed to the new one.
-
If your goal is to delist the old domain I am going to copy the answer I just gave at http://moz.com/community/q/how-to-exclude-all-pages-on-a-subdomain-for-search, simply because it's clear and works quickly (48h) in my experience.
This is the authoritative way that Google recommends at https://support.google.com/webmasters/answer/1663419?hl=en&rd=1:
- Add a robots.txt file for your domain (usually via FTP), and add a "noindex" meta tag to every page as well.
- Add your subdomain as a separate site in Google Webmaster Tools
- On the Webmaster Tools home page, click the site you want.
- On the Dashboard, click Google Index on the left-hand menu.
- Click Remove URLs.
- Click New removal request.
- Type the URL of the page you want removed from search results (not the Google search results URL or cached page URL), and then click Continue. The URL is case-sensitive—use exactly the same characters and capitalization that the site uses.
- Click Yes, remove this page.
- Click Submit Request.
To exclude the entire domain, simply enter the domain URL (e.g. http://domain.com) at the 7th step.
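For the first step, a minimal robots.txt that blocks all crawling is just the two lines below. One caveat worth knowing: if robots.txt blocks a page, Google can't crawl it to see a noindex tag, so the two directives work against each other if combined:

```
User-agent: *
Disallow: /
```

The removal-request step is what actually drops the URLs from results quickly; robots.txt alone only stops future crawling.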