How to move domain content with a Penguin penalty?
-
Hey guys,
I've come to the conclusion the sheer amount of crap links a site of ours has is un repairable. We own a .net version with the same brand name so I'm planning to move our ecommerce store over with all its content.
I can move the site in one swoop, but I believe Google will see it as duplicate content if we don't let the old site de-index first. I would simply take it down for a month, but we still get some orders now and then.
Anyone have any ideas? I was thinking of leaving an image up on each page, linked to the new site with noindex/nofollow, that explains the site is being moved, etc.
-
If you're trying to completely remove the domain, Anthony's suggestion about using Google's URL removal tool is the quickest way to go.
You'll first want to block crawler access in your robots.txt, then choose the option to "remove directory" in webmaster tools. http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663427
(Note: robots.txt by itself DOES NOT remove your pages from Google's index; it only blocks crawling. Using robots.txt alone, your pages are likely to stay in the index for quite some time.)
That takes care of Google. Bing's process isn't quite as smooth. I'd also throw a meta robots "NOINDEX, FOLLOW" tag on all your pages - this will help with other search engines as well.
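For reference, the meta robots tag described above would sit in the `<head>` of each page on the old site. A minimal sketch:

```html
<!-- Placed in the <head> of every page on the old site. -->
<!-- NOINDEX asks engines to drop the page from their index; -->
<!-- FOLLOW still lets them crawl through the page's links. -->
<meta name="robots" content="noindex, follow">
```

Note the tension with the robots.txt block above: a crawler that is disallowed in robots.txt can't fetch the page to see this tag, which is why the Webmaster Tools removal request does the heavy lifting for Google.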
You may want to include a message on your old site instructing visitors to update their bookmarks and links - this may help ease the transition.
Keep in mind, this will sever all links from your old domain, and none of your built-up link equity will transfer over. Obviously, you've given this some thought in order to make such a move.
Hope this helps. Best of luck!
-
Per Google Webmaster Tools:
Removing an entire directory or site
In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove the http://www.example.com/secret/ directory, your robots.txt file would need to include:
User-agent: *
Disallow: /secret/
It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL. If you enter the root of your site as the URL you want to remove, you'll be asked to confirm that you want to remove the entire site. If you enter a subdirectory, select the "Remove directory" option from the drop-down menu.
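As a quick sanity check before filing the removal request, you can verify a site-wide block with Python's standard-library robots.txt parser. A sketch, with example.com standing in for the old domain:

```python
from urllib.robotparser import RobotFileParser

# Rules for a whole-site block. In practice you'd call set_url()
# and read() to fetch the live robots.txt from the old domain
# instead of parsing the lines inline.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# With "Disallow: /", every path is off-limits to every crawler.
for url in ("http://www.example.com/",
            "http://www.example.com/secret/page.html"):
    print(url, rp.can_fetch("Googlebot", url))  # prints False for both
```

The same parse-and-check approach works for the directory-level example above: swap in `Disallow: /secret/` and confirm that `/secret/` URLs are blocked while the rest of the site is not.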
-
Thanks for the input. So you would put up the new site immediately? Or would you wait a certain amount of time after you take down the old site and put up the new robots.txt file?
I want to make sure Google doesn't see the new site as dup content.
-
I would suggest a noindex, nofollow meta tag on the old site's pages (robots.txt alone can't apply noindex). Take down the entire site except for a blank index page, with text stating your site has moved to the new location, plus the robots.txt file. I would not put a direct link to the new site.
Then bring up the new site. On the next crawl, Google should pick up the noindex, nofollow from the old site and start indexing the new site. I think that would be the best option so that you don't miss out on any potential business or orders.
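A minimal version of that blank index page might look like this (the meta tag mirrors the advice above; the domain name and wording are just placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Ask search engines to drop this page from their index
       and not follow anything on it. -->
  <meta name="robots" content="noindex, nofollow">
  <title>We've Moved</title>
</head>
<body>
  <!-- Plain text only: deliberately no hyperlink to the new domain. -->
  <p>Our store has moved to our .net domain. Please update your bookmarks.</p>
</body>
</html>
```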