All Thin Content removed and duplicate content replaced. But still no success?
-
Good morning,
Over the last three months I have gone about replacing and removing all the duplicate content (1,000+ pages) from our site top4office.co.uk.
It's now been just under two months since we made all the changes and we still aren't showing any improvement in the SERPs.
Can anyone tell me why we aren't making any progress or spot something we are not doing correctly?
Another problem is that although we have removed 3,000+ pages using the removal tool, searching site:top4office.co.uk still shows 2,800 pages indexed (before, there were 3,500).
Look forward to your responses!
-
Thanks for your responses. We are talking about over 3,000 pages of duplicate content, which we have now removed and replaced with relevant, unique, and engaging content.
We completed all the content changes on 06/06/2013. I'm thinking of leaving it for a while and seeing whether our rank improves within the next month or so. We may consider moving the site to another domain, since it features lots of high-quality content.
Thoughts?
-
I've had two sites with Panda problems. One had two copies of hundreds of pages in both .html and .pdf format (to control printing format). The other had a few hundred pages of .edu press releases republished verbatim at their request or with their permission.
Both of these sites had site-wide drops on Panda dates.
On one site we used .htaccess to apply rel=canonical to the .pdf documents. On the site with the .edu press releases we used noindex,follow.
Both sites recovered to former rankings a few weeks after the changes were made.
If you had a genuine Panda problem, and only a Panda problem, then a couple of months might be about the amount of time needed to see a recovery.
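The .htaccess approach above can be sketched roughly like this (a hypothetical mod_headers snippet; the file names and URL are placeholders, not the actual site's):

```apache
# Point the printable PDF copy at its HTML original via an HTTP Link header,
# since you can't put a <link rel="canonical"> tag inside a PDF.
<Files "printable-guide.pdf">
  Header add Link "<https://www.example.com/guide.html>; rel=\"canonical\""
</Files>

# Alternative used for the republished press releases: keep the pages
# crawlable but out of the index with an X-Robots-Tag response header.
<FilesMatch "press-release-.*\.html$">
  Header set X-Robots-Tag "noindex, follow"
</FilesMatch>
```

On HTML pages, a `<meta name="robots" content="noindex,follow">` tag in the head achieves the same thing as the header.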
-
That's hard to say. A recent history and link profile like yours won't give your site the authority it needs for index updates at the frequency you would like. It's also possible that a hole has been dug that you cannot pop out of simply by reversing the actions of your past SEO.
You really need a thorough survey of your site, its history, and its analytics to determine the extent of the current problem and the best path to take to get out of it. Absent that, shed what bad backlinks you can and develop a strategy to build visitor engagement with your brand.
-
The site has not received a manual penalty from Google.
However, traffic and generic keyword rankings fell when the previous developer decided to copy all of the products directly from our other site, top4office.com.
The site was ranking pretty well in the past. Do you have any kind of ETA for when the updates will take effect?
-
Hi Apogee
It can certainly take several months for pages to drop from the index, so if you've removed the pages and requested URL removal in GWT, they'll eventually fall out of the index.
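As a rough sanity check while you wait (just a sketch, not a Moz or Google tool), you can classify how each removed URL responds: a 404/410, or a noindex signal, will naturally fall out of the index, while a 200 with no noindex signal may linger or come back.

```python
import urllib.request
import urllib.error

def will_drop_from_index(status, robots_header=""):
    """Rough heuristic: will a URL with this response eventually
    fall out of Google's index on its own?"""
    if status in (404, 410):  # gone / permanently gone
        return True
    # A 200 is fine only if it carries a noindex directive
    return "noindex" in robots_header.lower()

def check(url):
    """Fetch one URL (HEAD request) and return its status code plus
    any X-Robots-Tag header. Requires network access."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.headers.get("X-Robots-Tag", "") or ""
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get("X-Robots-Tag", "") or ""
```

Loop `check()` over a sample of the URLs you removed; any that come back `200` with no noindex header (or meta tag) are candidates to stay indexed.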
Was the site penalized, and is that why you removed/replaced the dupe content? Meaning, were you ranking well and then all of a sudden your rankings tumbled, or are you just now working to build up your rankings? This is an important distinction, because there are few examples of sites that received a Panda penalty (thin/duplicate content) coming back to life.
If you don't think you've been penalized and you're just working to optimize your site and pull it up in the rankings for the first time, consider how unique your content is and how you're communicating your unique value proposition to the visitor. Keep focusing on those things.
Also, your backlink profile looks a bit seedy--in fact, your problem could well be Penguin-related. If you were penalized and it was a Penguin penalty, you should be looking to clean up some of those links and working to build new ones from more thematically relevant sites.
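If the cleanup does come down to disavowing, Google's disavow file is just plain text: one URL or `domain:` entry per line, with `#` for comments. A hedged example (the domains below are placeholders, not a diagnosis of your actual links):

```text
# disavow.txt -- example entries only
# Contacted owner 06/10/2013, no reply; disavowing the whole domain
domain:spammy-directory.example

# One bad page rather than the whole site
http://low-quality-blog.example/cheap-links-page.html
```

Try removal requests first; the disavow tool is a last resort for links you can't get taken down.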
-
Removing duplicate content won't necessarily increase your search positioning. It will, however, give your site the foundation needed to start a (relevant, natural, and organic) link-building campaign, which, if done correctly, should improve your SERP positions.
You should see content as part of the foundation. Good-quality, unique content is usually needed in order to be rankable, but it doesn't necessarily make you rank.
Having good quality unique content will also minimise the chances of being hit by an algo update.