Scraped content ranking above the original source content in Google.
-
I need insights on how "scraped" content (an exact copy-pasted version) ranks above the original content in Google.
Four original, in-depth articles published by my client (an online publisher) were republished by another company (which happens to be briefly mentioned in all four of those articles). We reckon the articles were republished at least a day or two after the originals (the exact gap is not known). We find that all four of the "copied" articles rank at the top of Google search results, whereas the original content, i.e. my client's website, does not show up even in the top 50 or 60 results.
We have looked at numerous factors such as Domain Authority, Page Authority, inbound links to both the original source and the URLs of the copied pages, social metrics, etc. All of these metrics, as shown by tools like Moz, are better for the source website than for the re-publisher. We have also compared results in different geographies to see if any geographical bias was affecting results, since our client's website is hosted in the UK and the re-publisher is from another country, but we found the same results. We are also not aware of any manual actions taken against our client's website (at least based on messages in Search Console).
Are there any other factors that could explain this serious anomaly, which seems to be a disincentive for anybody creating highly relevant original content?
We recognize that our client has the option to submit a ‘Scraper Content’ form to Google--- but we are less keen to go down that route and more keen to understand why this problem could arise in the first place.
Please suggest.
-
**Everett Sizemore - Director, R&D and Special Projects at Inflow:** Use the Google Scraper Report form.
Thanks. I didn't know about this.
If that doesn't work, submit a DMCA complaint to Google.
This does work. We submit dozens of DMCAs to Google every month. We also send notices to sites that have used our content but might not understand copyright infringement.
**Everett Sizemore - Director, R&D and Special Projects at Inflow:** Until Manoj gives us the URLs so we can look into it ourselves, I'd have to say this is the best answer: Google sucks sometimes. Use the Google Scraper Report form. If that doesn't work, submit a DMCA complaint to Google.
-
Oh, that is a very good point. This is very bad for people who have clients.
-
Thanks, EGOL.
The other big challenge is to get clients to also buy into the idea that it is Google's problem!
-
**In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.**
Yes, this happens. It states the problem and Google's inabilities more strongly than I have stated it above.
**Any ideas/potential solutions that you could help with will be much appreciated.**
I have this identical problem myself. Actually, it's Google's problem. They have crap on their shoes but say that they can't smell it.
-
Hi,
Thanks for the response. I'd understand if the original source were indeed new, or not a 'powerful' or established site in the niche that it serves.
In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.
Any ideas/ potential solutions that you could help with ---- will be much appreciated.
Thanks
-
Scraped content frequently outranks the original source, especially when the original source is a new site or a site that is not powerful.
Google says that they are good at attributing content to the original publisher. They are delusional. Lots of SEOs believe Google. I'll not comment on that.
If scraped content were not making money for people, this practice would have died a long time ago. I submit that as evidence. Scrapers know what Google does not know (or refuses to admit) and what many SEOs refuse to believe.
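For what it's worth, a cheap way to catch scraped copies early is to search Google for an exact sentence from each new article. Here is a minimal Python sketch that only builds the quoted-phrase search URLs; the `fingerprint` heuristic is just an illustration, and actually running the searches automatically would need a search API and credentials, which this skips:

```python
# Sketch: build exact-match ("quoted phrase") Google queries from a
# distinctive sentence in each article, so scraped copies show up in
# the results for you to report via Scraper Report or DMCA.
from urllib.parse import quote_plus

def fingerprint(article_text, min_len=60):
    """Pick the first reasonably long sentence as a search fingerprint."""
    for sentence in article_text.split(". "):
        if len(sentence) >= min_len:
            return sentence.strip()
    return article_text[:min_len]  # fallback for very short text

def exact_match_query(article_text):
    """Quoted-phrase search URL for the article's fingerprint sentence."""
    phrase = '"%s"' % fingerprint(article_text)
    return "https://www.google.com/search?q=" + quote_plus(phrase)

text = ("We have gathered twelve years of pricing data across forty "
        "regional markets for this report. Contact us for licensing.")
print(exact_match_query(text))
```

Run one query per fresh article; any result ranking above your own URL is a candidate for a notice.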
-
No, John - we don't use the 'Fetch as Googlebot' for every post. I am intrigued by the possibility you suggest.
Yes, there are lots of unknowns, and certain results seem inexplicable, as this particular instance does. We have looked at and evaluated most of the obvious things to consider, including the likelihood of the re-publisher having gotten more social traction. However, the actual results are the opposite of what we'd expect.
I'm hoping that you/ some of the others in this forum could shed some light on any other factors that could be influencing the results.
Thanks.
-
Thanks for the link, Umar.
Yes, we did fetch the cached versions of both pages, but that doesn't indicate when the respective pages were first indexed; it only shows when the pages were last cached.
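Side note: if the cache page can't prove who published first, the pages themselves sometimes can. A rough sketch that pulls an Open Graph `article:published_time` meta tag out of fetched HTML, assuming both CMSes emit one (many don't); a timestamp like this is something you could cite as evidence in a DMCA notice:

```python
# Sketch: extract a publish timestamp from an article's HTML.
# Assumes an Open Graph article:published_time meta tag is present,
# with the property attribute appearing before content.
import re
from datetime import datetime

META_RE = re.compile(
    r'<meta[^>]+property=["\']article:published_time["\']'
    r'[^>]+content=["\']([^"\']+)["\']',
    re.IGNORECASE)

def published_time(html):
    """Return the declared publish time as a datetime, or None."""
    m = META_RE.search(html)
    return datetime.fromisoformat(m.group(1)) if m else None

sample = ('<head><meta property="article:published_time" '
          'content="2014-06-10T09:30:00" /></head>')
print(published_time(sample))  # 2014-06-10 09:30:00
```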
-
No, Martijn - the articles have excerpts from representatives of the re-publisher; there are no links to the re-publisher's website.
-
When you say you're mentioning the re-publisher briefly in the posts themselves, does that mean you're also linking to them?
-
Hey Manoj,
That's indeed very weird. There can be multiple reasons for this. For instance, did you try to fetch the cached versions of both sites to check when they got indexed? Online publication sites usually have a fast indexing rate, and it's possible that your client shared the articles on social media before they got indexed and the other site lifted them from there.
Do check out this brilliant Moz post; I'm sure it will give you an idea of what caused this.
Hope this helps!
-
Do you use Fetch as Google in WMT for every post?
If your competitors monitor the site, harvest the content, then publish and use Fetch as Google, that could explain why Google ranks them first, i.e. Google would likely have indexed their content first.
That said, there are so many unknown factors at play, e.g. how does social stack up? Are they using Google+, etc.?
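On the indexing-speed point: rather than manually using Fetch as Google on every post, a publish hook can notify Google the moment an article goes live by pinging it with the sitemap URL (an endpoint Google has supported for sitemap notifications). A minimal sketch that just builds the ping URL; wiring it into the CMS's post-publish event is left out:

```python
# Sketch: build the Google sitemap-ping URL to hit right after
# publishing, so your copy stands a chance of being indexed before
# a scraper's. Fire it with any HTTP client from a publish hook.
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url):
    """URL to GET in order to notify Google the sitemap changed."""
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

print(sitemap_ping_url("https://example.co.uk/sitemap.xml"))
# https://www.google.com/ping?sitemap=https%3A%2F%2Fexample.co.uk%2Fsitemap.xml
```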