Scraped content ranking above the original source content in Google.
-
I need insights on how "scraped" content (an exact copy-pasted version) can rank above the original content in Google.
Four original, in-depth articles published by my client (an online publisher) were republished by another company (which happens to be briefly mentioned in all four of those articles). We reckon the articles were republished at least a day or two after the originals were published (the exact gap is not known). All four of the copied articles rank at the top of Google search results, whereas the original content on my client's website does not show up even in the top 50 or 60 results.
We have looked at numerous factors such as Domain Authority, Page Authority, inbound links to both the original source and to the URLs of the copied pages, social metrics, etc. All of the metrics, as shown by tools like Moz, are better for the source website than for the re-publisher. We have also compared results in different geographies to see whether any geographical bias was affecting results, since our client's website is hosted in the UK and the re-publisher is from another country, but we found the same results. We are also not aware of any manual actions taken against our client's website (at least based on messages in Search Console).
Are there any other factors that could explain this serious anomaly? It seems to be a disincentive for anybody creating highly relevant, original content.
We recognize that our client has the option to submit a 'Scraper Content' report to Google, but we are less keen to go down that route and more keen to understand why this problem could arise in the first place.
Please suggest.
-
**Everett Sizemore - Director, R&D and Special Projects at Inflow: **Use the Google Scraper Report form.
Thanks. I didn't know about this.
If that doesn't work, submit a DMCA complaint to Google.
This does work. We submit dozens of DMCAs to Google every month. We also send notices to sites that have used our content but might not understand copyright infringement.
**Everett Sizemore - Director, R&D and Special Projects at Inflow: **Until Manoj gives us the URLs so we can look into it ourselves, I'd have to say this is the best answer: Google sucks sometimes. Use the Google Scraper Report form. If that doesn't work, submit a DMCA complaint to Google.
-
Oh, that is a very good point. This is very bad for people who have clients.
-
Thanks, EGOL.
The other big challenge is to get clients to also buy into the idea that it is Google's problem!
-
**In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.**
Yes, this happens. That states the problem, and Google's shortcomings, more strongly than I stated it above.
**Any ideas or potential solutions that you could help with will be much appreciated.**
I have this identical problem myself. Actually, it's Google's problem. They have crap on their shoes but say that they can't smell it.
-
Hi,
Thanks for the response. I'd understand if the original source were new, or not a powerful or established site in the niche it serves.
In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.
Any ideas or potential solutions that you could help with will be much appreciated.
Thanks
-
Scraped content frequently outranks the original source, especially when the original source is a new site or one that is not powerful.
Google says that they are good at attributing content to the original publisher. They are delusional. Lots of SEOs believe Google. I'll not comment on that.
If scraped content were not making money for people, this practice would have died a long time ago. I submit that as evidence. Scrapers know what Google does not (or refuses to admit) and what many SEOs refuse to believe.
-
No, John - we don't use the 'Fetch as Googlebot' for every post. I am intrigued by the possibility you suggest.
Yes, there are lots of unknowns, and certain results seem inexplicable, as this particular instance does. We have looked at and evaluated most of the obvious factors, including the likelihood of the re-publisher having gained more social traction. However, the actual results are the opposite of what we'd expect.
I'm hoping that you or some of the others in this forum could shed some light on any other factors that could be influencing the results.
Thanks.
-
Thanks for the link, Umar.
Yes, we did fetch the cached versions of both pages, but that doesn't indicate when the respective pages were first indexed; it only shows when the pages were last cached.
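For what it's worth, if you are checking the cache for many URLs, the "as it appeared on" timestamp in Google's cache banner can be pulled out programmatically. This is only a minimal sketch, assuming the banner wording Google uses at the time of writing; the regex would need adjusting if the banner text changes.

```python
import re
from datetime import datetime

def parse_cache_timestamp(banner_text):
    """Extract the 'as it appeared on' date from a Google cache banner.

    Note: this only tells you when the page was last cached,
    not when it was first indexed.
    """
    match = re.search(
        r"as it appeared on (\d{1,2} \w{3} \d{4} \d{2}:\d{2}:\d{2} GMT)",
        banner_text,
    )
    if not match:
        return None
    return datetime.strptime(match.group(1), "%d %b %Y %H:%M:%S GMT")

# Example banner text (hypothetical URL, format as seen on cached pages):
banner = ("This is Google's cache of http://example.com/article. "
          "It is a snapshot of the page as it appeared on "
          "12 Mar 2015 10:23:45 GMT.")
print(parse_cache_timestamp(banner))  # 2015-03-12 10:23:45
```

Comparing these timestamps across the original and the scraper still only bounds the last-cached dates, which is exactly the limitation noted above.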
-
No, Martijn, the articles include excerpts from representatives of the re-publisher; there are no links to the re-publisher's website.
-
When you say you mention the re-publisher briefly in the posts themselves, does that mean you're also linking to them?
-
Hey Manoj,
That's indeed very weird. There could be multiple reasons for this. For instance, did you try to fetch the cached versions of both sites to check when they got indexed? Online publication sites usually have fast indexing rates, and it might be that your client shared the articles on social media before they got indexed, and the other site lifted them in that window.
Do check out this brilliant Moz post; I'm sure it will give you an idea of what caused this.
Hope this helps!
-
Do you use 'Fetch as Google' in Webmaster Tools for every post?
If your competitors monitor the site, harvest the content, and then publish and use 'Fetch as Google' themselves, that could explain why Google ranks them first, i.e. Google would likely have indexed their content first.
That said, there are so many unknown factors at play, e.g. how does social stack up? Are they using Google+, etc.?
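One way to shrink the window a scraper has is to notify Google the moment a post goes live, rather than relying on manual 'Fetch as Google' submissions. A minimal sketch of an automated publish hook, assuming Google's sitemap ping endpoint (the hypothetical sitemap URL would be replaced with the client's real one):

```python
from urllib.parse import urlencode

GOOGLE_PING_BASE = "https://www.google.com/ping?"

def sitemap_ping_url(sitemap_url):
    """Build the Google sitemap-ping URL for a freshly updated sitemap.

    The sitemap URL is percent-encoded so it survives as a single
    query-string parameter.
    """
    return GOOGLE_PING_BASE + urlencode({"sitemap": sitemap_url})

# Hypothetical sitemap for illustration:
url = sitemap_ping_url("https://www.example.co.uk/sitemap.xml")
print(url)
# A publish hook would then fire an HTTP GET at this URL, e.g.:
#   urllib.request.urlopen(url)   # network call, omitted here
```

This doesn't guarantee first-indexed status, but it removes the delay during which a monitoring scraper could get its copy crawled first.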