Scraped content ranking above the original source content in Google.
-
I need insights into how "scraped" content (an exact copy-pasted version) can rank above the original content in Google.
Four original, in-depth articles published by my client (an online publisher) have been republished by another company (which happens to be briefly mentioned in all four of those articles). We reckon the articles were republished at least a day or two after the originals went live (the exact gap is not known). All four of the "copied" articles rank at the top of Google's search results, whereas the original content on my client's website does not show up even in the top 50 or 60 results.
We have looked at numerous factors such as domain authority, page authority, inbound links to both the original source and the URLs of the copied pages, social metrics, etc. All of these metrics, as shown by tools like Moz, are better for the source website than for the re-publisher. We have also compared results across different geographies to check whether any geographical bias was affecting the results, since our client's website is hosted in the UK and the re-publisher is based in another country, but we found the same results. We are also not aware of any manual actions taken against our client's website (at least based on the messages in Search Console).
Are there any other factors that could explain this serious anomaly, which seems to be a disincentive for anybody creating highly relevant original content?
We recognize that our client has the option to submit a 'Scraper Content' report to Google, but we are less keen to go down that route and more keen to understand why this problem could arise in the first place.
Any suggestions would be much appreciated.
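A rough way to spot-check the geography comparison described above is to run an exact-phrase search for a sentence from each article while forcing Google's country (`gl`) and language (`hl`) parameters, with personalisation disabled. A minimal sketch in Python; the phrases and country codes are placeholders, and results can still vary by data centre:

```python
from urllib.parse import quote_plus

# Hypothetical exact sentences lifted from the original articles.
phrases = [
    "an exact sentence copied from article one",
    "an exact sentence copied from article two",
]

# Country / language pairs to compare (gl = results geography, hl = UI language).
geos = [("uk", "en"), ("us", "en"), ("in", "en")]

for phrase in phrases:
    for gl, hl in geos:
        # Quoting the phrase forces an exact-match search, which surfaces
        # every indexed copy of that sentence.
        url = (
            "https://www.google.com/search?q="
            + quote_plus(f'"{phrase}"')
            + f"&gl={gl}&hl={hl}&pws=0"  # pws=0 disables personalised results
        )
        print(f"[gl={gl}] {url}")
```

Opening these URLs by hand (or feeding the queries to a rank-tracking API) shows which copy Google returns in each market; scraping the result pages directly tends to get blocked, so the sketch only builds the queries.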
-
**Everett Sizemore - Director, R&D and Special Projects at Inflow:** Use the Google Scraper Report form.
Thanks. I didn't know about this.
**If that doesn't work, submit a DMCA complaint to Google.**
This does work. We submit dozens of DMCAs to Google every month. We also send notices to sites that have used our content but might not understand copyright infringement.
**Everett Sizemore - Director, R&D and Special Projects at Inflow (endorsing this answer):** Until Manoj gives us the URLs so we can look into it ourselves, I'd have to say this is the best answer: Google sucks sometimes. Use the Google Scraper Report form. If that doesn't work, submit a DMCA complaint to Google.
-
Oh, that is a very good point. This is very bad for people who have clients.
-
Thanks, EGOL.
The other big challenge is to get clients to also buy into the idea that it is Google's problem!
-
**In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.**
Yes, this happens. It states the problem and Google's inabilities more strongly than I have stated it above.
**Any ideas or potential solutions that you could help with will be much appreciated.**
I have this identical problem myself. Actually, it's Google's problem. They have crap on their shoes but say that they can't smell it.
-
Hi,
Thanks for the response. I'd understand if the original source were indeed new, or not a powerful or established site in the niche that it serves.
In this specific instance, the original source outscores the site where content is duplicated on almost all the common metrics that are deemed to be indicative of a site's relative authority/standing.
Any ideas or potential solutions that you could help with will be much appreciated.
Thanks
-
Scraped content frequently outranks the original source, especially when the original source is a new site or a site that is not powerful.
Google says that they are good at attributing content to the original publisher. They are delusional. Lots of SEOs believe Google. I'll not comment on that.
If scraped content were not making money for people, this practice would have died a long time ago. I submit that as evidence. Scrapers know what Google does not (or refuses to admit) and what many SEOs refuse to believe.
-
No, John, we don't use 'Fetch as Googlebot' for every post. I am intrigued by the possibility you suggest.
Yes, there are lots of unknowns, and certain results seem inexplicable, as we feel this particular instance is. We have looked at and evaluated most of the obvious factors, including the likelihood of the re-publisher having gained more social traction. However, the actual results are the opposite of what we'd expect.
I'm hoping that you, or some of the others in this forum, could shed some light on any other factors that could be influencing the results.
Thanks.
-
Thanks for the link, Umar.
Yes, we did fetch the cached versions of both pages, but that doesn't indicate when the respective pages were first indexed; it only shows when they were last cached.
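Since the cache date only reflects the last crawl, one way to estimate when the original articles were first picked up is to search the client's own server access logs for the earliest Googlebot request to each article URL. A minimal sketch, assuming a combined-format access log and hypothetical paths (a thorough check would also verify via reverse DNS that the hits really came from Googlebot):

```python
import re
from datetime import datetime

LOG_FILE = "access.log"  # hypothetical path to the web server log
ARTICLE_PATHS = {"/articles/original-piece-1", "/articles/original-piece-2"}

# Combined log format, e.g.:
# 66.249.66.1 - - [10/Oct/2014:13:55:36 +0000] "GET /articles/original-piece-1 HTTP/1.1" 200 ...
line_re = re.compile(r'\[(?P<ts>[^\]]+)\] "GET (?P<path>\S+)')

first_seen = {}
with open(LOG_FILE) as fh:
    for line in fh:
        if "Googlebot" not in line:  # crude user-agent filter
            continue
        m = line_re.search(line)
        if not m or m.group("path") not in ARTICLE_PATHS:
            continue
        ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
        path = m.group("path")
        if path not in first_seen or ts < first_seen[path]:
            first_seen[path] = ts

for path, ts in sorted(first_seen.items()):
    print(f"{path} first crawled by Googlebot at {ts.isoformat()}")
```

Comparing that first-crawl time with the re-publisher's publication time, where it can be established, at least answers the "who was crawled first" question from the client's side.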
-
No, Martijn, the articles include excerpts from representatives of the re-publisher; there are no links to the re-publisher's website.
-
When you say you're mentioning the re-publisher briefly in the posts themselves, does that mean you're also linking to them?
-
Hey Manoj,
That's indeed very weird. There can be multiple reasons for this. For instance, did you try to fetch the cached versions of both sites to check when they got indexed? Online publication sites usually have a fast indexing rate, and it's possible that your client shared the articles on social media before they got indexed and the other site lifted them from there.
Do check out this brilliant Moz post; I'm sure it will give you an idea of what caused this.
Hope this helps!
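For anyone trying the cache check Umar suggests, here is a minimal sketch that pulls the "as it appeared on …" banner from Google's cache page for each URL. It assumes the webcache endpoint and banner wording haven't changed and that the request isn't blocked; as noted above, the date shown is only the last cache, not when the page was first indexed:

```python
import re
import urllib.request

URLS = [
    "https://example-publisher.co.uk/original-article",  # hypothetical URLs
    "https://example-republisher.com/copied-article",
]

for url in URLS:
    cache_url = "https://webcache.googleusercontent.com/search?q=cache:" + url
    req = urllib.request.Request(cache_url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "ignore")
    except Exception as exc:  # the cache page may 404 or refuse the request
        print(f"{url}: could not fetch cache ({exc})")
        continue
    # The banner usually reads "... as it appeared on 12 Mar 2014 10:15:01 GMT."
    m = re.search(r"as it appeared on ([^.<]+)", html)
    print(f"{url}: {m.group(1).strip() if m else 'no cache banner found'}")
```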
-
Do you use Fetch as Google in Webmaster Tools for every post?
If your competitors monitor the site, harvest the content, and then publish it and use Fetch as Google themselves, that could explain why Google ranks them first, i.e. Google would likely have indexed their content first.
That said, there are so many unknown factors at play, e.g. how the social metrics stack up, whether they are using Google+, etc.
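One low-effort way to tilt the "who gets crawled first" race John describes in the original publisher's favour is to notify Google the moment a new article goes live, for example by pinging the sitemap from the publishing hook (in addition to using Fetch as Google manually). A minimal sketch, assuming a standard sitemap at a hypothetical URL and that Google still accepts the /ping endpoint; the ping only asks Google to re-read the sitemap, it doesn't guarantee immediate indexing:

```python
from urllib.parse import quote
from urllib.request import urlopen

SITEMAP_URL = "https://example-publisher.co.uk/sitemap.xml"  # hypothetical

def ping_google(sitemap_url: str) -> int:
    """Tell Google the sitemap (and the newly added article URL) has changed."""
    ping = "https://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")
    with urlopen(ping, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    # Call this from the CMS publish hook so the original URL is submitted
    # for crawling before a scraper can harvest and republish the content.
    print("Google sitemap ping returned HTTP", ping_google(SITEMAP_URL))
```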