Is there a way to make Google realize/detect scraper content?
-
Good morning,
Theory states that duplicated content lowers a site's positions in Google for certain keywords, and that a site that copies content will be penalized. Furthermore, we have spam report tools and the scraper report to flag these bad practices.
In my case, the website both sells content to other sites and writes and prepares its own content, which is not for sale. However, other sites copy these original articles and publish them, and Google does not penalize their positions in the results (neither in organic results nor in Google News), even though they have been reported using the Google tools intended for that purpose.
Could someone explain this to me? Is there a way to make Google recognize and act on these bad practices?
Thanks
-
I've found backlinks on scraper websites linking back to the scraped site I look after.
They are in the CSS, images, and forms.
What's the point of doing that on their side?
-
Stolen content is a big issue today, and recent reports have shown that people who steal your content can often knock you out of your search engine positions, no matter what your authority, backlink, or social share profiles look like.
This great presentation given by Jon Earnshaw at Brighton SEO last week gives a better idea of how it has affected other websites: http://www.slideshare.net/jonathanearnshaw/is-your-content-working-better-for-someone-else
Google used to have a Scraper Report through which you could flag the offending site and have it removed from the SERPs, but that tool has been retired.
I found a similar way to report stolen content in this blog post:
http://www.techng.info/removing-your-stolen-content-from-google-search-using-dmca/
Hope this answers your question, even if it comes a while after the original post.
-
Hello,
The reporting tools are not particularly useful in this scenario, as duplicate content on its own is not a penalty-worthy situation. While Panda goes after spammy, low-quality content, duplicate content is treated as more of a null/void situation than as a penalty.
For example, when you publish newly created original content and it is crawled and indexed, Google attributes your domain as the origin of that content. If another website showcases this content, Google recognizes it as a duplicate (having compared it to your indexed version) and gives it neither benefit nor penalty. In effect, hosting duplicate content is a neutral practice; it's spam that Google is really after.
Here's a beginner's report on duplicate content that spells it out quite nicely:
https://mza.bundledseo.com/learn/seo/duplicate-content
As Charles mentioned, copied content is not an automatic ban sentence. If it stays within "acceptable limits" there is no detrimental impact on the website. However, if a website is made up purely of content copied from multiple sources, and it spams links or stuffs keywords, it will be dealt with accordingly.
In short, this website will not be penalized in the fashion you desire unless they are spamming or keyword stuffing (among other penalty-worthy offences). Your best bet is to beat them out by building up your link profile and continuing to post valuable, original content.
Let me know if there is anything else I can help with.
Rob
-
"Theory states that duplicated content lowers a site's positions in Google for certain keywords."
Wrong. Google may omit duplicate results, or ban sites that scrape aggressively, but it doesn't lower rankings based on the number of duplicates out there. Otherwise Wikipedia, or any aggregating website such as car dealer listings, would be nowhere to be found.
"It also says that a site that copies content will be penalized."
Semi-wrong. It will be penalized only if it is spammy and overdoing it.
Watch this video of Matt Cutts on duplicate content - https://www.youtube.com/watch?v=mQZY7EmjbMA
So, my understanding is that there is no 100% reliable way of taking scrapers down, because some of them are actually "good" scrapers. Like Facebook: the biggest scraper in the world.
So, to beat them in the rankings, make sure that you are an authority in your industry, that you have an excellent backlink profile, and that all aspects of your SEO are properly implemented. And yes, sometimes those reporting tools can help.
Related Questions
-
Google Not Indexing App Content
Hello Mozzers, I recently noticed an increase in crawl errors reported in Google Search Console, and Google has stopped indexing our app content. Could this be due to a mismatch between the host path mentioned within the Android deeplink (in the alternate tag) and the actual URL of the page? For instance, on the desktop page http://www.example.com.au/page-1, the Android deeplink points to android-app://com.example/http/www.example.com.au/4652374. Please note that the content on both pages (desktop and Android) is the same. Is this a correct setup, or am I doing something wrong here? Any help would be much appreciated. Thank you so much in advance.
Intermediate & Advanced SEO | InMarketingWeTrust
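For reference, the documented format for these deeplinks is android-app://{package_name}/{scheme}/{host_path}, where the path mirrors the web URL it annotates. A sketch of what a matching tag on page-1 might look like (the package name and URLs are the question's placeholders, not a confirmed fix):

```html
<!-- Head of http://www.example.com.au/page-1 -->
<!-- The deeplink path should mirror the desktop URL it is attached to;
     pointing it at a different resource (e.g. /4652374) creates the
     kind of mismatch the question suspects. -->
<link rel="alternate"
      href="android-app://com.example/http/www.example.com.au/page-1" />
```
-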
Does anyone know how dynamic/personalized website content affects SEO?
A client using Marketo has inquired about personalizing their website content based on a persona. To be clear, I'm talking about key website pages, maybe even the home page, not PPC/campaign-specific landing pages. For example, areas of the site would display content differently to a CEO vs. a salesperson. I'm new to marketing automation and don't know exactly how this piece works. Hoping someone here has experience or can provide pros/cons guidance. How would search engines work with this type of page? Here's Marketo's site explaining what it does: https://docs.marketo.com/display/public/DOCS/Web+Personalization+-+RTP
Intermediate & Advanced SEO | Flock.Media
-
Best way to move content to a different domain without inviting any SERP penalty?
Hi all, we are in a bit of a fix right now. We have around 60-70 articles (WordPress pages/posts) that we intend to move to another domain of ours. What's the best way to do so such that we do not invite any Google penalty? Here's some detailed information about our case: our site example.com has more than 2,000 articles. To better position the content for one of the sections on example.com, we have started another website, example2.com, and want to move those 60-70 articles from example.com to example2.com. What is the best way to do it such that we are not penalised by Google? Is it to (a) move all the said content (60-70 articles) from example.com to example2.com and (b) do a permanent redirect (301) from each of the older article URLs to the newer article URLs? What are the other options?
Intermediate & Advanced SEO | stj
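As an illustration of steps (a) and (b), per-article 301s can be declared on the old domain in an Apache .htaccess file; a minimal sketch, assuming Apache with mod_alias, with hypothetical placeholder paths:

```apache
# Hypothetical .htaccess on example.com: one permanent redirect per
# moved article, each old URL mapped to its exact new counterpart
# (not to example2.com's home page)
Redirect 301 /articles/widget-guide/ https://example2.com/articles/widget-guide/
Redirect 301 /articles/widget-faq/ https://example2.com/articles/widget-faq/
# ...repeat for each of the 60-70 moved URLs
```
-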
Best way to remove full demo (staging server) website from Google index
I've recently taken over an in-house role at a property auction company. They have a main site on the primary domain and 400+ agency subdomains: company.com, agency1.company.com, agency2.company.com... I recently found that the web development team keep a demo domain per site, on a subdomain of the original domain, mirroring the live site. The problem is that these have all been found and indexed by Google: demo.company.com, demo.agency1.company.com, demo.agency2.company.com... Obviously this is a problem as it is duplicate content and so on, so my question is: what is the best way to remove the demo domains/subdomains from Google's index? We are taking action to add a noindex tag into the header of all pages on the individual domains, but this isn't going to get them removed any time soon! Or is it? I was also going to add a robots.txt file into the root of each demo domain, just as a precaution, disallowing all. The final course of action (which I'm holding off on in the hope someone comes up with a better solution) is to add each demo domain/subdomain into Google Webmaster Tools and remove the URLs individually. Or would it be better to go down the canonical route?
Intermediate & Advanced SEO | iam-sold
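A sketch of the noindex route described above, assuming the demo vhosts share an Apache config with mod_headers enabled; an X-Robots-Tag response header applies the noindex to every page at once without editing templates:

```apache
# Hypothetical Apache config for the demo.* vhosts
<IfModule mod_headers.c>
    Header set X-Robots-Tag "noindex, nofollow"
</IfModule>
# Caveat: a blanket robots.txt Disallow would stop Googlebot from
# recrawling these pages, so it would never see the noindex signal.
# Leave crawling open until the pages have dropped out of the index.
```
-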
Best way for Google and Bing not to crawl my /en default English pages
Hi guys, I just transferred my old site to a new one and now have subfolder language directories. My default pages, on the front end and in the sitemap, don't show /en after www.mysite.com. The only translation I have is Spanish, where Google will crawl www.mysite.com/es. 1. In the SERPs on Google and Bing, every URL that is crawled shows the extra /en in the path. I find that very weird considering there is no physical /en in my URLs. When I select the link, it automatically redirects to its default, natural page (no /en). No canonical tags show /en either; ONLY the SERPs do. Should robots.txt be updated to disallow /en? 2. During the site transfer we altered some of the category URLs in our domain, so we've had a lot of 301 redirects. But when searching specific keywords in the SERPs, the #1 ranked URL shows up as our old URL, which redirects to a 404 page, and our newly created URL shows up as #2 and goes to the correct page. Is there any way to tell Google to stop showing our old URLs in the SERPs? And would the Fetch as Google option in GWT be a good way to submit all of my URLs so Google's bots crawl only the right pages? Direct message me if you want real examples. Thank you so much!
Intermediate & Advanced SEO | Shawn124
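The question doesn't say whether hreflang annotations are in place, but they are the standard way to map language variants to the exact URLs you want shown; a hypothetical sketch using the question's placeholder domain:

```html
<!-- Hypothetical head markup for the default (root) English pages -->
<link rel="alternate" hreflang="en" href="http://www.mysite.com/" />
<link rel="alternate" hreflang="es" href="http://www.mysite.com/es/" />
<link rel="alternate" hreflang="x-default" href="http://www.mysite.com/" />
<!-- A canonical pointing at the /en-free URL also signals which form
     of the URL should appear in the SERPs -->
<link rel="canonical" href="http://www.mysite.com/" />
```
-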
Question about copying content
Hi there, I have had a question from a retailer asking if they can take all our content (blog articles, product pages, etc.). What is best practice here for getting SEO value out of this? Here are a few ideas I was thinking of: (1) they put canonical tags on all pages where they have copied our content; (2) they copy the content but leave all anchor text in place. Please let me know your thoughts. Kind regards
Intermediate & Advanced SEO | Paul78
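A sketch of idea (1), the cross-domain canonical: on each copied page, the retailer would point the canonical back at the original so Google attributes the content to its source (the URL is a hypothetical placeholder):

```html
<!-- On the retailer's copy of the article -->
<link rel="canonical" href="https://www.original-site.com/blog/original-article/" />
```
-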
Do you bother cleaning duplicate content from Google's index?
Hi, I'm in the process of instructing developers to stop producing duplicate content. However, a lot of duplicate content is already in Google's index, and I'm wondering if I should bother getting it removed... I'd appreciate it if you could let me know what you'd do. For example, one 'type' of page is being crawled thousands of times, but it only has 7 instances in the index, and they don't rank for anything. For this example I'm thinking of just stopping Google from accessing that page 'type'. Do you think this is right? Do you normally meta noindex,follow the page, wait for the pages to be removed from Google's index, and then stop the duplicate content from being crawled? Or do you just stop the pages from being crawled and let Google sort out its own index in its own time? Thanks, FashionLux
Intermediate & Advanced SEO | FashionLux
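For reference, the tag the question refers to looks like the snippet below. The ordering matters: the page has to stay crawlable until Google has re-fetched it and seen the noindex, which is why blocking crawling first would leave the stale copies in the index:

```html
<!-- Keep the page out of the index while still letting crawlers
     follow the links it contains -->
<meta name="robots" content="noindex, follow">
```
-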
How to make Google forget my pages?
Hello all! I've decided to delete many pages from my website which had poor content. I've made a PHP 301 redirect from all these old pages to a single page (not the home page; a deep page). My problem is that this modification was made a week ago and my positions in the SERPs have crashed... What can I do? I believe that I'll climb back up once Google sees that these pages no longer exist, but it could take a long time 😞 (these pages are in the Google cache with a date older than my modification date). I've read somewhere that I should put a link to the destination page (where the old pages are 301 redirected), but I don't understand how that would help... Can someone help me? Tell me what I've done wrong. These pages were very poor and I deleted them in order to boost the overall quality of my site. It should help me in the SERPs, not penalize me.
Intermediate & Advanced SEO | B-CITY
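For reference, a minimal sketch of the kind of PHP 301 the question describes (the destination URL is a hypothetical placeholder):

```php
<?php
// Permanent redirect from a deleted thin-content page to the chosen
// destination page; must be sent before any other output
header("Location: https://www.example.com/destination-page/", true, 301);
exit;
```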