Duplicate Content - That Old Chestnut!!!
-
Hi Guys,
Hope all is well. I have a question, if I may. We have written several articles, and I want to find out the best way to publish them, but I have a number of concerns. I am hoping to use the content to increase the kudos of our site by providing quality content and, hopefully, earning decent backlinks.
1. In terms of duplicate content, should I post each article in only one place, or should I post it in several places? Also, where would you say the top 5 or 10 places would be? These are articles on XML, social media, and backlinks.
2. Can I post an article on another blog or article directory and also post it on my website's blog, or is this a bad idea?
A million thanks for any guidance.
Kind Regards,
C
-
Hi Craig,
Search this in Google: "keyword" + "guest bloggers wanted" OR "guest blogger wanted"
Then analyse the authority of each site and choose the best 10 to submit your articles to.
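If you have a lot of keywords to cover, here is a rough Python sketch that just builds those query strings for you; the keyword list is a placeholder, so swap in your own niche terms:

```python
from urllib.parse import quote_plus

# Placeholder keywords -- substitute your own niche terms.
keywords = ["xml sitemaps", "social media", "link building"]
footprints = ['"guest bloggers wanted"', '"guest blogger wanted"']

for keyword in keywords:
    # OR-join the footprints so one query covers both variations.
    query = f'"{keyword}" {" OR ".join(footprints)}'
    print(query)
    # A ready-to-open Google search URL for the same query.
    print("https://www.google.com/search?q=" + quote_plus(query))
```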
You could also join the guest posting community at myblogguest.com
Good luck!
Greg
-
Thanks Greg,
If you could, where should I look for the best guest post sites? I have heard that some can be good and some can be bad. Now, whilst our views and writing are fairly good, we are no Technorati... Your thoughts on this would be greatly appreciated.
Thanks,
C
-
Do both.
Google wants to see fresh content on your site, but you also need contextual backlinks from other sites.
Do 10 guest posts, and publish 10 on your website.
Greg
-
Hi Alan,
Thanks for that, makes a lot of sense. So, all that said and done, would I be better off putting 15-20 articles per month only on our own website's blog, or should I also post some different articles as guest posts?
Nae Easy.
Thanks,
Craig
-
Google seems to have a number of ways to treat duplicate content of the type you are suggesting.
You are almost suggesting syndicated content. This is closely related to a press release, which is issued through one press release distributor and then published on multiple other sites.
A little investigation shows that Google handles such duplicate content pages differently, depending on which way the wind is blowing today.
If the site that posts the secondary copies is a powerful site, or a site that Google likes, then there seems to be no problem with duplicate content. To check this out, find an Associated Press story, then search for it on Google, and you will find it on dozens, or even hundreds, of newspaper and radio station websites. Most of them will have exactly the same headline and exactly the same content.
If a site that posts the secondary copies is one Google doesn't like (for example, a site hit by Panda, Penguin, or a manual penalty), the content will not be found in search results unless you go to the end of the results and redo the search with the omitted duplicates shown.
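If you want to try that Associated Press experiment yourself, here is a quick Python sketch; the snippet text is a placeholder, and filter=0 is the parameter Google has historically used to include results it would otherwise omit as near-duplicates:

```python
from urllib.parse import quote_plus

# An exact sentence lifted from the story; quoting it makes Google
# match the duplicated text verbatim. Placeholder text here.
snippet = "an exact sentence copied from the original story"
quoted = '"' + snippet + '"'

# filter=0 asks Google to include results it would normally omit as
# near-duplicates; num=100 widens the result window per page.
url = "https://www.google.com/search?q=" + quote_plus(quoted) + "&num=100&filter=0"
print(url)
```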
If the original site is weak, or is affected by Panda or Penguin, then the secondary copies may display in search results, but your original may not. It is also possible that your index pages containing a snippet of the original will display in the results, but not the complete original, if other, more powerful sites are displaying it.
So, if my observations are valid, it means that you should keep the original to yourself and not syndicate it. If you want to get value from other sites, then write a different story for them (a guest post, as DiscoverAfrica suggests) and get a link back to your site within the body of the story.
-
Hi Craig,
I would suggest looking for guest posting opportunities with webmasters in your niche (or a similar niche) rather than publishing on article or blog directories.
1.) Only ever publish an article in one place; don't try to mass-submit the same article to many websites.
2.) You can publish the article on your website and then syndicate it on other websites, but this isn't the best idea either. If you decide to guest post, webmasters usually check whether the content is original; if it isn't, they won't accept it. Even if they do accept it, the link pointing to the original article on your website is merely a "reference" rather than an endorsement from one site to the next.
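One related safeguard, if you do end up syndicating: ask the other site to add a cross-domain rel="canonical" pointing at your original. Here is a minimal Python sketch (standard library only; both URLs are placeholders) for checking whether a syndicated copy declares one:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    """Collects the href of any <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

# Placeholder URLs -- substitute the syndicated copy and your original.
syndicated_url = "https://partner-site.example.com/your-article"
original_url = "https://example.com/blog/your-article"

html = urlopen(syndicated_url).read().decode("utf-8", errors="replace")
finder = CanonicalFinder()
finder.feed(html)

if finder.canonical == original_url:
    print("Syndicated copy points a canonical back to the original.")
else:
    print("No matching canonical found:", finder.canonical)
```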
Hope this makes sense?
Related Questions
-
Does duplicate content not concern Rand?
Hello all, I'm a new SEOer and I'm currently trying to navigate the layman's minefield that is understanding duplicate content issues as best I can. I'm working on a website at the moment where there's a duplicate content issue with blog archives/categories/tags etc. I was planning to beat this by implementing a noindex meta tag on those pages where there are duplicate content issues. Before I go ahead with this I thought: "Hey, these Moz guys seem to know what they're doing! What would Rand do?" Blogs on the website in question appear in full and in date order relating to the tag/category/what-have-you creating the duplicate content problem. Much like Rand's blog here at Moz - I thought I'd have a look at the source code to see how it was dealt with. My amateur eyes could find nothing to help answer this question. E.g. both the following URLs appear in SERPs (using site:moz.com and very targeted keywords, but they're there): https://mza.bundledseo.com/rand/does-making-a-website-mobile-friendly-have-a-universally-positive-impact-on-mobile-traffic/ https://mza.bundledseo.com/rand/category/moz/ Both pages have a rel="canonical" pointing to themselves. I can understand why he wouldn't be fussed about the category not ranking, but the blog? Is this not having a negative effect? I'm just a little confused, as there are so many conflicting "best practice" tips out there - and now, after digging around in the source code on Rand's blog, I'm more confused than ever! Any help much appreciated. Thanks
Technical SEO | sbridle
-
Looking at creating some auto-generated pages - duplicate content?
Hi Everyone! We just launched a new version of our research site, and the main CTA on the page sends users to a subdomain that's blocked by robots.txt. The subdomain link is one of our PPC landing pages, and they would be duplicate content for every model (cars). We're also looking at a new content stream of deals pages on the main domain. The thought process was that we could rank these pages for things like "Volkswagen golf deals" and also use them as canonical URLs from the PPC pages, so that Panda doesn't get mad at us for sending hundreds of links to a subdomain that's blocked. It's going to take us a lot of time to write the copy for the deals pages, so if we auto-generate it by pulling a paragraph of copy from the car review, plus numerical stats about that model, will it be classed as duplicate, and/or is there any downside to doing it? Review Page: http://www.carwow.co.uk/car-reviews/Ford/Fiesta Deals Page: http://www.carwow.co.uk/deals/Ford/Fiesta PPC Landing Page: http://quotes.carwow.co.uk/buy/Ford/Fiesta I can't help but feel that this may all be a bit overkill, and perhaps it makes more sense to build 1 central deals page per model with unique content that we can also send the PPC traffic to, then lift any block from the quotes. subdomain. But that will take time, and we'd also like a quick solution. I'd also question whether it's even an issue to link to a blocked subdomain; Google adds the quote URL into the index but can't crawl it, which I've been told is bad - but is it bad enough to do something about? Thanks, JP
Technical SEO | Matt.Carwow
-
Using canonical for duplicate contents outside of my domain
I have 2 domains for the same company, example.com and example.sg. Sometimes we have to post the same content or event on both websites, so to protect my websites from a duplicate content penalty, I use a canonical tag to point to either .com or .sg, depending on the page. Any idea if this is the right decision? Thanks
Technical SEO | MohammadSabbagh
-
Looking for a technical solution for duplicate content
Hello, Are there any technical solutions to duplicate content similar to the nofollow tag? A tag which can indicate to Google that we know this is duplicate content, but we want it there because it makes sense to the user. Thank you.
Technical SEO | FusionMediaLimited
-
Duplicate Content: Canonicalization vs. Redirects
Hi all, I have a client that I recently started working with whose site was built with the following structure: domain.com and domain.com/default.asp. Essentially, there is a /default.asp version of every single page on the site. That said, I'm trying to figure out the easiest/most efficient way to fix all the /default.asp pages... whether that be 301 redirecting them to the .com version, adding a canonical tag to every .asp page, or simply NOINDEXing the .asp pages. I've seen a few other questions on here that are similar, but none that really say which would be the easiest way to accomplish this without going through every single page... Thanks in advance!
Technical SEO | marisolmarketing
-
Errors - 7300 - Duplicate Page Content..Help me..
Hi, I just received the crawl report with 7,300 duplicate page content errors. The site is built using PHP, and the list of errors looks like this: http://xxxxx.com/channels/ http://xxxxx.com/channels/?page=1 http://xxxxxx.com/channels/?page=2 I am not good at coding and am using a ready-made script for this website. Could anyone guide me on how to fix this issue? Thanks.
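A common fix for this ?page=1 pattern, for what it's worth, is to canonicalise page 1 to the bare URL while deeper pages keep their own. A minimal Python sketch of that rule, as an illustration of one convention rather than the only fix:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def canonical_for(url):
    """?page=1 canonicalises to the bare URL; deeper pages are unchanged."""
    parts = urlparse(url)
    params = parse_qs(parts.query)
    if params.get("page") == ["1"]:
        del params["page"]  # page 1 duplicates the unparameterised URL
    return urlunparse(parts._replace(query=urlencode(params, doseq=True)))

print(canonical_for("http://xxxxx.com/channels/?page=1"))  # -> .../channels/
print(canonical_for("http://xxxxx.com/channels/?page=2"))  # unchanged
```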
Technical SEO | vilambara
-
How to prevent duplicate content in archives?
My news site has a number of excerpts in the form of archives based on categories, which is causing duplicate content problems. Here's an example with the nutrition archive. The articles here are already posts, so this creates the duplicate content. Should I nofollow/noindex this category page along with the rest and the 2011/2012 archives etc. (see archives here)? Thanks so much for any input!
Technical SEO | naturalsociety
-
Duplicate content and http and https
Within my Moz crawl report, I have a ton of duplicate content caused by identical pages, due to identical http and https URLs. For example: http://www.bigcompany.com/accomodations https://www.bigcompany.com/accomodations The strange thing is that 99% of these URLs are not sensitive in nature and do not require any security features: no credit card information, booking, or carts. The web developer cannot explain where these extra URLs came from or provide any further information. Advice or suggestions are welcome! How do I solve this issue? THANKS MOZZERS
Technical SEO | hawkvt1
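The usual fix for this http/https situation is a site-wide 301 from http to https, plus canonicals on the https versions. As a quick illustration, a Python sketch that flags http/https duplicate pairs in a crawl export; the URL list is a placeholder for a real export:

```python
# Flag URLs that are crawled under both http and https so they can be
# 301-redirected (or canonicalised) to a single version.
crawled_urls = [
    "http://www.bigcompany.com/accomodations",
    "https://www.bigcompany.com/accomodations",
    "https://www.bigcompany.com/contact",
]

schemes_seen = {}
for url in crawled_urls:
    scheme, _, rest = url.partition("://")
    schemes_seen.setdefault(rest, set()).add(scheme)

for rest, schemes in schemes_seen.items():
    if {"http", "https"} <= schemes:
        print("Duplicate pair found:", rest)
```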