Duplicate content detected by the Moz crawl even with canonicals applied
-
Hi there!
I have a slight problem.
I have a site running Joomla 3.3 that we recently migrated from 2.5. Joomla, for some reason I don't really get, creates hundreds of weird URLs for the site, like:
mydomain.com/en -> Joomla creates en/home/149-xxx-xxx/xxxxxx-xxxxxx, which links to the first one.
Version 3.3 knows about this bug and applies a rel=canonical to the URLs created "artificially", so they should not be identified as duplicates. Sample piece of code: <link href="en/home/149-all-en/xxxxxxx-xxxxxx" rel="canonical" />
The Moz crawler still identifies these as duplicates, so I have thousands of duplicated pages (titles, content, etc.), all the ones created by Joomla. My site still has good SEO results and I can't see any penalties, but I am a bit concerned they may come in the future...
Can anyone explain to me what is happening?
Thank you in advance for your time,
-
If it's only a period of two weeks and you're going to do it anyway, I would just create the new content and not go to the expense of setting up redirects and then taking them down, which can cause issues when you plan to recreate a URL.
-
Thank you for your time!
We are going to set up 301 redirects (a colleague suggested importing them directly into the redirects table in the database) from those duplicated pages until Joomla has a native solution and we have the time to make all the content unique, to avoid penalties.
At least that would solve the problem temporarily; it will take about two weeks to produce all the unique content.
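As I understand it, each of those redirects would be equivalent to a rule like this in the site's .htaccess (just a sketch using one of our duplicate URLs as an example; it assumes Apache with mod_alias enabled):

```apache
# Illustrative only: permanently redirect one auto-generated
# Joomla URL back to the page it duplicates.
Redirect 301 /en/home/149-all-en/placement-spain /en
```

The DB import would just be the bulk version of the same mapping.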
Would that make sense?
Have a nice weekend!
-
I personally would not generate new language sections unless the content has been translated and localized on those pages. Right now your Spanish homepage has English content in the body, so I would view this as incomplete. Ideally you'd translate the entire page for those sections.
When you do that, you'll want to use hreflang, not canonicals, to indicate different versions of the same content.
So, my recommendation is (A) get rid of the Spanish content sections which would solve the duplication problem, or (B) finish translating the content and then install hreflang code, which would also solve the duplication problem.
Unfortunately I don't know of a good hreflang tool for Joomla specifically.
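For reference, once the translations are done, the hreflang annotations in the <head> of each language version might look roughly like this (a sketch, assuming /en, /es and /fi versions; adjust to your actual languages):

```html
<!-- Sketch: each language version lists every alternate, including itself. -->
<link rel="alternate" hreflang="en" href="http://www.spain-internship.com/en" />
<link rel="alternate" hreflang="es" href="http://www.spain-internship.com/es" />
<link rel="alternate" hreflang="fi" href="http://www.spain-internship.com/fi" />
<link rel="alternate" hreflang="x-default" href="http://www.spain-internship.com/" />
```

Note that the annotations need to be reciprocal: every page referenced must carry the same set of tags pointing back, or Google will ignore them.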
Let me know if that makes sense.
-
Thank you Kane.
I would like to keep the content in all the languages, as I think it makes it easy for customers to reach certain areas.
The problem I keep having is the implementation... There are no really good canonical plugins (ones that would allow me to do a bulk import), and I am not advanced enough to do a 301 redirect in .htaccess... Still, I would like someone on the NL or FI version who wants to find the Barcelona area to be able to see it...
Any ideas? Just to say, I tried SH404: it does all the work, but it rewrites the whole URL structure (not possible). I tried Canonical (http://www.cmsplugin.com/products/components/4-canonical-url), which solves the duplication across languages but not the random URLs created by 3.3...
Then I decided to stick with the plugin I mentioned before; it deletes all the automatically generated duplicate URLs but does not solve the language problem... So, here I am.
Any suggestion?
-
Also, if you decide to keep the /es/ section of the website then you'll need to look into hreflang instead of canonical tags, because /es/ and /en/ will not be duplicate content once they're translated.
Read this Q&A from Google for details - https://sites.google.com/site/webmasterhelpforum/en/faq-internationalisation#q20
-
Hey Jose,
If you have an /es/ subfolder then ideally you would be translating that content to Spanish, not canonicalizing that content back to the English version.
I can see from http://www.spain-internship.com/es/internships-in-salamanca that not all /es/ pages are translated - is this true across the entire website?
If you don't have any Spanish content, then you should just kill off the /es/ version entirely.
-
Hi there,
Thanks for the update. Now that you've pointed out the problem, I found out this is a known bug in Joomla and I am working on it.
I found a plugin (http://styleware.eu/store/item/26-styleware-content-canonical-plugin) that gives all the automatically generated duplicate URLs a canonical pointing to the home page. Sample:
http://www.spain-internship.com/en/home/149-all-en/placement-spain
now carries <link href="http://www.spain-internship.com" rel="canonical" />. This solves the problem of the core canonical bug.
Would this be a proper solution? Now I only have to change all the URLs duplicated because of the language configuration, blocking them in robots.txt or with canonicals, but as long as I keep it under control, it's OK.
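For the robots.txt route, I am thinking of something like this (the patterns are just a guess at my URL structure, not final):

```
# Hypothetical robots.txt fragment blocking the auto-generated paths
User-agent: *
Disallow: /en/home/
Disallow: /fi/etusivu/
```

Though I understand that if a URL is blocked in robots.txt the crawler can never see its canonical tag, so I should pick one method or the other per URL.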
Please, let me know if this would be a proper solution.
Thank you in advance for your help; if I can ever help you with something, here we are!
-
OK, the problem is that your pages are all canonical to themselves. The canonical tag should point at the main page for the content, not at each copy itself. For your first example, all pages that get their content from http://www.spain-internship.com/en need canonical tags pointing to that page. Instead, the copy page has this:
<link href="http://www.spain-internship.com/fi/etusivu/186-all-fi/home-page-fi" rel="canonical" />
when it should have:
<link href="http://www.spain-internship.com/fi/" rel="canonical" />
-
I will provide a few so you can look!
Detected as duplicated:
http://www.spain-internship.com/en
http://www.spain-internship.com/en/home/149-all-en/placement-spain
Same here:
http://www.spain-internship.com/fi
http://www.spain-internship.com/fi/etusivu/186-all-fi/home-page-fi
And:
http://www.spain-internship.com/en/internships-in-salamanca
http://www.spain-internship.com/es/internships-in-salamanca
The first one in each pair is the original; the rest have canonicals. Still detected as duplicated.
-
Do you have an example of one of these generated pages as well? Everything looks fine on the main page.
-
Hey,
Yes, sure.
This is the duplicate of /en:
http://www.spain-internship.com/en/home/149-all-en/placement-spain
Thanks!
-
Do you have a link to one of these pages so we can look at how it is deploying the canonical onto the page?