Best Practice Approaches to Canonicals vs. Indexing in Google Sitemap vs. No Follow Tags
-
Hi There,
I am working on the following website: https://wave.com.au/
I have become aware that there are different pages that are competing for the same keywords.
For example, I have just started updating a core category page - Anaesthetics (https://wave.com.au/job-specialties/anaesthetics/) - to focus mainly on the keyword phrase ‘Anaesthetist Jobs’.
But I have noticed that there are existing landing pages that contain very similar content.
We want to direct organic traffic to our core pages e.g. (https://wave.com.au/job-specialties/anaesthetics/).
This leaves me to deal with the duplicate pages, either with a canonical link (manageable in our CMS), a nofollow tag, or an update to robots.txt. Our resident developer also suggested that it might be good to use Google Index in the sitemap to tell Google that these pages are of less value?
What is the best approach? Should I add a canonical link to the landing pages pointing to the category page? Or alternatively, should I use the Google Index approach? Or even another approach entirely?
Any advice would be greatly appreciated.
Thanks!
-
This all sounds good, just make sure before you proceed, you use GA to check what % of your SEO (segment: "Organic") traffic comes from these URLs. Don't act on a hunch, act on data!
-
Thank you for the comprehensive response; this is greatly appreciated, my friend.
Yes, I agree. I have since read further and have completely ruled out blocking (robots.txt, etc.) as an option.
I went back and read some more Moz/SEO articles and I think I have narrowed it down to either:
a) canonicals pointing from the landing pages to the core website category pages
b) NoIndex/Follow tags on the landing pages
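For clarity, the two options differ only in what goes in the landing page's `<head>`. A minimal sketch of how to spot which directive a page carries, using Python's stdlib parser (the sample markup and URLs are illustrative, not your actual pages):

```python
from html.parser import HTMLParser

class HeadDirectives(HTMLParser):
    """Collects the canonical URL and robots meta directive from page markup."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.robots = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")
        elif tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content")

# Option (a): landing page canonicalised to the core category page
option_a = '<head><link rel="canonical" href="https://wave.com.au/job-specialties/anaesthetics/"></head>'

# Option (b): landing page kept out of the index but still crawled
option_b = '<head><meta name="robots" content="noindex, follow"></head>'

for html in (option_a, option_b):
    p = HeadDirectives()
    p.feed(html)
    print("canonical:", p.canonical, "| robots:", p.robots)
```

A quick audit script like this is also handy after rollout, to confirm every landing page actually got the tag you intended and not both at once.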
Basically, I think the key contextual factors to keep in mind are that:
- The landing pages are basically just sent to people directly by our recruiters in emails and over the phone, so they are almost counted as direct traffic.
- They just contain a form and don't encourage click-through into our core website beyond the logo etc. - we just want people to register directly on that page.
- Over the past year, the landing pages received far fewer visits, and their bounce and exit rates were higher.
- My manager has told me to prioritise SEO towards the core category pages, as they see the landing pages as purely for UX/registrations/internal recruiting practices rather than for attracting organic traffic.
I think canonicals would probably work best, since in some cases the landing pages were ranking higher than the category pages, and the tags should hopefully transfer a bit of ranking power across.
But perhaps you are right: I can batch-apply canonicals, monitor the results, and then progress.
Once again, thank you for your response.
-
First of all, keep in mind that Google has chosen the pages it is deciding to rank for one reason or another, and that canonical tags do not consolidate link equity (SEO authority) in the same way that 301 redirects do.
As such, it's possible that you could implement a very 'logical' canonical tag structure, but for whatever reason Google may not give your new 'canonical' URLs the same rankings it ascribed to the old URLs. So there is a possibility that you could lose some rankings! Google's acceptance of both the canonical tag and the 301 redirect depends upon the (machine-like) similarity of the content on both URLs.
Think of Boolean string similarity. You take two strings of text, whack them into a comparison tool, and it tells you the 'percentage' of similarity between them. Google operates something similar, yet infinitely more sophisticated. No one has told me that they do this; I have observed it over hundreds of site migration projects where sometimes Google gives the new site loads of SEO authority through the 301s and sometimes not much at all. For me, the two main causes of Google refusing to accept new canonical URLs are redirect chains (which can include soft redirect chains) and content 'dissimilarity'. Basically, content has won links and interactions on one URL which prove it is popular and authoritative. If you move that content somewhere else, or tell Google to go somewhere else instead, they have to be pretty certain that the new content is essentially the same; otherwise it's a risk to them and an 'unknown quantity' in the SERPs (in terms of CTR and so on).
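Google's actual similarity signals are unknown, but the intuition above can be sketched with Python's stdlib `difflib`; the sample strings and the 80% threshold below are purely illustrative assumptions, not anything Google has published:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough percentage similarity between two text strings."""
    return SequenceMatcher(None, a, b).ratio() * 100

# Hypothetical page copy for illustration only
landing = "Anaesthetist jobs across Australia. Register with our recruiters today."
category = "Anaesthetist jobs across Australia. Browse roles and register today."

score = similarity(landing, category)
print(f"{score:.0f}% similar")

# Illustrative rule of thumb only: the closer two pages read, the more likely
# a canonical (or a 301) is to be honoured and pass its signals across.
if score > 80:
    print("Good candidate for a canonical")
```

Running page pairs through a check like this before tagging gives you a crude, repeatable way to flag the 'dissimilar' pairs that are most at risk of Google ignoring the canonical.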
If you're pretty damn sure that you have loads of URLs which are essentially the same, read the same, reference the same prices for things (one isn't cheaper than the other), and that Google has really chosen the wrong page to rank in terms of Google-user click-through UX, then go ahead and lay out your canonical tag strategy.
Personally I'd pick sections of the site and do it one part at a time in isolation, so you can minimise losses from disturbing Google and also measure your efforts more effectively and efficiently.
If you no-index and robots-block URLs, it KILLS their SEO authority (dead) instead of moving it elsewhere - so steer clear of those except in extreme situations; they're really a last resort if you have the worst sprawling architecture imaginable. Canonical tags can shift ranking URLs and relevance, but don't pipe much authority. 301 redirects (if handled correctly) do all three things.
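The 'kills it dead' point about robots blocking can be seen mechanically with Python's stdlib `robotparser` - a sketch with a hypothetical disallow rule, not your actual robots.txt:

```python
from urllib import robotparser

# Hypothetical robots.txt content for illustration only
rules = """
User-agent: *
Disallow: /landing/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A blocked URL is never crawled, so any canonical tag or redirect on it
# can never be seen by Google - its signals are simply stranded.
print(rp.can_fetch("Googlebot", "https://wave.com.au/landing/anaesthetist-jobs/"))
print(rp.can_fetch("Googlebot", "https://wave.com.au/job-specialties/anaesthetics/"))
```

This is why a robots.txt block and a canonical tag on the same page work against each other: the crawler is turned away before it can read the tag.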
What you have to ask yourself is, if you flat out deleted the pages you don't want to rank (obviously you wouldn't do this, as it would cause internal UX issues on your site) - if you did that, would Google:
A) Rank the other pages in their place from your site, which you want Google to rank
B) Give up on you and just rank similar pages (to the ones you don't want to rank) from other, competing sites instead
If you think (A), take a measured, sectioned, small approach to canonical tag deployment and really test it before full roll-out. If you think (B), then you are admitting that there's something more Google-friendly on the pages you don't want to be ranking, and you just have to accept that your Google-to-conversion funnel will never be completely perfect the way you want it to be. You have to satisfy Google, not the other way around.
Hope that helps!