Canonical or No-index
-
Just a quick question really.
Say I have a Promotions page where I list all current promotions for a product, and update it regularly to reflect the latest offer codes etc.
On top of that, I have Offer announcement posts for specific promotions for that product, highlighting the promotion very briefly but also linking back to the main product promotion page, which has the promotion duplicated. So the main page is 1000+ words with half a dozen promotions, while the small post might be 200 words and quickly becomes irrelevant, as it is a limited-time news article.
Now, I don't want the announcement post indexed (unless it has a larger news story attached to the promotion, but for this purpose presume it doesn't). Initially the core essence of the post will be duplicated in the main Promotions page, but later, as the offer expires, it won't be. Therefore, would you rel canonical or just simply noindex?
-
But it's the date that makes them different! As in, if I were specifically looking for info on 2013, I wouldn't WANT the 2014 page to be served, and vice versa.
I would leave them both indexed - assuming the data is entirely different in each.
-
OK, but using Canonical for say:
Black Friday sales Roundup 2013 to Black Friday Sales Roundup 2014
is OK? Or should I leave both indexed? Both are quality pages, but they target virtually the same keywords, apart from the date.
-
That is interesting thanks. I do actually have links to further information in exactly the way you say.
Including some basic information about the product could work... I will give it some thought, as I will need to make sure it is of sufficient quality.
Well, for definite it looks like I am using "canonical" incorrectly.
Work to do...
-
^ I agree with Martijn here. Great point.
-
Hi there
If it were me - leave the promotion indexed because you want that promotion to be promoted and people are always looking for deals. Also, take a look at the Customer Journey from Google to see where opportunities lie in getting that promotion page and circulating - you could be missing some big opportunities.
I would also (from the promotions page) have a "Learn more about this product" sort of button so that the users who do land on that page can get more information - especially if you have more content about the product. Some customers will land there not ready to buy, but will be looking for information - get them the information they need, and quickly.
You could bulletpoint the information on these smaller pages so people can quickly read and assess benefits. But in my opinion, I am not seeing a reason to canonicalize these or noindex them. Unless I am misunderstanding - if that's the case, please let me know!
Hope this helps a bit - good luck!
-
I'd say noindex, as it's pretty hard to point the canonical at one page when there are multiple promotions.
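For reference, the two options in this thread each boil down to a single tag in the announcement post's head. A minimal sketch, assuming a hypothetical example.com URL for the main promotions page:

```javascript
// Option 1: noindex - the short-lived announcement post is dropped from
// the index entirely (follow still lets link equity flow).
function noindexTag() {
  return '<meta name="robots" content="noindex, follow">';
}

// Option 2: canonical - the post stays crawlable but consolidates
// ranking signals to the main promotions page.
// The URL here is a hypothetical example, not from the question.
function canonicalTag(mainPromoUrl) {
  return `<link rel="canonical" href="${mainPromoUrl}">`;
}

console.log(noindexTag());
console.log(canonicalTag('https://example.com/promotions'));
```

Worth noting that these are generally treated as alternatives: putting both on the same page tends to send conflicting signals.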
Related Questions
-
Canonicals for Splitting up large pagination pages
Hi there, Our dev team are looking at speeding up load times and making pages easier to browse by splitting up our pagination pages to 10 items per page rather than 1000s (exact number to be determined) - sounds like a great idea, but we're a little concerned about the canonicals on this one. At the moment we rel canonical (self) plus prev and next, so b is rel b, prev a, and next c - continued for each letter. Now the URL structure will be a1, a(n+), b1, b(n+), c1, c(n+). Should we keep the canonicals looping through the whole new structure, or should we loop each letter within itself? Either b1 rel b1, prev a(n+), next b2 - even though they're not strictly continuing the sequence. Or a1 rel a1, next a2. a2 rel a2, prev a1, next a3 | b1 rel b1, next b2, b2 rel b2, prev b1, next b3 etc. Would love to hear your points of view, hope that all made sense 🙂 I'm leaning towards the first one, even though it doesn't continue the letter sequence, because it loops alphabetically, which is currently working for us already. This is an example of the page we're hoping to split up: https://www.world-airport-codes.com/alphabetical/airport-name/b.html
Intermediate & Advanced SEO | Fubra
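If it helps, the within-letter option described above (each letter's pages chaining only among themselves) can be sketched as a small tag generator. The URL pattern is a hypothetical simplification of the real site structure:

```javascript
// Sketch of per-letter pagination tags: each page self-canonicalizes,
// and prev/next only chain within the same letter.
// The /alphabetical/airport-name/ path is an assumed example pattern.
function paginationTags(letter, page, lastPage) {
  const url = n => `/alphabetical/airport-name/${letter}${n}.html`;
  const tags = [`<link rel="canonical" href="${url(page)}">`];
  if (page > 1) tags.push(`<link rel="prev" href="${url(page - 1)}">`);
  if (page < lastPage) tags.push(`<link rel="next" href="${url(page + 1)}">`);
  return tags;
}

console.log(paginationTags('b', 2, 3));
```

First and last pages of a letter simply omit prev or next, which is how a self-contained sequence is normally signalled.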
Google does not want to index my page
I have a site with hundreds of pages indexed on Google. But there is a page that I put in the footer section that Google seems not to like and is not indexing. I've tried submitting it to their index through Google Webmaster Tools, and it will appear in Google's index, but then after a few days it's gone again. Before, that page had a canonical meta tag pointing to another page, but it has been removed now.
Intermediate & Advanced SEO | odihost
Canonical Confusion
So I have products appearing in several categories, all of which have the correct canonical url. But Moz is flagging up pages I never knew existed, and I don't understand why they exist at all and more so why my canonical fix isn't occurring for them, as below: SEO Friendly URL: http://thespacecollective.com/nasa-pin-sets/nasa-shuttle-mission-pin-set-no2 Weird URL to same product: http://thespacecollective.com/index.php?route=themecontrol/product&product_id=159 Is this a developer problem rather than an SEO problem?
Intermediate & Advanced SEO | moon-boots
Mass Removal Request from Google Index
Hi, I am trying to cleanse a news website. When this website was first made, the people that set it up copied in all kinds of articles they had as a newspaper, including tests, internal communication, and drafts. This site has lots of junk, but all of it was in the initial backup, i.e. before 1st June 2012. So, by removing all mixed content prior to that date, we have pure articles starting 1st June 2012. Therefore: my dynamic sitemap now contains only articles with a release date between 1st June 2012 and now, and any article with a release date prior to 1st June 2012 returns a custom 404 page with a "noindex" meta tag instead of the actual content of the article. The question is how I can remove from the Google index, as fast as possible, all this junk that is no longer on the site but still appears in Google results. I know that for individual URLs I need to request removal at https://www.google.com/webmasters/tools/removals. The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove. Should I put the articles back in the sitemap so the search engines crawl the sitemap and see all the 404s? I believe this is very wrong; as far as I know it will cause problems, because search engines will try to access non-existent content that the sitemap declares as existent, and will report errors in Webmaster Tools. Should I submit a deleted-items sitemap using the <expires> tag (https://developers.google.com/custom-search/docs/indexing#on-demand-indexing)? I think that is for custom search engines only, and not for the generic Google search engine. The site unfortunately doesn't use any kind of "folder" hierarchy in its URLs, but instead ugly GET params, and a folder-based pattern is impossible since all articles (removed junk and actual articles) are of the form http://www.example.com/docid=123456. So, how can I bulk remove all the junk from the Google index relatively fast?
Intermediate & Advanced SEO | ioannisa
301 canonical'd pages?
I have an ecommerce site with many different URLs for the same product. Let's say the product is a hat. It's in: a) mysite.com/products/hat b) mysite.com/collections/head-ware/hat c) mysite.com/collections/stuff-to-wear-on-your-head/hat Right now, A is the canonical page for B and C. I want to clean up my site so that every product has only ONE unique URL, which is linked to from all the collections. So the B and C URLs will be broken. Is it necessary to 301 them if they were already canonical'd? Based on the number of products I have, I would have to 301 1000+ URLs. I'm just trying to figure out what I need to do to avoid getting penalized. Thanks
Intermediate & Advanced SEO | birchlore
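For what it's worth, the 1000+ redirects in the question above could be driven by one path-mapping rule rather than written out by hand. A sketch, assuming the hypothetical mysite.com URL patterns from the question:

```javascript
// Sketch: derive the canonical /products/ URL from a /collections/ URL
// so the old collection paths can be 301-redirected in bulk.
// The URL patterns are the hypothetical ones from the question.
function canonicalProductUrl(path) {
  const match = path.match(/^\/collections\/[^\/]+\/([^\/]+)$/);
  return match ? `/products/${match[1]}` : null;
}

console.log(canonicalProductUrl('/collections/head-ware/hat'));  // → /products/hat
console.log(canonicalProductUrl('/products/hat'));               // → null (already canonical)
```

A rule like this could live in server rewrite config or a middleware layer; the point is that the mapping is mechanical, so the number of URLs doesn't matter.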
Issue: Rel Canonical
SEOmoz gives me notices about rel canonical issues; how can I resolve them? Can anyone help me? What is rel canonical, and how can I clear these notices?
Intermediate & Advanced SEO | learningall
Why is a page with a noindex code being indexed?
I was looking through the pages indexed by Google (with site:www.mywebsite.com) and one of the results was a page with "noindex, follow" in the code that seems to be a page generated by blog searches. Any ideas why it seems to be indexed or how to de-index it?
Intermediate & Advanced SEO | theLotter
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees this URL www.example.com/#!page-name-here, it basically renders this page www.example.com/# while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense. So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you essentially are taken to our homepage content (even though the URL isn't exactly the canonical homepage URL...which s/b www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (ie. title, meta descrip, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And, I've recently seen the homepage drop like a rock for a search of our brand name which has ranked #1 for months. Now, admittedly we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries. So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs because the # seems to screw everything up (server doesn't acknowledge what comes after the #). I "think" our only option here is to try and add some 301 redirects via Javascript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship w/ Javascript, but I think that's our only resort.....unless, someone here has a better way? 
If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal w/ this issue. Best, -G
Intermediate & Advanced SEO | Celts18
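The JavaScript redirect idea in the question above can be kept testable by separating the URL mapping from the browser call. A sketch, using the hypothetical page-name-here URL from the question:

```javascript
// Map an old hashbang fragment to the new clean path.
// Pure function, so the mapping itself can be unit-tested.
function hashbangToPath(hash) {
  return hash.startsWith('#!') ? '/' + hash.slice(2) : null;
}

// Browser-only part: run the redirect when a hashbang URL is loaded.
// location.replace avoids adding the old URL to browser history; note
// this is a client-side hop, not a true HTTP 301, since the server
// never sees the fragment.
if (typeof window !== 'undefined') {
  const target = hashbangToPath(window.location.hash);
  if (target) window.location.replace(target);
}

console.log(hashbangToPath('#!page-name-here'));  // → /page-name-here
```

Google has been known to follow JavaScript redirects like this, but they carry weaker signals than server-side 301s, which is exactly the trade-off the question wrestles with.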