Should I canonicalize URLs with no query params even though query params are always automatically appended?
-
There's a section of my client's website that presents quarterly government financial data. Users can filter results to see different years and quarters of financial info.
When a user navigates to those pages, the URLs automatically append the latest query parameters. It's not a redirect; when I asked a developer what mechanism was behind this, he said "magic." He honestly didn't know how to describe it.
So my question is: is it OK to canonicalize to the URL without any query parameters, knowing that the user will always be served a page that does have query parameters? I need to figure out how to manage all of the various versions of these URLs.
-
This is VERY helpful, thank you so much.
-
I would recommend canonicalizing these to a version of the page without query strings IF you are not trying to optimize different versions of the page for different keyword searches, and/or if the content doesn't change in a way that is significant for SERP targeting. From what you've described, I think both of those are true, so I would canonicalize to a version without the query strings.
An example where you would NOT want to do that is an ecommerce site with a URL like www.example.com/product-detail.jsp?pid=1234. Here the query string is highly relevant, and each variation should be indexed uniquely for different keywords, assuming each value of "pid" represents a unique product. Another example would be a site of state-by-state info pages like www.example.com/locations?state=WA. Once again, the query strings are relevant and should be part of the canonical.
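For illustration, here's a minimal sketch of what the canonical tag would look like in the head of that hypothetical product page; the "pid" parameter stays in the canonical URL because it defines the page:

<!-- On www.example.com/product-detail.jsp?pid=1234 (hypothetical URL from the example above) -->
<!-- "pid" identifies a unique product, so it remains part of the canonical -->
<link rel="canonical" href="https://www.example.com/product-detail.jsp?pid=1234">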
But in any case, a canonical should still be used to remove extraneous query strings, even in the cases above. For example, in addition to the "pid" or "state" query strings, you might also find links that add tracking data like "utm_source", etc. You want to make sure to canonicalize down to exactly the version of the page that you want in the search engine's index.
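As a sketch of that (again using the hypothetical product URL), a visitor might arrive via a tagged link, but the canonical strips everything except the parameter that defines the page:

<!-- Page served at www.example.com/product-detail.jsp?pid=1234&utm_source=newsletter -->
<!-- The canonical keeps "pid" but drops the tracking parameter -->
<link rel="canonical" href="https://www.example.com/product-detail.jsp?pid=1234">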
You wrote that the query strings and page content vary based on years and quarters. If we assume you aren't trying to target search terms that include the year and quarter, then I would canonicalize to the URL without those strings (or to a default set). But if you are trying to target searches where the user's phrase includes a specific year or quarter, then not only would you include those parameters in the canonical URL, you would also need to vary enough of the page content (metadata, title, and on-page copy) to keep the variants from being flagged as duplicates.
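For your quarterly data pages, assuming the first approach, a minimal sketch (the path and parameter names here are hypothetical, since you didn't share the actual URLs): every filtered variant carries the same canonical, pointing at the parameter-free URL:

<!-- Served at www.example.com/financials?year=2023&quarter=4 (hypothetical path and parameters) -->
<!-- Every year/quarter variant points to the same parameter-free canonical -->
<link rel="canonical" href="https://www.example.com/financials">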