Trailing Slashes on Home Pages
-
I do not think I have a problem here, but a second opinion would be welcomed...
I have a site which has the rel=canonical tag with the trailing slash displayed, i.e. www.example.com/
The sitemap has it without the trailing slash: www.example.com
Google's cached copy has it with the trailing slash, but the browser displays it without.
I want to say it's perfectly fine (for the home page) as I tend to think they are treated (with/without trailing slashes) as the same canonical URL.
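For reference, the canonical tag being described would look something like this in the page's head (example.com is a placeholder here; whether to keep the trailing slash on the root is exactly the question at hand):

```html
<!-- Canonical pointing at the root URL with the trailing slash, as on the site described -->
<link rel="canonical" href="http://www.example.com/" />
```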
-
Totally agree, it's kind of a non-issue. Improve the canonical if you can, but really, don't sweat it.
-
Oh yes, thanks for that. I've read that page a few times. :S
Apologies for the confusion Alex.
Don't have a crisis of confidence anyway! If there's a canonical, 99 times out of 100 (probably more) I'm sure Google will get this right, whether it's the homepage or not.
What server is the site hosted on Alex? Or are the URLs controlled by a CMS?
-
That is certainly my understanding - the homepage is a special case.
This pretty much details it in full:
-
Hi Alex
Ah, crisis of confidence again!
I didn't think this was the case for the index page, though. I thought normalisation meant they were treated as the same page. As Marcus said, I can't 301 the example.com page to example.com/.
-
Hey,
In an ideal world, make sure it has no trailing slash. But, as per the Google-specific recommendations, make sure both resolve as a 200 OK rather than redirecting the slash version to the non-slash one.
Think about it -
The browser removes the trailing slash. Also, go to any big site - Google, SEOmoz - they all have no slash. But check it in webbug and they resolve on both.
For me, having a trailing slash on the root or anywhere is just something else for folks to forget to add if they are linking or some such.
Here I would just remove the trailing slash from your canonical if you can, just to be sure, but the usual rules don't apply on the homepage: www.example.com & www.example.com/ are regarded as the same thing.
I have constant crises of confidence - I often wonder if I am making it up as I go along, or whether somewhere down the history of all the hundreds of SEO audits I have done I actually learned something along the way! I have actually googled something I was unsure about and found my own blog post about it before. I think, much like Homer Simpson, every new thing I learn now pushes out an older thing!
Hope that helps!
Marcus -
Hi Marcus
I agree that outside of the home page it's an issue (& good answer btw), but it's only the index page I'm worried about.
It's that crisis of confidence that I'm sure we all get from time to time as to whether something rather simple/fundamental is actually as we believe it to be.
I've been re-reading this document http://tools.ietf.org/html/rfc3986 and I think it's section 6.2.3 (Scheme-Based Normalization) that covers normalization of root URLs.
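As a small illustration of that RFC 3986 rule (for http URLs, an empty path is equivalent to "/"), here is a sketch using only the Python standard library; the function name `normalize_root` is my own, not from the RFC:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_root(url):
    """Apply RFC 3986 section 6.2.3: for http(s) URLs, an empty path
    is equivalent to "/", so normalise it to "/"."""
    parts = urlsplit(url)
    path = parts.path if parts.path else "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path,
                       parts.query, parts.fragment))

# Both forms of the root URL normalise to the same URI...
print(normalize_root("http://www.example.com"))   # http://www.example.com/
print(normalize_root("http://www.example.com/"))  # http://www.example.com/

# ...but deeper paths with and without a slash stay distinct.
print(normalize_root("http://www.example.com/page") ==
      normalize_root("http://www.example.com/page/"))  # False
```

This is why the root is the one URL where the slash/no-slash question genuinely can't create a duplicate.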
-
The two versions you speak of are treated as duplicate content. Ideally you should make sure the URL is the same everywhere, and 301 redirect to your preferred version. Are you sure the browser itself isn't removing the trailing slash? I know Chrome does on non-directory pages.
That said, if you have a canonical tag it shouldn't cause a massive problem, but it will help to do everything properly. Do everything you can to make sure all links under your control use the same version.
-
Hey Alex
There is a good overview of this here:
http://googlewebmastercentral.blogspot.co.uk/2010/04/to-slash-or-not-to-slash.html
Outside of the homepage, a slash URL and a non-slash URL are regarded as two separate pages, so are technically duplicates. Now, Google will generally deal with this, but it is not optimal (which is what we are all about, eh), so you should make a call and either go / or no /, and then 301 the other version to the default.
The homepage should resolve with a 200 OK on both versions and not redirect the slash to the non-slash. The browser will generally remove the slash on a root URL.
This is from the above link:
Rest assured that for your root URL specifically, http://example.com is equivalent to http://example.com/ and can’t be redirected even if you’re Chuck Norris.
If you are using a CMS there are usually plugins or configuration options to enforce a slash if that is your preferred option.
The big deal here is to:
A - be consistent
B - 301 the alternative to the preferred version, for crawl optimisation and to ensure no daft duplication issues crop up.
Hope that helps!
Marcus
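To make Marcus's rule concrete, here is a hedged Python sketch (the function name `redirect_decision` is mine, and it assumes no-trailing-slash is the preferred style): every path except the root gets its trailing-slash variant 301'd to the preferred form, while the root always answers 200 and is never redirected.

```python
def redirect_decision(path):
    """Return (status, location) for a requested path, assuming the
    no-trailing-slash version is preferred site-wide.

    The root "/" is the special case: it is never redirected, because
    http://example.com and http://example.com/ are the same resource.
    """
    if path == "/" or not path.endswith("/"):
        return (200, None)            # serve the page as-is
    return (301, path.rstrip("/"))    # 301 the slash variant to the preferred form

print(redirect_decision("/"))          # (200, None) - homepage, never redirected
print(redirect_decision("/widgets"))   # (200, None) - already the preferred form
print(redirect_decision("/widgets/"))  # (301, '/widgets') - duplicate, redirect
```

In practice this logic usually lives in server config or a CMS plugin rather than application code, but the decision table is the same.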