Differing numbers of pages indexed with and without the trailing slash
-
I noticed today that a site: query in Google (UK) for a certain domain I'm looking at returns different result counts depending on whether or not a trailing slash is added at the end; with the slash, the count is dramatically higher. This is a domain with a few duplicate content issues.
It seems very rare but I've managed to replicate it for a couple of other well known domains, so this is the phenomenon I'm referring to:
site:travelsupermarket.com - 16'300 results
site:travelsupermarket.com/ - 45'500 results
site:guardian.co.uk - 120'000'000 results
site:guardian.co.uk/ - 121'000'000 results
For the particular domain I'm looking at, the numbers are 19'000 without the trailing slash and 800'000 with it! As mentioned, there are a few duplicate content issues at the moment that I'm trying to tidy up, but how should I interpret this? Has anyone seen this before, and can anyone advise what it could indicate?
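As an aside, trailing-slash variants are exactly the kind of duplication that can inflate counts: the same page reachable at two URLs. A quick sketch (hypothetical URLs, for illustration only) of collapsing such variants to a single key:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Collapse trailing-slash (and host-case) variants of a URL to one key.

    Hypothetical helper for illustration: lowercases the host and strips a
    trailing slash from non-root paths, so /offers and /offers/ compare equal.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    netloc = netloc.lower()
    if path != "/" and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit((scheme, netloc, path, query, fragment))

urls = [
    "http://example.com/offers/",
    "http://example.com/offers",
    "http://EXAMPLE.com/offers/",
    "http://example.com/",
]
unique = {normalize(u) for u in urls}
print(len(unique))  # the three /offers variants collapse into one key
```

If a crawl of the site yields noticeably fewer unique keys than raw URLs, slash or host-case variants are likely contributing to the duplicate content issues.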
Thanks in advance for any answers.
-
"There is an XML sitemap submitted and GWMT shows a total number of indexed pages in the 800'000 region."
Brilliant. That's the number I would trust.
Incidentally, I see different numbers from the ones you quoted for all four site: queries. Variances are pretty normal in my experience.
I've never noticed it myself, but I'd be intrigued to hear if someone else has correlated such variances with a technical issue or penalty.
-
Hi Adam, thanks for your response.
There is an XML sitemap submitted and GWMT shows a total number of indexed pages in the 800'000 region.
While I appreciate that site: is not a precise tool, the trailing-slash and no-trailing-slash numbers match for virtually every other domain I try this with. The fact that they differ so widely in this case suggests to me that something may be amiss.
-
The site: query on Google isn't a precise tool. It's not uncommon to see strange variances like that.
For a more accurate picture, submit an XML Sitemap via Google Webmaster Tools; Google will then report how many of those pages it has indexed.
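For reference, a minimal XML Sitemap file is just a list of URLs in this shape (the URLs below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/page/</loc>
  </url>
  <url>
    <loc>http://www.example.com/other-page/</loc>
  </url>
</urlset>
```

Once submitted, the Sitemaps report in Webmaster Tools shows how many of the listed URLs Google has indexed.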
Related Questions
-
Google keeps marking different pages as duplicates
My website has many pages like this:
mywebsite/company1/valuation
mywebsite/company2/valuation
mywebsite/company3/valuation
mywebsite/company4/valuation
...
These pages describe the valuation of each company. They were never identical, but initially I included a few generic paragraphs (what is valuation, what is a valuation model, etc.) on all of them, so parts of their content were identical. Google marked many of these pages as duplicates in Google Search Console, so I modified the content: I removed the generic paragraphs and added information that is unique to each company. As a result, the pages are now very different from each other and share little content. Although it has been more than a month since I made the modification, Google still marks the majority of these pages as duplicates, even though it has already crawled the modified versions. Is there anything else I can do in this situation? Thanks
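One way to sanity-check how similar the pages still look (a rough sketch, not how Google measures duplication; the sample text here is invented) is word-shingle overlap:

```python
def shingles(text, k=3):
    # k-word shingles of the page text, case-folded
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    # overlap of the two shingle sets: 1.0 = identical, 0.0 = disjoint
    sa, sb = shingles(a), shingles(b)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

boilerplate = "What is valuation? A valuation model estimates what a company is worth."
page1 = boilerplate + " Acme Ltd trades at a premium to its peers."
page2 = boilerplate + " Widget Co revenue fell sharply last quarter."
print(round(jaccard(page1, page2), 2))  # noticeably below 1.0 despite the shared opening
```

Pages that share only boilerplate score low once the boilerplate is removed; if the modified pages still score high against each other, some shared template text may remain.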
Technical SEO | | TuanDo96270 -
Is my page being indexed?
To put you all in context, here is the situation: I have pages that are only accessible via an internal search tool that shows the best results for each request. Let's say I want to see the results on page 2; that page will have a request in the URL like this: ?p=2&s=12&lang=1&seed=3688 The situation is that we've disallowed every URL that contains a "?" in the robots.txt file, which means that Google doesn't crawl page 2, 3, 4 and so on. If a page is only accessible via page 2, do you think Google will be able to access it? The URL of the page is included in the sitemap. Thank you in advance for the help!
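For context, the rule described (assuming Google's wildcard syntax, which Googlebot supports) would look something like this:

```text
User-agent: *
Disallow: /*?
```

Note that a Disallow blocks crawling, not indexing as such: a URL listed in the sitemap can still end up indexed, but Google won't fetch the blocked pages or follow the links on them.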
Technical SEO | | alexrbrg0 -
Discrepancy in actual indexed pages vs search console
Hi support, I checked my Search Console. It says that 8344 pages from www.printcious.com/au/sitemap.xml are indexed by Google. However, if I search for site:www.printcious.com/au it only returns 79 results. See http://imgur.com/a/FUOY2 https://www.google.com/search?num=100&safe=off&biw=1366&bih=638&q=site%3Awww.printcious.com%2Fau&oq=site%3Awww.printcious.com%2Fau&gs_l=serp.3...109843.110225.0.110430.4.4.0.0.0.0.102.275.1j2.3.0....0...1c.1.64.serp..1.0.0.htlbSGrS8p8 Could you please advise why there is a discrepancy? Thanks.
Technical SEO | | Printcious0 -
How long after disallowing Googlebot from crawling a domain until those pages drop out of their index?
We recently had Google crawl a version of the site that we thought we had already disallowed. We have since stopped them crawling it, but pages from that version are still appearing in the search results (the version we don't want indexed and served is our .us domain, which should have been blocked). My question is this: how long should I expect that domain to stay in their index after disallowing their bot? Is this a matter of days, weeks, or months?
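Related background, sketched generically rather than for the .us site above: robots.txt stops crawling, but it does not by itself remove URLs that are already indexed. The usual removal signal is a noindex directive, which Google can only see if the page is allowed to be crawled:

```html
<!-- in the <head> of each page to be dropped from the index -->
<meta name="robots" content="noindex">
```

An `X-Robots-Tag: noindex` HTTP response header does the same job for non-HTML files.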
Technical SEO | | TLM0 -
Duplicate page errors from pages that don't even exist
Hi, I am having this issue with SEOmoz's Crawl Diagnostics report. There are a lot of crawl errors on pages that don't even exist. My website has around 40-50 pages, but the report shows that 375 pages have been crawled. My guess is that the errors have something to do with my recent .htaccess configuration: I recently configured my .htaccess to add a trailing slash at the end of URLs. There is no internal linking issue such as an infinite loop when navigating the website, but looping is reported in the SEOmoz report. Here is an example of a reported link: http://www.mywebsite.com/Door/Doors/GlassNow-Services/GlassNow-Services/Glass-Compliance-Audit/GlassNow-Services/GlassNow-Services/Glass-Compliance-Audit/ By the way, there is no such crawl error in my Google Webmaster Tools. Any help appreciated
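For reference, a trailing-slash rule of the kind described usually looks something like this sketch (the actual .htaccess will differ):

```apache
RewriteEngine On
# Only rewrite requests that are not real files on disk
RewriteCond %{REQUEST_FILENAME} !-f
# ...and whose path doesn't already end in a slash
RewriteCond %{REQUEST_URI} !/$
# 301 to the same path with a trailing slash appended
RewriteRule ^(.*)$ /$1/ [L,R=301]
```

Repeating path segments like the GlassNow-Services example are more often caused by relative links (href="GlassNow-Services/..." instead of href="/GlassNow-Services/...") resolving against an already-rewritten URL, so that is worth checking alongside the rewrite rule itself.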
Technical SEO | | mmoezzi0 -
Drastic increase of indexed pages correlated to rankings loss?
Our ecommerce website has had a drastic increase in indexed pages and an equal loss of Google organic traffic. After 10/1 the number of indexed pages jumped from 240k to 5.7 million by the end of the year, according to GWT. Meanwhile, the sitemap tops out at 14,192 pages, with 13,324 indexed. Organic traffic on some top keyphrases began declining by half after 10/26, and rankings (previously in the top 5 spots) have dropped to the fifth page of results. This website produces session IDs (/c=), so we have been blocking /c=/ in the robots.txt file. We also have a rel=canonical on all pages pointing at the correct URL. With all of this in place, traffic hasn't recovered. Is there a correlation between this spike in indexed pages and the lost keyword rankings? Any advice on investigating and correcting this further would be greatly appreciated. Thanks.
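One caveat worth illustrating (the robots.txt pattern here is a guess at the URL structure described):

```text
User-agent: *
Disallow: /*/c=
```

Blocking the session-ID URLs this way means Googlebot never fetches them, so it also never sees the rel=canonical on those variants; blocked URLs can therefore linger in the index as URL-only entries, which may feed the kind of index bloat described.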
Technical SEO | | marketing_zoovy.com0 -
Will rel=canonical cause a page to be indexed?
Say I have two pages with duplicate content. One of them is http://www.originalsite.com/originalpage, which is the page I want indexed on Google (domain rank already built, etc.). http://www.originalpage.com is more of an ease-of-use domain, primarily for printed material. If both of these pages are identical, will rel=canonical pointing to http://www.originalsite.com/originalpage cause it to be indexed? I do not plan on having any links on my site going to http://www.originalsite.com/originalpage; they would instead go to http://www.originalpage.com.
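For what it's worth, the setup described would put a canonical link on the duplicate domain's pages pointing at the preferred URL, along these lines:

```html
<!-- in the <head> of the page served at http://www.originalpage.com -->
<link rel="canonical" href="http://www.originalsite.com/originalpage">
```

rel=canonical is a hint rather than a directive; if the goal is for the ease-of-use domain never to appear in results, a 301 redirect from http://www.originalpage.com to the preferred URL is the stronger signal.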
Technical SEO | | jgower0 -
Pages not Indexed after a successful Google Fetch
I am trying to understand why Google isn't indexing key content on my site. www.BeyondTransition.com is indexed and new pages show up within a couple of hours. My key content is 6 pages of information for each of 3000 events (driven by MySQL on a WordPress platform). These pages are reached via a search page, with no direct navigation from the home page. When I link to an event page from an indexed page, it doesn't show up in search results. When I use Fetch in Webmaster Tools, the fetch is successful but the page is then not indexed; or, if it does appear in results, it's directed to the internal search page. E.g. http://www.beyondtransition.com/site/races/course/race110003/ has been fetched and submitted with links, but when I search for BeyondTransition Ironman Cozumel I get these results.... So what have I done wrong, and how do I go about fixing it? All thoughts and advice appreciated Thanks Denis
Technical SEO | | beyondtransition0