Moz Crawl shows over 100 times more pages than my site has?
-
The latest crawl stats are attached. My site has just over 300 pages.
Wondering what I have done wrong?
-
Total pages is higher, you're right, Keri, but still only 581.
-
I believe this image looks at what's indexed, which is a subset of the sitemap you submitted. You may want to look at Google Index -> Index Status in GWT to see what it shows there.
-
latest Moz crawl
-
latest webmaster tools crawl
-
I will definitely be paying attention to those numbers, Keri. Webmaster Tools is showing the right number of pages (something over 300, with 90% of those indexed).
-
It's not going to be a penalty, but it'll be good to have a bit less of a load on your server (bots no longer crawling thousands of pages) and just have your real pages in the index.
Places to look for interesting changes in site metrics would be your organic traffic in analytics and taking a look at your Google Webmaster Tools account to see your impressions, pages crawled, etc.
-
Thanks Keri, I will update ASAP.
Could you let me know how big an issue this would be? (When you have the time, of course. ;))
-
You're welcome! I may have opened a can of worms, however. That sitemap is generated by an automated tool (based on the footer at the bottom), so somehow it's finding that page 28 as well.
You may also want to ask the developer if you should be indexing the categories in the blog archives. There are resources on Moz about the best way to set that up in Wordpress, but I don't have them at my fingertips at the moment (I have a snuggly baby sleeping on my lap instead that's slowing me down a tad).
To answer your next question, after you figure out where the page 28 is being linked from and cure that, yes, you can do a one-time crawl from Research Tools. It won't overwrite your campaign info, but you can at least see if Moz is seeing thousands of pages or just a few hundred to see if stuff was fixed. Again, happy to provide more detail if/when you need it (and others will likely jump in with help on the thread, too).
I'd love to also see a little update a few weeks down the line of any changes you've noticed on your site metrics after getting this fixed.
-
You rock:)
-
And I found it. The sitemap at http://www.nineclouds.ca/sitemap includes a page /28, which is where the crawlers are finding the non-existent pages.
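A quick way to spot entries like that /28 page is to scan the sitemap for URLs whose path is nothing but a number, which usually means an auto-generated pagination page leaked in. This is a hedged sketch, not part of the thread's original advice; the sitemap XML and domain below are made-up placeholders:

```python
# Sketch: flag sitemap URLs whose path is just a bare number (e.g. /28),
# a common sign of auto-generated pagination pages leaking into the sitemap.
import re
import xml.etree.ElementTree as ET

# Placeholder sitemap; in practice you'd fetch your real /sitemap URL.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/about</loc></url>
  <url><loc>http://www.example.com/28</loc></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def suspicious_urls(xml_text):
    """Return sitemap URLs whose entire path is a number."""
    root = ET.fromstring(xml_text)
    locs = [el.text for el in root.findall(".//sm:loc", NS)]
    return [u for u in locs if re.fullmatch(r"https?://[^/]+/\d+/?", u)]

print(suspicious_urls(SITEMAP_XML))  # the /28-style entries
```

Running this against a real sitemap just means swapping SITEMAP_XML for the fetched file's contents.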
-
If you look at http://www.nineclouds.ca/blog/page/23, you'll see that there's a double arrow in the pagination at the right that goes to page 24, even though the last page is page 21. Google has somehow found pages greater than 21, and once it found one of those, it keeps seeing the double-arrow link pointing to yet another page. The same happened with Rogerbot. I'm not sure where the bad originating link is (what legit page on your site links to something past page 21), but that's the loop that's causing a ton of pages to be indexed. Get rid of those, and you'll also get rid of most of your errors.
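The loop described above is easy to check for yourself: every page number past the real last page should return a 404, not a 200 with a live "next" arrow. A minimal sketch, using a fake fetcher in place of real HTTP requests (the numbers match this thread's example of 21 real pages):

```python
# Sketch: pages past the real last page (21 here) should 404; any that
# still answer 200 are "phantom" pages crawlers can loop through forever.
def find_phantom_pages(fetch_status, last_real_page, probe_up_to):
    """Return page numbers past last_real_page that still return 200."""
    return [n for n in range(last_real_page + 1, probe_up_to + 1)
            if fetch_status(n) == 200]

# Fake fetcher mimicking the bug described above: every page answers 200.
buggy = lambda n: 200
print(find_phantom_pages(buggy, 21, 24))  # [22, 23, 24]
```

In practice you'd replace the fake fetcher with a real status check (e.g. an HTTP HEAD request to /blog/page/N) and probe a handful of pages past the last real one.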
-
Not shy about that at all, thanks Keri.
Any help you can provide is greatly appreciated.
-
Hi Bill,
Using my admin powers, I took a peek at your account. I'm still trying to figure out where it's coming from, but you have thousands of empty pages of your blog indexed. I'll dig around a little more and see if I can figure out what's up.
If you're comfortable with sharing your URL here in a public forum, other people can come take a look too. Otherwise, I'm happy to send you a private message with part of what's up and give your developer a place to start looking.
-
Thanks Keri. I am the owner of the site, not the programmer, so I am looking up the terms you are using as I write this response. If I am using pagination, is there a way to keep Moz from crawling it this way? If I understand your question about the calendar correctly, I do have one as part of my blog that dates each post. Can I get the bot to not recognize this calendar?
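One common way to keep bots out of a calendar archive (not spelled out elsewhere in this thread) is a robots.txt rule. This is only a sketch with placeholder paths; check which URLs your calendar widget actually links to before copying anything:

```text
# Hypothetical robots.txt sketch -- the paths below are placeholders,
# not taken from this site. Match them to your real calendar/archive URLs.
User-agent: *
Disallow: /blog/*?m=      # date-archive query strings, if your blog uses them
Disallow: /blog/2011/     # year/month archive paths like /blog/2011/05/
```

Note that robots.txt only blocks crawling, not indexing of already-discovered URLs; fixing the broken pagination links is still the real cure.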
-
My first guess would be that parameters or something similar are being crawled. Do you have pagination? Sorting ascending and descending? A calendar that's getting crawled through the year 2525?
Your next step would be to look into what those duplicate pages are and see if something is amiss that's generating a ton of URLs.