Increase in pages crawled per day
-
What does it mean when GWT abruptly jumps from 15k to 30k pages crawled per day?
I am used to seeing spikes: a 10k average, with 50k pages crawled a couple of times per month.
But in this case, 10 days ago it moved from 15k to 30k per day and has stayed there. I know it's a good sign: the crawler is crawling more pages per day, so it's picking up changes more often. But I have no idea why it's doing it. What signals usually drive Google's crawler to increase the number of pages crawled per day?
Does anyone know?
-
Nice find, Ryan.
-
Agreed. Especially since Google's own Gary Illyes responded to the following with:
How long is the delay between making it mobile friendly and it being reflected in the search results?
Illyes says: “As soon as we discover it is mobile friendly, on a URL by URL basis, it will be updated.”
Sounds like when you went responsive, they double-checked each URL to confirm. From: http://www.thesempost.com/googles-gary-illyes-qa-upcoming-mobile-ranking-signal-change/. Cheers!
-
I usually analyze backlinks with both GWT and Ahrefs, and Ahrefs doesn't show any abnormally high-DA backlink either.
Agreed, the responsive change is the most probable candidate. I have a couple of other websites I want to turn responsive before April 21st; that's an opportunity to test whether that is the reason.
-
Ah, the responsive change could be a big part of it. You're probably getting crawls from the mobile crawler. GWT wouldn't be the best source for recent backlink data; I'd actually look for spikes via referrers in Analytics, since GWT isn't always that timely when reporting links. Still, the responsive redesign looks like a likely candidate, especially with Google's looming April 21st deadline.
-
Two things I forgot to mention:
- Something like 2 weeks ago we turned the website responsive. Could it be that Google's mobile crawler is increasing the number of crawled pages? I have to analyze the logs to see whether the requests are coming from it.
- The total number of indexed pages didn't change, which makes me wonder whether a rise in the number of crawled pages per day is all that relevant.
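A minimal sketch of that log check, assuming a combined-format access log; the user-agent patterns are assumptions based on common Googlebot UA strings, not a definitive parser:

```python
import re
from collections import Counter

# Illustrative sketch: tally Googlebot hits in an access log to see
# whether the jump in crawl volume comes from the smartphone crawler.
# The regexes are assumptions based on typical Googlebot user agents.
MOBILE_RE = re.compile(r"Googlebot.*Mobile|Googlebot-Mobile", re.I)
DESKTOP_RE = re.compile(r"Googlebot", re.I)

def crawler_counts(log_lines):
    """Count hits per crawler type; the mobile pattern is tested first
    so the generic Googlebot pattern doesn't swallow mobile matches."""
    counts = Counter()
    for line in log_lines:
        if MOBILE_RE.search(line):
            counts["googlebot-mobile"] += 1
        elif DESKTOP_RE.search(line):
            counts["googlebot-desktop"] += 1
    return counts
```

Running this over the daily logs before and after the redesign should show whether the extra 15k requests carry the mobile user agent.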
-
Hi Ryan,
- GWT (Search Traffic -> Search Queries) shows a 6% drop in impressions for brand-based searches (Google Trends shows a similar pattern).
- GWT is not showing any recent backlink with an abnormally high DA.
- We did have a couple of unusually high traffic spikes from Facebook thanks to a couple of particularly successful posts, but we're talking about spikes of just 5k visits, and both started after the rise in pages crawled per day.
Any other ideas are more than welcome; I wish I could understand the source of that change so I could replicate it on other websites.
-
I'm not sure I understand what you mean. That website has a total of 35k pages submitted through the sitemap to GWT, of which only 8k are indexed. The total number of indexed pages has always been slowly increasing: it moved from 6k to 8k over the last couple of months, with no spikes.
That's not the total number of pages served by the site: dynamic search results pages amount to around 150k pages in total. We deliberately do not submit all of them in the sitemap, and GWT shows 70k pages as the total number of indexed pages.
I have analyzed Google crawler activity through server logs in the past: it picks a set of (apparently) random pages every night and crawls them. I never analyzed what percentage of those pages are in the sitemap.
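That percentage is easy to measure. A small sketch, assuming the crawled URLs have already been extracted from the logs and a standard sitemap.xml is available (the example URLs are hypothetical):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace per the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml):
    """Extract the set of <loc> URLs from a sitemap XML string."""
    root = ET.fromstring(sitemap_xml)
    return {loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")}

def sitemap_share(crawled_urls, sitemap_xml):
    """Fraction of crawled URLs that are listed in the sitemap."""
    listed = sitemap_urls(sitemap_xml)
    if not crawled_urls:
        return 0.0
    return sum(1 for u in crawled_urls if u in listed) / len(crawled_urls)
```

Comparing this share before and after the crawl-rate jump would show whether the extra crawl budget is going to sitemap pages or to the dynamic search-result pages.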
Internal link structure was built on purpose to try to favor ranking of pages we considered more important.
The point is we didn't change anything in the website structure recently. User-generated content has been slowly lowering the duplicate page count over time, without any recent spike. We have a PR campaign that is adding backlinks at an average rate of around 3 links per week, and we didn't have any high-DA backlinks appear in the last few weeks.
So I am wondering what made Google's crawler start crawling many more pages per day.
-
Yes, I updated the parameters just before you posted.
-
When you say URL variables, do you mean query string variables like ?key=value?
That is really good advice. You can check in your GWT. If you let Google crawl and it runs into a loop, it will not index that section of your site; crawling it would be costly for them.
-
I would also check that you haven't got a spike of URL parameters becoming available. I recently had a similar issue: although I had these set up in GWT, the crawler was actively wasting its time on them. Once I added them to robots.txt, the crawl level went back to 'normal'.
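For example, the robots.txt rules I mean look along these lines (the parameter names here are hypothetical; Googlebot supports the `*` wildcard in Disallow patterns):

```
User-agent: *
# Block crawl of faceted/sorted duplicates of the same content
Disallow: /*?sort=
Disallow: /*?sessionid=
Disallow: /*&page=
```

Worth testing each rule in GWT's robots.txt tester first, so you don't accidentally block pages you want indexed.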
-
There could be several factors... maybe your brand-based search is prompting Google to capture more of your site. Maybe you got a link from a very high-authority site that prompts higher crawl volumes. Queries that prompt freshness related to your site could also spur Google on. It's a lot of guesswork, but it can be whittled down some by a close look at Analytics and perhaps tomorrow's OSE update (Fresh Web Explorer might provide some clues in the meantime). At least you're moving in the right direction. Cheers!
-
There are two variables in play and you are picking up on one.
If there are 1,000 pages on your website then Google may index all 1,000 if they are aware of all the pages. As you indicated, it is also Google's decision how many of your pages to index.
The second factor, which is most likely the case in your situation, is that Google only has two ways to find your pages. One is a sitemap submitted in GWT listing all of your known pages; Google would then have the choice to index all 1,000, as it would be aware of their existence. However, it sounds like your website is relying on links instead. If you have 1,000 pages but the home page has only one link, leading to an about-us page, then Google is only aware of two pages on your entire website. Your website has to have an internal link structure that Google can crawl.
Imagine your website like a tree root structure. For Google to reach and index every page, it has to have clear, defined, and easy access. A website whose home page links to page A, which links to page B, which links to page C, which links to page D, which links to 500 more pages, can easily lose those 500 pages if there is an obstruction anywhere along the path to page D, because Google can't crawl through to page D to see the pages it links to.
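That tree idea can be sketched as a breadth-first traversal over the internal link graph: a page is only discoverable if an unbroken chain of links leads to it. This is a toy model of link discovery, not how Googlebot actually schedules crawls:

```python
from collections import deque

def click_depths(links, start="home"):
    """links: dict mapping each page to the list of pages it links to.
    Returns {page: click depth} for every page reachable from start;
    pages cut off by a broken link simply never appear in the result."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths
```

With a home -> A -> B -> C -> D chain, removing the single C-to-D link makes page D and all 500 pages behind it unreachable in this model, which is exactly the obstruction scenario described above.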