Would you rate-control Googlebot? How much crawling is too much crawling?
-
One of our sites is very large - over 500M pages. Google has indexed 1/8th of the site - and they tend to crawl between 800k and 1M pages per day.
A few times a year, Google will significantly increase their crawl rate - overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors.
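For context, here's the back-of-envelope math - a rough sketch that assumes one crawled page maps to roughly one API call and that all the percentages are shares of the same daily capacity, which oversimplifies our stack:

```python
# Back-of-envelope capacity math. Assumes ~1 crawled page ≈ 1 API call and
# that the quoted percentages are all shares of the same daily API capacity.
googlebot_pages_per_day = 1_000_000   # typical Googlebot crawl volume
googlebot_share = 0.70                # share of API capacity that consumes
total_utilization = 0.90              # overall API utilization at that volume

api_capacity = googlebot_pages_per_day / googlebot_share              # ≈ 1.43M calls/day
other_traffic = (total_utilization - googlebot_share) * api_capacity  # ≈ 0.29M calls/day
headroom = api_capacity * (1 - total_utilization)                     # ≈ 0.14M calls/day

# During a spike to 2M crawled pages/day, demand is roughly:
spike_demand = 2_000_000 + other_traffic                              # ≈ 2.3M calls/day
print(f"capacity ≈ {api_capacity:,.0f}/day, spike demand ≈ {spike_demand:,.0f}/day")
# Demand well above capacity is consistent with the 500 errors we see.
```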
I've lobbied for an investment / overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment - as Google will crawl to our capacity whatever that capacity is.
Questions to Enterprise SEOs:
* Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. That suggests there is some upper limit - one we perhaps haven't reached, but at which the crawl would stabilize.
* We've asked Google to rate-limit our crawl rate in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have.
* Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
* What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate. (A minimal sitemap sketch follows below.)
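To illustrate what I mean - a hypothetical entry, and I've read that Google largely ignores changefreq and priority and leans on lastmod instead, so this may not buy much:

```xml
<!-- Hypothetical sitemap entry with the refresh hint relaxed to monthly.
     The URL and date are placeholders. -->
<url>
  <loc>https://www.example.com/some/deep/page</loc>
  <lastmod>2014-01-15</lastmod>
  <changefreq>monthly</changefreq>
</url>
```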
Thanks
-
I agree with Matt that there can probably be a reduction of pages, but that aside, how much of an issue this is comes down to which pages aren't being indexed. It's hard to advise without seeing the site - are you able to share the domain? If the site has been around for a long time, that seems like a low level of indexation. Is this a site where the age of the content matters - for example, Craigslist?
Craig
-
Thanks for your response. I get where you're going with that. (Ecomm store gone bad.) It's not actually an ecomm site, FWIW. And I do restrict parameters - the list is about a page and a half long. It's a legitimately large site.
You're correct - I don't want Google to crawl the full 500M. But I do want them to crawl 100M. At the current crawl rate we limit them to, it's going to take Google more than 3 months to get to each page a single time. I'd actually like to let them crawl 3M pages a day. Is that an insane amount of Googlebot bandwidth? Does anyone else have a similar situation?
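For anyone checking my math, here's the rough crawl-cycle arithmetic (assuming a steady rate and no re-crawling of already-fetched URLs):

```python
# Rough crawl-cycle arithmetic: how long a single pass over the pages we
# actually want crawled would take at different daily rates.
pages_we_want_crawled = 100_000_000

for pages_per_day in (1_000_000, 2_000_000, 3_000_000):
    days = pages_we_want_crawled / pages_per_day
    print(f"{pages_per_day:>9,} pages/day -> {days:5.0f} days (~{days / 30:.1f} months)")

# 1,000,000 pages/day -> 100 days (~3.3 months)
# 2,000,000 pages/day ->  50 days (~1.7 months)
# 3,000,000 pages/day ->  33 days (~1.1 months)
```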
-
Gosh, that's a HUGE site. Are you having Google crawl parameter pages with that? If so, that's a bigger issue.
I can't imagine the crawl issues with 500M pages. A site:amazon.com search only returns 200M, and ebay.com returns 800M, so your site is somewhere in between the two? (I understand both probably have a lot more - they're just not returned as indexed.)
You always WANT a full site crawl - but your techs do have a point. Unless there's an absolutely necessary reason to have 500M indexed pages, I'd also seek to cut that to what you want indexed. That sounds like a nightmare ecommerce store gone bad.
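If parameter pages are part of that 500M, the blunt option is robots.txt - a hypothetical sketch below, with made-up parameter names. Note that anything matched is dropped from crawling entirely, so the URL parameter settings in Webmaster Tools are the gentler route.

```
# Hypothetical robots.txt rules - the parameter names below are made up;
# substitute the parameters you genuinely never want crawled.
User-agent: Googlebot
Disallow: /*?sort=
Disallow: /*?sessionid=
Disallow: /*&view=
```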