What is the best tool to crawl a site with millions of pages?
-
I want to crawl a site that has so many pages that Xenu and Screaming Frog keep crashing at some point after 200,000 pages.
What tools will allow me to crawl a site with millions of pages without crashing?
-
Don't forget to exclude pages that don't contain the information you are looking for - exclude query parameters that just produce duplicate content, system files, etc. That alone may bring the page count down considerably.
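To illustrate that kind of exclusion: below is a minimal sketch of a URL filter a crawler could apply before fetching. The parameter names and path prefixes are illustrative assumptions - adjust them to the site being crawled.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative filters -- tune these to the actual site.
EXCLUDED_PARAMS = {"sort", "sessionid", "utm_source", "utm_medium", "ref"}
EXCLUDED_PREFIXES = ("/cgi-bin/", "/wp-admin/", "/search")

def should_crawl(url: str) -> bool:
    """Return False for URLs likely to yield duplicate or system content."""
    parts = urlparse(url)
    # Skip system paths and on-site search results.
    if parts.path.startswith(EXCLUDED_PREFIXES):
        return False
    # Skip URLs whose query string only re-sorts or tracks existing content.
    if any(param in EXCLUDED_PARAMS for param in parse_qs(parts.query)):
        return False
    return True
```

Dropping parameterized duplicates up front is usually the single biggest reduction in crawl size on large sites.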
-
Only basic stuff: URL, Title, Description, and a few HTML elements.
I am aware that building a crawler would be fairly easy, but is there one out there that already does it without consuming too many resources?
-
For what purpose do you want to crawl the site?
A web crawler isn't really hard to write; you can probably put one together in about 100 lines of code. The question, of course, is: what do you want out of the crawl?
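To make that concrete, here is a stdlib-only sketch of the parsing half of such a crawler, covering exactly the fields asked for (title, meta description, links). For millions of pages, the crawl loop around it should keep only the URL frontier and a seen-set in memory and stream results to disk as it goes - holding everything in RAM is typically where desktop crawlers fall over.

```python
from html.parser import HTMLParser

class PageParser(HTMLParser):
    """Collects title, meta description, and outgoing links from one page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def parse_page(html: str):
    """Return (title, description, links) for one page of HTML."""
    parser = PageParser()
    parser.feed(html)
    return parser.title, parser.description, parser.links
```

Feed each fetched page through `parse_page`, append the row to a CSV or database, and push the new links onto the frontier after deduplication.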
Related Questions
-
How can a page be indexed without being crawled?
Intermediate & Advanced SEO | atakala
Hey moz fans, in the Google getting started guide it says: "Note: Pages may be indexed despite never having been crawled: the two processes are independent of each other. If enough information is available about a page, and the page is deemed relevant to users, search engine algorithms may decide to include it in the search results despite never having had access to the content directly. That said, there are simple mechanisms such as robots meta tags to make sure that pages are not indexed." How can this happen? I don't really get the point. Thank you
-
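A side note on the quoted passage: the "robots meta tags" it mentions are a single line in a page's `<head>`. Note that Google has to be able to crawl the page to see this tag, so it is a directive against indexing, not against crawling:

```
<meta name="robots" content="noindex">
```
-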
Would a network sitemap page be considered link spam?
Intermediate & Advanced SEO | romanbond
In the course of the last 18 months my sites have lost 50 to 70 percent of their traffic. I have never used any tricks, just simple white-hat SEO. Anyway, I am now trying to fix things that hadn't been a problem before all those Google updates, but apparently now are. Would appreciate any help. I used to have a network sitemap page on every one of my sites (about 30 sites). It was basically a page called 'Our Network' showing a list of links to all of my other sites. These pages were indexed, had decent PR, and didn't seem to cause any problem. Here's an example of one of them: http://www.psoriasisguide.ca/psoriasis_scg.html In the light of Panda and Penguin and all these 'bad links' I decided to get rid of most of them. My traffic didn't recover at all; it actually went further down. Not sure if there is any connection to what I'd done. So, the question is: in your opinion/experience, do you think such network sitemap pages could be causing penalties for link spam?
-
Would spiders successfully crawl a page with two distinct sets of content?
Intermediate & Advanced SEO | ClayPotCreative
Hello all, and thank you in advance for the help. I have a coffee company that sells both retail and wholesale products. These are typically the same products, just at different prices. We are planning to show users a pop-up on their first visit asking them to self-identify as retail or wholesale. If someone clicks retail, the cookie will show them retail pricing throughout the site, and vice versa for those who identify themselves as wholesale. I can talk to our programmer to find out how he actually plans to do this from a technical standpoint if that would help. My question is: how will a spider crawl this site? I am assuming (probably incorrectly) that whatever the "default" selection is (right now people see retail pricing and then opt into wholesale) will be the information/pricing that gets indexed. So, long story short, how would a spider crawl a page that displays two distinct sets of pricing information based on user self-identification? Thanks again!
-
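On the question above, the assumption is essentially right: a crawler sends no cookies, so whatever branch the cookie-less request takes is what gets indexed. A sketch of that server-side decision (names here are illustrative, not the actual implementation):

```python
def pricing_tier(cookies: dict) -> str:
    """Return which price set to render for a request.

    Crawlers send no cookie, so they always fall through to the
    default (retail) view -- that is the version that gets indexed."""
    return cookies.get("customer_type", "retail")
```

A returning wholesale visitor with the cookie set sees wholesale prices; a spider, with an empty cookie jar, sees retail.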
Retail Store Detail Page and Local SEO Best Practices
Intermediate & Advanced SEO | mongillo
We are working with a large retailer that has a specific page for each store they run, and we want to leverage the best practices that are out there for local search. Our current issue is the URL design for the store pages themselves. Currently we have store URLs such as /store/12584. The number is a GUID-like identifier that means nothing to search engines or, frankly, humans. Is there a better way we could model this URL for increased relevancy in local retail search? For example:

Adding the store name:
www.domain.com/store/1st-and-denny-new-york-city/23421
(example: http://www.apple.com/retail/universityvillage/)

A fully explicit URI:
www.domain.com/store/us/new-york/new-york-city/10027/bronx/23421
(example: http://www.patagonia.com/us/patagonia-san-diego-2185-san-elijo-avenue-cardiff-by-the-sea-california-92007?assetid=5172)

The idea with the second version is that we'd make the URL structure richer and more detailed, which might help for local search. Is there a best practice or recommendation for how we should model this URL? We are also working on on-page optimization, but we're specifically interested in local SEO strategy and URL design.
-
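On the "store name in the URL" option above: one way to sketch generating such slugs, keeping the stable numeric ID at the end as the lookup key while the slug carries the location keywords. The function name and path format are illustrative assumptions, not a recommendation of the exact structure:

```python
import re

def store_slug(name: str, city: str, store_id: int) -> str:
    """Build a readable, keyword-bearing store URL path.

    The numeric ID at the end stays the canonical lookup key,
    so slugs can change without breaking record lookups."""
    slug = "-".join(
        re.sub(r"[^a-z0-9]+", "-", part.lower()).strip("-")
        for part in (name, city)
    )
    return f"/store/{slug}/{store_id}"
```

Whatever structure is chosen, redirecting the old /store/12584 URLs to the new ones preserves existing equity.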
Should I let Google crawl my production server if the site is still under development?
Intermediate & Advanced SEO | DoItHappy
I am building out a brand new site. It's built on WordPress, so I've been tinkering with themes and plug-ins on the production server. To my surprise, less than a week after installing WordPress, I have pages in the index. I've seen advice in this forum about blocking search bots from dev servers to prevent duplicate content, but this is my production server, so that seems like a bad idea. Any advice on the best way to proceed? Block or no block? Or something else? (I know how to block, so I'm not looking for instructions.) We're around three months from officially launching (possibly less). We'll start to have real content on the site some time in June, even though we aren't planning to launch then. We should have a development environment ready in the next couple of weeks. Thanks!
-
Stop Google crawling a site at set times
Intermediate & Advanced SEO | ske11
Hi all, I know I can use robots.txt to block Google from pages on my site, but is there a way to stop Google crawling my site at set times of the day? Or to request that they crawl at other times? Thanks, Sean
-
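On the question above: robots.txt has no time-of-day directive that Google supports (Google also ignores Crawl-delay), so scheduling can't be done there. One pattern sometimes used, with caution, is answering known crawler user-agents with HTTP 503 ("retry later") during peak hours. The window and user-agent list below are illustrative assumptions, and sustained 503s can hurt indexing, so this is a sketch, not a recommendation:

```python
from datetime import time

# Illustrative window: deflect crawlers during 08:00-18:00 local time.
QUIET_START = time(8, 0)
QUIET_END = time(18, 0)

CRAWLER_TOKENS = ("Googlebot", "bingbot")  # illustrative list

def crawler_response_status(user_agent: str, now: time) -> int:
    """Return 503 (retry later) for known crawlers inside the busy
    window, 200 otherwise. Use sparingly: persistent 503s can cause
    pages to be dropped from the index."""
    is_crawler = any(token in user_agent for token in CRAWLER_TOKENS)
    if is_crawler and QUIET_START <= now < QUIET_END:
        return 503
    return 200
```

Lowering the crawl rate in Search Console is the safer lever if the goal is simply reducing server load.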
What's the best internal linking strategy for articles and on-site resources?
Intermediate & Advanced SEO | pbhatt
We recently added an education center to our site with articles and information about our products and industry. What is the best way to link to and from that content? There are two options I'm considering:

1. Link to articles from category and subcategory pages under a section called "related articles", and link back to these category and subcategory pages from the articles:
category page <<--------->> education center article
subcategory page <<---------->> education center article

2. Only link from the articles to the category and subcategory pages:
education center article ---------->> category page
education center article ---------->> subcategory page

Would #1 dilute the SEO value of the category and subcategory pages? I want to offer shoppers links to more information if they need it, but this may also take them away from the products. Has anyone tested this? Thanks!
-
Best tool to calculate link distribution?
Intermediate & Advanced SEO | nicole.healthline
What is the best tool to calculate the total link distribution throughout a site? I know opensiteexplorer.com's "top pages" report breaks down the numbers for you. Are there any others?