What is crawl budget?

Crawl budget is the upper limit on how many of your pages Google's crawler, Googlebot, fetches in a given period. It is shaped by two factors: how much crawling your server can handle without slowing down (the crawl rate limit) and how much value Google sees in crawling or recrawling your URLs (crawl demand). For most small sites, crawl budget is a non-issue. For large sites with hundreds of thousands of URLs, it becomes a hard ceiling: if Google never crawls a page, that page never ranks.

Small site: 5,000 pages, crawled multiple times per week, no budget concerns

Medium site: 50,000 pages, crawled weekly, edge cases worth optimizing

Large site: 500,000+ pages, partial crawls common, every wasted URL hurts

Typical symptom: new product pages take weeks to appear in search results
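
The "hard ceiling" is easy to make concrete with back-of-the-envelope math. A minimal sketch, using hypothetical figures (your real crawl volume comes from server logs or Search Console):

    # Hypothetical figures: 500,000 URLs, Googlebot fetching ~5,000 per day.
    total_urls = 500_000
    crawled_per_day = 5_000
    print(total_urls / crawled_per_day, "days for one full crawl")  # 100.0

At that pace, a page published today can wait months for its first crawl, which is exactly the symptom above.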

Why does crawl budget matter?

Crawl budget matters most for ecommerce stores, news sites, marketplaces and any site with auto-generated pages from filters or parameters. When 80 percent of your crawl budget is burned on duplicate or low-value URLs, the 20 percent left over may not include the new product or article you just shipped. Reclaiming that budget by blocking, canonicalizing or removing junk URLs lets Google spend its time on the pages that earn revenue.
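
To see how your own budget is being spent, segment Googlebot hits in your server access log. A minimal sketch, assuming a combined-format log at a hypothetical path and illustrative "junk" URL patterns:

    import re

    # Hypothetical log path and junk-URL patterns -- adjust to your site.
    LOG_PATH = "access.log"
    JUNK = re.compile(r"\?(sessionid|sort|filter)=|^/search\?")

    wasted = valuable = 0
    with open(LOG_PATH) as log:
        for line in log:
            # Matching the user-agent string alone can be spoofed; reverse
            # DNS verification of Googlebot is more reliable.
            if "Googlebot" not in line:
                continue
            # In combined log format the request line is the first quoted field.
            match = re.search(r'"(?:GET|HEAD) (\S+)', line)
            if match is None:
                continue
            if JUNK.search(match.group(1)):
                wasted += 1
            else:
                valuable += 1

    total = wasted + valuable
    if total:
        print(f"{wasted}/{total} Googlebot hits ({100 * wasted / total:.0f}%) went to junk URLs")

If the wasted share approaches the 80 percent scenario above, the fixes in the next section are where to start.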

How do you manage crawl budget?

  1. Use robots.txt to block crawling of low-value URL patterns like internal search results, faceted navigation or session parameters (see the robots.txt sketch after this list).

  2. Consolidate duplicates with canonical tags or 301 redirects, so Google does not waste crawls on multiple versions of the same page (a canonical example follows below).

  3. Watch the Crawl Stats report in Google Search Console for sudden drops or spikes, which often signal a server, sitemap or robots.txt issue.
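
For step 1, a robots.txt sketch of the kind of rules involved (the paths and parameter names are illustrative, not defaults; Googlebot honors the * wildcard in Disallow rules):

    User-agent: Googlebot
    # Internal site search results
    Disallow: /search
    # Faceted navigation and session parameters (illustrative names)
    Disallow: /*?sort=
    Disallow: /*?sessionid=

For step 2, a canonical tag in the head of each duplicate variant points Google at the one version that should be crawled and ranked (the URL is a placeholder):

    <!-- On /products/blue-widget?sort=price and similar variants -->
    <link rel="canonical" href="https://example.com/products/blue-widget">

Note that robots.txt only prevents future crawling; it does not consolidate ranking signals. Canonical tags and 301 redirects do, which is why step 2 is the right tool for true duplicates.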
