Crawl budget

Crawl budget refers to the number of pages Googlebot and other search engine crawlers are willing and able to crawl on a website within a given period. It is determined by the site’s crawl capacity (how many requests a server can handle) and crawl demand (how frequently Google wants to crawl the pages).
Crawl budget matters because it determines how efficiently Google can discover and index a site's most important pages without overloading the server. Managing it effectively helps large or frequently updated websites maintain visibility in search results.
Advanced
Crawl budget optimization involves balancing server performance with crawl efficiency. Google determines the budget from two primary factors: the crawl rate limit and crawl demand. The rate limit caps how many parallel connections and requests Googlebot will make to a site, while demand depends on a page's popularity and freshness.
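The interaction of the two factors can be sketched as a toy model (this is an illustration of the concept, not Google's actual algorithm; the function name and numbers are hypothetical):

```python
# Toy model: effective crawl budget is bounded by both what the server
# can handle (crawl capacity / rate limit) and what the crawler wants
# to fetch (crawl demand). This is NOT Google's real calculation.

def effective_crawl_budget(crawl_capacity: int, crawl_demand: int) -> int:
    """Return an estimate of URLs crawled in a given period.

    crawl_capacity: max fetches the server can sustain without degrading.
    crawl_demand: fetches the crawler wants to make (popularity, freshness).
    """
    return min(crawl_capacity, crawl_demand)

# A slow server caps the budget even when demand is high:
print(effective_crawl_budget(crawl_capacity=2000, crawl_demand=10000))  # 2000

# A low-demand site is not crawled to capacity:
print(effective_crawl_budget(crawl_capacity=2000, crawl_demand=350))    # 350
```

The takeaway is that raising either factor alone has no effect once the other becomes the bottleneck: a faster server only helps if demand exists, and fresh, popular content only helps if the server can keep up.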
Advanced technical SEO strategies such as improving site speed, fixing broken links, consolidating duplicate pages, and keeping sitemaps up to date help maximize crawl efficiency. Google Search Console's Crawl Stats report reveals crawl frequency, response times, and the types of URLs crawled. For enterprise-scale sites, managing crawl budget ensures that critical content is discovered and indexed first.
Example
A large retail site noticed that many filter-based URLs were consuming crawl budget. By using canonical tags and disallowing redundant parameters in robots.txt, the site improved crawl efficiency and ensured faster indexing of important product pages.
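A minimal sketch of the two fixes described above, assuming the filter URLs use a hypothetical `?color=` parameter on a `/products` listing. First, a robots.txt rule stops crawling of the redundant parameter (Googlebot supports the `*` wildcard in robots.txt paths):

```
# robots.txt — assumes filter URLs look like /products?color=red (hypothetical)
User-agent: *
Disallow: /*?color=
Disallow: /*&color=

Sitemap: https://www.example.com/sitemap.xml
```

Second, a canonical tag on filtered pages that remain crawlable points consolidation signals at the main listing:

```
<!-- On a crawlable filtered page such as /products?size=m (hypothetical) -->
<link rel="canonical" href="https://www.example.com/products" />
```

Note that the two techniques apply to different URL sets: a page blocked in robots.txt is never fetched, so its canonical tag cannot be read. In practice, robots.txt handles parameters that should never be crawled, while canonical tags consolidate variants that remain crawlable.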