Rate limiting

Check’s API uses rate limits to protect platform stability. These limits also surface integration mistakes early—patterns like infinite loops, hot polling, or overly aggressive concurrency—so you can fix them before they cause trouble.

The API enforces 25 requests per second and 100 concurrent in‑flight requests per partner, across all API keys. When you exceed either limit, you’ll receive HTTP 429 with a Retry-After header that tells you when it’s safe to try again. If you need higher throughput for legitimate workloads such as a backfill or migration, contact developer support. Note that the concurrency limit is not configurable.
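
One way to stay under the concurrency limit is to cap in-flight requests on the client side. The sketch below uses a semaphore; `MAX_IN_FLIGHT` and `send_request` are illustrative, and the right cap depends on how many processes share your partner-wide budget of 100.

```python
import threading

# Assumption: this process gets a 20-slot share of the partner-wide
# 100 in-flight request budget, leaving headroom for other workers.
MAX_IN_FLIGHT = 20
_slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)

def call_api(send_request, *args, **kwargs):
    """Run a request function while holding an in-flight slot.

    Blocks when MAX_IN_FLIGHT requests are already outstanding, so the
    client never exceeds its local concurrency budget.
    """
    with _slots:
        return send_request(*args, **kwargs)
```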

Why you might be rate limited

Most throttling comes from a few predictable patterns.

  • Bursts. Fan‑outs, scheduled jobs that all run at the same time, or a “bulk” button in a UI can spike your request rate above 25 per second even when the average is low.
  • High concurrency. Many long‑running requests such as synchronous calculations or large syncs consume in‑flight slots and reduce headroom for other work.
  • Hot polling and contention. Tight polling loops for status plus concurrent writes to the same resource add latency and multiply retries.
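
The burst pattern above can be smoothed with a client-side token bucket that spaces requests out instead of letting a fan-out spike past 25 per second. This is a minimal sketch, not an official client; the rate and burst values are assumptions you should tune.

```python
import threading
import time

class TokenBucket:
    """Client-side limiter: refill `rate` tokens/sec up to `burst` capacity."""

    def __init__(self, rate=25.0, burst=25):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill based on elapsed time, capped at the burst size.
                self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)
```

Call `bucket.acquire()` before each request; bursts drain the bucket and the loop then paces callers at the refill rate.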

429s are backpressure, not a bug—they’re the API telling your client to slow down. Handle them gracefully and you won’t need to page on every 429. Instrument your API integration using standard observability tooling to identify and optimize hot spots.

Handling rate limits

Handling throttled requests follows a consistent pattern. When you receive HTTP 429, wait at least as long as the Retry-After header specifies before retrying. For writes, include an idempotency key (see Idempotent Requests) so retries don’t duplicate work, and bound the overall retry window to avoid unbounded work buildup. Rate limits can change over time, so handling 429s programmatically will save maintenance in the future.
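
Putting those steps together, a retry loop might look like the sketch below. Here `send` stands in for your HTTP client, and the `Idempotency-Key` header name is an assumption; check the Idempotent Requests documentation for the exact header your integration should use.

```python
import time
import uuid

def post_with_retry(send, path, body, max_window=60.0):
    """Retry throttled writes, honoring Retry-After and bounding total wait.

    `send` is a stand-in for your HTTP client; it should return an object
    with `.status_code` and `.headers`. Reusing one idempotency key across
    retries keeps the write from being duplicated.
    """
    headers = {"Idempotency-Key": str(uuid.uuid4())}  # assumption: header name
    deadline = time.monotonic() + max_window
    while True:
        resp = send(path, body, headers=headers)
        if resp.status_code != 429:
            return resp
        delay = float(resp.headers.get("Retry-After", 1))
        if time.monotonic() + delay > deadline:
            # Bound the retry window rather than queueing work forever.
            raise TimeoutError("retry window exhausted while throttled")
        time.sleep(delay)
```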

Response shape and headers

A limited request returns HTTP 429 and JSON like:

{
  "error": {
    "type": "throttled", 
    "message": "Request was throttled. Expected available in 1 second."
  }
}

You’ll also receive Retry-After: <seconds>. Wait at least that long before retrying. Responses use our standard Errors envelope; the type will be throttled. 429s can come from either the requests per second or concurrency limit.
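
A small helper can recognize this shape. The sketch below assumes a response object exposing `.status_code`, `.headers`, and `.json()` (as a requests-style client would); the 1-second fallback when Retry-After is missing is an assumption.

```python
def throttle_delay(resp):
    """Return seconds to wait if `resp` was throttled by the API, else None."""
    if resp.status_code != 429:
        return None
    error = resp.json().get("error", {})
    if error.get("type") != "throttled":
        return None  # a 429 from elsewhere in the stack; handle separately
    return float(resp.headers.get("Retry-After", 1))
```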

Working with many resources

Most Check endpoints operate on a single resource at a time. That keeps behavior clear, makes idempotency straightforward, and protects platform health. When you need to move quickly across many resources, these patterns work well:

  • Process bulk work in batches. Chunk large volumes of updates into bounded batches and run a limited number of batches in parallel.
  • Serialize writes per resource. Queue updates to the same resource so mutations happen in order.
  • Stagger scheduled jobs. Randomize start times to avoid top‑of‑minute pileups.
  • Prefer webhooks to polling. Subscribe to events and react to state changes.
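
The first pattern, bounded batches with limited parallelism, can be sketched as follows. `handle_batch` is a placeholder for whatever work you do per chunk, and the batch size and parallelism values are assumptions to tune against your rate budget.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Split `items` into lists of at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_in_batches(items, handle_batch, batch_size=10, max_parallel=4):
    """Run `handle_batch` over bounded chunks with limited parallelism.

    Results come back in the original chunk order.
    """
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(handle_batch, chunked(items, batch_size)))
```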

Updating many payments on a payroll. Where supported, use payroll‑scoped batch endpoints for payroll items and contractor payments. For larger payrolls, chunk updates and serialize writes per payroll to avoid contention.