
Rate limits

Rate limiting model, response headers, and backoff strategies.

Rate limit model

The API enforces rate limits per API key using a sliding-window algorithm. Limits apply independently to each API key — if your application uses multiple keys, each has its own quota.
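To illustrate the model (not the API's actual implementation), a sliding-window limiter can be sketched as keeping timestamps of recent requests and counting only those still inside the window:

```typescript
// Conceptual sketch of a sliding-window rate limiter. The class name
// and shape are illustrative, not part of the API.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private readonly limit: number,
    private readonly windowMs: number
  ) {}

  // Returns true if a request is allowed right now.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Unlike a fixed window, requests "expire" continuously, so there is no burst allowance at window boundaries.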

Verification endpoints (/v1/trust/verify/* and /v1/tcodes/verify/*) use separate, more generous limits since they are often called from public-facing pages.

Response headers

Every response includes rate limit headers:

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current window. |
| X-RateLimit-Remaining | Requests remaining in the current window. |
| X-RateLimit-Reset | UTC epoch seconds when the window resets. |
| Retry-After | Seconds to wait before retrying (only on 429 responses). |
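These headers can be used to throttle proactively. A minimal sketch (the threshold value is an arbitrary example, not an API recommendation):

```typescript
// Parse the rate limit headers from a response and return how many
// milliseconds to wait before the next call (0 = safe to proceed).
function msUntilSafe(headers: Headers, threshold = 5): number {
  const remaining = Number(headers.get("X-RateLimit-Remaining") ?? Infinity);
  const reset = Number(headers.get("X-RateLimit-Reset") ?? 0);
  if (remaining > threshold) return 0; // plenty of budget left
  // Running low: wait until the window resets (epoch seconds -> ms).
  return Math.max(0, reset * 1000 - Date.now());
}
```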

Limits by tier

| Tier | Standard endpoints | Verification endpoints | Bulk jobs |
| --- | --- | --- | --- |
| Free | 60 req/min | 120 req/min | 5 concurrent |
| Pro | 300 req/min | 600 req/min | 20 concurrent |
| Enterprise | Custom | Custom | Custom |

Contact sales for Enterprise rate limit customization.

Handling 429 responses

When you exceed the rate limit, the API returns 429 Too Many Requests with a Retry-After header. Respect this value to avoid being temporarily blocked.

Respecting Retry-After
async function callWithBackoff(fn: () => Promise<Response>) {
  const response = await fn();

  if (response.status === 429) {
    const retryAfter = parseInt(
      response.headers.get("Retry-After") ?? "5",
      10
    );
    // Guard against a missing or unparsable header.
    const delaySeconds = Number.isNaN(retryAfter) ? 5 : retryAfter;
    await new Promise((r) => setTimeout(r, delaySeconds * 1000));
    return fn(); // Retry once
  }

  return response;
}

The SDK handles this automatically — it reads Retry-After and retries up to 3 times with exponential backoff.
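The behavior described above can be sketched as follows. This is an illustration of the retry policy, not the SDK's actual source; the function name is an assumption:

```typescript
// Retry up to maxRetries times on 429, preferring Retry-After when
// present and falling back to exponential backoff (1 s, 2 s, 4 s).
async function fetchWithRetries(
  fn: () => Promise<Response>,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fn();
    if (response.status !== 429 || attempt >= maxRetries) return response;

    const retryAfter = Number(response.headers.get("Retry-After"));
    const delayMs =
      Number.isFinite(retryAfter) && retryAfter > 0
        ? retryAfter * 1000 // server told us exactly how long to wait
        : 2 ** attempt * 1000; // otherwise back off exponentially
    await new Promise((r) => setTimeout(r, delayMs));
  }
}
```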

Best practices

  • Monitor headers. Check X-RateLimit-Remaining and throttle proactively before hitting the limit.
  • Use bulk endpoints. For batch issuance, use /v1/trust/bulk-jobs instead of issuing one-by-one.
  • Cache verification results. Public verification results can be safely cached for short durations (60–300 s) to reduce repeated calls.
  • Separate keys. Use different API keys for different integration points so a spike in one doesn't starve another.
  • Exponential backoff. On 429 or 5xx, wait 1 s → 2 s → 4 s before retrying. Never tight-loop retries.
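The caching recommendation above can be implemented with a tiny TTL cache. A minimal sketch, with illustrative types (the value stored would be whatever your verification call returns):

```typescript
// In-memory cache whose entries expire after a fixed TTL, suitable
// for short-lived verification results (e.g. 60-300 s).
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // expired: evict and miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

Check the cache before calling a verification endpoint and store the result on a miss; this turns repeated lookups for the same code into a single API call per TTL window.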