
    Production

    Retries & rate limits

    A few rules and a small backoff helper turn a brittle integration into one that survives traffic spikes and transient failures without dropping data.

    The limits

Scope              Limit                         Endpoints
Per IP (auth)      10 requests / 15 minutes      /auth/register, /auth/login
Per organization   1 000 requests / 15 minutes   All authenticated endpoints

The org limit is shared across all your keys.

    If your servers and your front-end both call lehnz, they share the same bucket.

    Reading rate-limit headers

    Every authenticated response includes RFC draft-7 RateLimit headers — use them to throttle proactively, not just react to 429s:

    response-headers.txt
    RateLimit-Limit: 1000
    RateLimit-Remaining: 312
    RateLimit-Reset: 482
Header               Meaning
RateLimit-Limit      Maximum requests in the current window.
RateLimit-Remaining  Requests left before throttling.
RateLimit-Reset      Seconds until the window resets and the counter restarts.
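One way to act on these headers is to pause before the budget runs out rather than after a 429 arrives. A minimal sketch — the `shouldPause` helper and the 10% threshold are our own choices, not lehnz requirements:

```typescript
// Decide whether to back off based on the RateLimit headers.
// The 10% threshold is an arbitrary safety margin.
function shouldPause(headers: Headers): boolean {
  const limit = Number(headers.get('RateLimit-Limit'));
  const remaining = Number(headers.get('RateLimit-Remaining'));
  return limit > 0 && remaining / limit < 0.1;
}

// Wrap fetch: if the budget is nearly spent, wait out the window.
async function throttledFetch(url: string, init?: RequestInit): Promise<Response> {
  const res = await fetch(url, init);
  if (shouldPause(res.headers)) {
    const reset = Number(res.headers.get('RateLimit-Reset')) || 1;
    await new Promise(r => setTimeout(r, reset * 1000));
  }
  return res;
}
```

Pausing proactively keeps a burst of traffic from ever hitting the limit, which is cheaper than retrying after the fact.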

    When to retry

Status                   Retry?  Strategy
429                      Yes     Honor RateLimit-Reset, then exponential backoff with jitter.
500, 502, 503, 504       Yes     Exponential backoff with jitter. Cap at ~4 attempts.
Other 4xx                No      Fix the request — retrying will fail the same way.
Network error / timeout  Yes     Same as 5xx. Watch for duplicate submission on non-idempotent endpoints.

    Backoff with jitter

    A naive retry loop without jitter creates a thundering herd — all your clients retry at the same instant. Add randomness:

    withRetry.ts
// Capped exponential backoff with jitter
async function withRetry<T>(
  fn: () => Promise<T>,
  { maxRetries = 4, baseMs = 250 } = {},
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (!isRetryable(err) || attempt === maxRetries) throw err;
      const backoff = Math.min(baseMs * 2 ** attempt, 8000); // cap exponential growth
      const jitter = Math.random() * 250;                    // spread retries apart
      await new Promise(r => setTimeout(r, backoff + jitter));
    }
  }
  throw new Error('unreachable');
}

function isRetryable(err: unknown): boolean {
  if (!(err instanceof Response)) return false;
  return err.status === 429 || err.status >= 500;
}
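For the `isRetryable` check above to see the status code, the wrapped function has to throw the `Response` on failure rather than return it. A small glue sketch — `fetchOrThrow` is our own naming, not part of any lehnz SDK:

```typescript
// Throw the Response on non-2xx so a retry wrapper can inspect err.status.
async function fetchOrThrow(url: string, init?: RequestInit): Promise<Response> {
  const res = await fetch(url, init);
  if (!res.ok) throw res;
  return res;
}

// Usage with the withRetry helper from this page (illustrative):
// const res = await withRetry(() => fetchOrThrow('/api/v1/events/ingest', init));
```

Without this, a plain `fetch` resolves successfully even on a 500, and the retry loop would never fire.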

    Honoring RateLimit-Reset

    When the server tells you when to come back, listen:

    respectRetryAfter.ts
async function callWithRetry(url: string, init?: RequestInit) {
  for (let attempt = 0; attempt < 4; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;
    // Fall back to exponential backoff if the header is missing
    const retryAfter = Number(res.headers.get('RateLimit-Reset')) || 2 ** attempt;
    await new Promise(r => setTimeout(r, retryAfter * 1000));
  }
  throw new Error('rate-limited after retries');
}

    Batch instead of retry

    The single best way to avoid 429s is to send fewer, larger requests. Every event endpoint accepts an array — fill it up before posting:

    batching.ts
// Don't do this — one HTTP request per event
for (const event of events) {
  await fetch('/api/v1/events/ingest', {
    method: 'POST',
    headers: { 'X-API-KEY': key, 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
}

// Do this — one request, many events
await fetch('/api/v1/events/ingest', {
  method: 'POST',
  headers: { 'X-API-KEY': key, 'Content-Type': 'application/json' },
  body: JSON.stringify(events),
});
Endpoint             Recommended batch  Hard limit
POST /events/ingest  50–500 events      1 MB body
POST /items/upsert   100–1 000 items    1 MB body
POST /users/upsert   100–1 000 users    1 MB body
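If a buffer grows past the recommended batch size, split it before posting. A generic chunking helper, using 500 to match the `/events/ingest` recommendation above:

```typescript
// Split a buffer into batches of at most `size` items.
function chunk<T>(items: T[], size = 500): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// One POST per batch instead of one per event (illustrative):
// for (const batch of chunk(events)) {
//   await fetch('/api/v1/events/ingest', {
//     method: 'POST',
//     headers: { 'X-API-KEY': key, 'Content-Type': 'application/json' },
//     body: JSON.stringify(batch),
//   });
// }
```

Note the hard limit is on body size, not item count — if your events carry large payloads, you may need smaller batches than 500 to stay under 1 MB.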

    Idempotency

    lehnz does not currently support an Idempotency-Key header, but every write endpoint is designed to be safely retried:

    • Upserts (/items/upsert, /users/upsert) are keyed on item_id / user_id. Sending the same payload twice produces the same final state.
    • Events can duplicate if you retry — event ingestion has no natural primary key. To stay safe, generate a stable client-side ID and include it in the event's context; lehnz preserves it and you can deduplicate downstream if needed.
    • File uploads are accepted independently — re-uploading the same CSV produces a new stored copy. The recommendation engine treats the most recent upload as authoritative, so duplicates are eventually overwritten.
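The stable client-side ID suggested for events can be generated once before the first send attempt, so every retry carries the same value. A sketch — the `client_event_id` field name inside `context` is our own convention, not a lehnz-defined key:

```typescript
import { randomUUID } from 'node:crypto';

// Attach a stable ID before the first send attempt; retries reuse it,
// so duplicates can be spotted and dropped downstream.
function withClientEventId<T extends { context?: Record<string, unknown> }>(event: T) {
  return {
    ...event,
    context: { ...event.context, client_event_id: randomUUID() },
  };
}
```

Tag the event at creation time, not inside the retry loop — an ID generated per attempt would defeat the purpose.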

    Native idempotency keys are on the roadmap. Until then, design your retry path to be safe with the upsert semantics above.

    What's next

    Errors & status codes

    Full error catalog with response shapes and fixes.

    TypeScript SDK

    Reference client with retries, idempotency, and types.