Rate Limits
Limits are per-workspace, evaluated over a sliding 60-second window, and apply across all server keys in the workspace. Hard ceilings can be raised on Enterprise tiers.
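A sliding window means the counter never resets all at once: each request only stops counting 60 seconds after it was made. A minimal sketch of that behavior (our illustration, not Appice's actual server implementation; the clock is injectable so the example is testable):

```javascript
// Illustrative sliding-window counter — not Appice's implementation.
class SlidingWindow {
  constructor(limit, windowMs = 60_000, now = Date.now) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.now = now;          // injectable clock for testing
    this.timestamps = [];    // send times inside the current window
  }
  allow() {
    const cutoff = this.now() - this.windowMs;
    // Drop requests that have aged out of the 60-second window
    this.timestamps = this.timestamps.filter(t => t > cutoff);
    if (this.timestamps.length >= this.limit) return false; // would exceed the limit
    this.timestamps.push(this.now());
    return true;
  }
}
```

Note that a burst of 600 requests blocks further traffic for a full 60 seconds, since all of them age out together.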
Default limits
| Endpoint group | Default (req/min/workspace) | Hard ceiling |
|---|---|---|
| Ingest (POST /events, POST /users) | 600 | 5,000 |
| Inform trigger (POST /inform/trigger) | 300 | 3,000 |
| Allyvate decisions (POST /allyvate/decide) | 600 | 5,000 |
| Reads (GET endpoints) | 60 | 300 |
| Webhooks management | 30 | 120 |
Response headers
Every successful response includes:
```
X-Appice-RateLimit-Limit: 600
X-Appice-RateLimit-Remaining: 594
X-Appice-RateLimit-Reset: 1746529231
```
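These headers can also drive proactive client-side pacing: spread the remaining budget over the time left until the window resets instead of bursting into a 429. Only the header names below come from the response above; the pacing policy itself is our own sketch:

```javascript
// Returns a suggested delay (ms) before the next request, based on the
// rate-limit headers. Pacing policy is a sketch, not an Appice recommendation.
function paceFromHeaders(headers, nowSec = Math.floor(Date.now() / 1000)) {
  const remaining = parseInt(headers.get('X-Appice-RateLimit-Remaining'), 10);
  const reset = parseInt(headers.get('X-Appice-RateLimit-Reset'), 10); // Unix seconds
  if (Number.isNaN(remaining) || Number.isNaN(reset)) return 0; // headers missing
  const secondsLeft = Math.max(reset - nowSec, 1);
  if (remaining <= 0) return secondsLeft * 1000;               // wait out the window
  return Math.floor((secondsLeft * 1000) / remaining);          // even spacing
}
```

Call it after each response and `setTimeout` for the returned delay before sending the next request.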
When you hit the limit you'll receive 429 Too Many Requests with:
```
HTTP/1.1 429 Too Many Requests
Retry-After: 12
X-Appice-RateLimit-Remaining: 0

{ "error": "rate_limited", "code": "RATE_LIMITED", "requestId": "req_8f3a..." }
```
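The `requestId` in the error body is worth logging: it lets support trace the rejected request. A small sketch of surfacing it (the message format is ours, not Appice's):

```javascript
// Build a log line from a 429 response. Assumes the JSON body shape shown above.
async function describe429(res) {
  const body = await res.json();
  const retryAfter = res.headers.get('Retry-After');
  return `rate_limited (requestId=${body.requestId}), retry after ${retryAfter}s`;
}
```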
Recommended retry pattern
Exponential backoff with jitter, honoring Retry-After:
```javascript
async function appiceRequest(url, body, attempt = 0) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (res.status !== 429) return res;
  if (attempt >= 5) throw new Error('Exhausted retries');
  // Honor Retry-After when present; otherwise back off exponentially (1s, 2s, 4s, ...)
  const retryAfter = parseInt(res.headers.get('Retry-After') || '0', 10) * 1000;
  const backoff = Math.max(retryAfter, 2 ** attempt * 1000);
  const jitter = Math.random() * 500; // jitter avoids synchronized retries
  const wait = Math.min(backoff + jitter, 30_000); // cap at 30s
  await new Promise(r => setTimeout(r, wait));
  return appiceRequest(url, body, attempt + 1);
}
```
Reduce 429s by batching
POST /events accepts up to 1,000 events per request, and a batched request counts once against the limit, so batching can raise your effective event throughput by up to 1,000×. Build a queue in your backend and flush it every 5–30 seconds, or whenever it reaches 1,000 events.
```javascript
// Pseudocode — reuses the appiceRequest helper from the retry example above
const buffer = [];

function track(event) {
  buffer.push(event);
  if (buffer.length >= 1000) flush(); // flush early when the batch cap is hit
}

setInterval(flush, 30_000); // also flush on a timer so small batches still ship

async function flush() {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, 1000); // take at most one request's worth
  await appiceRequest('/v1/events', { events: batch });
}
```
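One gap in a timer-based queue: events buffered between intervals are lost if the process exits. A sketch of a shutdown drain, where `send` stands in for the POST /events call (a hypothetical helper, injected here so the logic is testable):

```javascript
// Drain a queue in batches of at most 1,000 (the per-request cap) before exit.
// `send` is a hypothetical stand-in for the POST /events call.
async function drain(buffer, send) {
  const sizes = [];
  while (buffer.length > 0) {
    const batch = buffer.splice(0, 1000);
    await send({ events: batch });
    sizes.push(batch.length);
  }
  return sizes; // batch sizes sent, useful for logging
}

// In Node.js, wire it to termination signals:
// process.on('SIGTERM', () => drain(buffer, send).then(() => process.exit(0)));
```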
Elevated tiers
Need more? Talk to your account team: on Enterprise tiers, limits can be raised to 5,000 req/min/workspace and beyond on dedicated infrastructure. Contact dev@appice.ai with your expected peak QPS, regional split, and whether the workload is steady or bursty.