Rate Limits
Unizo implements rate limiting to ensure fair usage and maintain service quality for all users. This guide explains how rate limits work and how to handle them in your applications.
Rate Limit Overview
Rate limits are applied per API key and vary by endpoint and subscription plan:
| Plan | Requests per minute | Burst limit |
|---|---|---|
| Free | 60 | 100 |
| Pro | 600 | 1,000 |
| Enterprise | 6,000 | 10,000 |
Rate Limit Headers
Every API response includes rate limit information in the headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 600
X-RateLimit-Remaining: 599
X-RateLimit-Reset: 1609459200
X-RateLimit-Reset-After: 60
Header Descriptions
- X-RateLimit-Limit: Maximum requests allowed in the current window
- X-RateLimit-Remaining: Number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp when the rate limit resets
- X-RateLimit-Reset-After: Seconds until the rate limit resets
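For example, these headers can be read directly from a fetch response. The sketch below is illustrative; the endpoint path and the apiKey variable are assumptions, not part of the API contract:

const response = await fetch('https://api.unizo.ai/v1/vulnerabilities', {
  headers: { 'Authorization': `Bearer ${apiKey}` }
});

// Header values arrive as strings; parse them before doing arithmetic.
const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const resetAfter = parseInt(response.headers.get('X-RateLimit-Reset-After'), 10);

console.log(`${remaining}/${limit} requests remaining; window resets in ${resetAfter}s`);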
Rate Limit Response
When you exceed the rate limit, you'll receive a 429 Too Many Requests
response:
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Try again in 60 seconds.",
    "retry_after": 60
  }
}
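The retry_after field mirrors the X-RateLimit-Reset-After header, so either value can drive your retry logic. A minimal sketch, given a fetch response:

if (response.status === 429) {
  const { error } = await response.json();
  console.warn(error.message);           // "Rate limit exceeded. Try again in 60 seconds."
  const waitSeconds = error.retry_after; // 60
}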
Handling Rate Limits
Exponential Backoff
Implement exponential backoff when you receive a 429 response:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);
      if (response.status === 429) {
        if (attempt === maxRetries) {
          throw new Error('Max retries exceeded');
        }
        // Wait at least as long as the server asks for, and at least as long
        // as the exponential backoff delay for this attempt.
        const retryAfter = parseInt(response.headers.get('X-RateLimit-Reset-After'), 10) || 1;
        const delay = Math.max(Math.pow(2, attempt) * 1000, retryAfter * 1000);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      return response;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      // Back off before retrying transient (e.g. network) errors as well.
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
    }
  }
}
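Usage is the same as a plain fetch call. The endpoint path and apiKey below are illustrative:

const response = await makeRequestWithRetry('https://api.unizo.ai/v1/vulnerabilities', {
  headers: { 'Authorization': `Bearer ${apiKey}` }
});
const vulnerabilities = await response.json();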
Rate Limit Monitoring
Monitor your rate limit usage to avoid hitting limits:
function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const resetAfter = parseInt(response.headers.get('X-RateLimit-Reset-After'), 10);

  if (remaining < limit * 0.1) { // Less than 10% remaining
    console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
    console.warn(`Rate limit resets in ${resetAfter} seconds`);
  }
}
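Call it after each response so warnings appear while there is still headroom. A minimal sketch; the endpoint path and apiKey are illustrative:

const response = await fetch('https://api.unizo.ai/v1/assets', {
  headers: { 'Authorization': `Bearer ${apiKey}` }
});
checkRateLimit(response); // warn before the limit is actually hit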
Best Practices
1. Cache Responses
Cache API responses when possible to reduce the number of requests:
const cache = new Map();

async function getCachedData(url, ttl = 300) { // 5 minutes TTL (in seconds)
  const cached = cache.get(url);
  if (cached && Date.now() - cached.timestamp < ttl * 1000) {
    return cached.data;
  }

  const response = await fetch(url);
  const data = await response.json();

  cache.set(url, {
    data,
    timestamp: Date.now()
  });

  return data;
}
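Repeated calls within the TTL are served from memory and never reach the API. A small sketch (the path is illustrative, and note the cache is per-process):

const first = await getCachedData('https://api.unizo.ai/v1/assets');  // hits the API
const second = await getCachedData('https://api.unizo.ai/v1/assets'); // served from cache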
2. Use Webhooks
Replace polling with webhooks for real-time updates:
// Instead of polling every minute
setInterval(async () => {
  const response = await fetch('/api/vulnerabilities');
  const vulnerabilities = await response.json();
  // Process vulnerabilities
}, 60000);

// Use webhooks for real-time updates
app.post('/webhooks/vulnerabilities', (req, res) => {
  const vulnerability = req.body;
  // Process the new vulnerability immediately
  res.status(200).send('OK');
});
3. Batch Requests
Combine multiple operations into single requests when possible:
// Instead of multiple individual requests
const promises = ids.map(id => fetch(`/api/assets/${id}`));
const responses = await Promise.all(promises);

// Use batch endpoint
const response = await fetch('/api/assets/batch', {
  method: 'POST',
  body: JSON.stringify({ ids }),
  headers: { 'Content-Type': 'application/json' }
});
4. Request Queuing
Implement a request queue to control request rate:
class RateLimitedQueue {
  constructor(requestsPerMinute = 60) {
    this.queue = [];
    this.processing = false;
    this.interval = 60000 / requestsPerMinute; // ms between requests
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { requestFn, resolve, reject } = this.queue.shift();
      try {
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      if (this.queue.length > 0) {
        await new Promise(resolve => setTimeout(resolve, this.interval));
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(60); // 60 requests per minute
const result = await queue.add(() =>
  fetch('/api/vulnerabilities').then(r => r.json())
);
Endpoint-Specific Limits
Some endpoints have specific rate limits:
Search Endpoints
GET /api/search/* - 30 requests per minute
Batch Operations
POST /api/*/batch - 10 requests per minute
Webhook Testing
POST /api/webhooks/test - 5 requests per minute
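These tighter limits can be respected client-side by giving each limit class its own RateLimitedQueue from above. The split below is a sketch, not a prescribed client design, and the search query is illustrative:

const searchQueue = new RateLimitedQueue(30);   // GET /api/search/*
const batchQueue = new RateLimitedQueue(10);    // POST /api/*/batch
const defaultQueue = new RateLimitedQueue(600); // everything else (Pro plan)

const results = await searchQueue.add(() =>
  fetch('/api/search/assets?q=openssl').then(r => r.json())
);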
Increasing Rate Limits
To increase your rate limits:
- Upgrade your plan: Higher plans come with increased limits
- Contact support: Enterprise customers can request custom limits
- Optimize usage: Implement caching and webhooks to reduce requests
Monitoring Usage
Track your API usage in the Unizo Console:
- Navigate to Account → API Usage
- View real-time and historical usage data
- Set up alerts for approaching limits
Error Handling Example
Complete example with proper error handling:
class UnizoClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseURL = 'https://api.unizo.ai/v1';
  }

  async request(endpoint, options = {}) {
    const url = `${this.baseURL}${endpoint}`;
    const config = {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
        ...options.headers
      }
    };

    let retries = 0;
    const maxRetries = 3;

    while (retries <= maxRetries) {
      try {
        const response = await fetch(url, config);

        // Check rate limit headers on every response
        this.checkRateLimit(response);

        if (response.status === 429) {
          const retryAfter = parseInt(response.headers.get('X-RateLimit-Reset-After'), 10) || 1;
          if (retries === maxRetries) {
            throw new Error(`Rate limit exceeded. Retry after ${retryAfter} seconds.`);
          }
          console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
          await this.sleep(retryAfter * 1000);
          retries++;
          continue;
        }

        if (!response.ok) {
          throw new Error(`HTTP ${response.status}: ${response.statusText}`);
        }

        return await response.json();
      } catch (error) {
        if (retries === maxRetries) throw error;
        retries++;
        await this.sleep(Math.pow(2, retries) * 1000);
      }
    }
  }

  checkRateLimit(response) {
    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
    const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

    if (remaining < limit * 0.1) {
      console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
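A minimal usage sketch (the endpoint path and the environment variable name are assumptions):

const client = new UnizoClient(process.env.UNIZO_API_KEY);

// Retries automatically on 429 and warns when the window is nearly exhausted.
const vulnerabilities = await client.request('/vulnerabilities');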