Overview
The Inbox API enforces rate limits to ensure fair usage and platform stability. This guide covers the limits, how to detect them, and strategies for handling them gracefully.
Current Limits
| Category | Limit | Window | Description |
|---|---|---|---|
| Read operations | 100 requests | per minute | GET requests (list, get, lookup) |
| Write operations | 30 requests | per minute | POST, PATCH, DELETE requests |
| Message sending | 10 messages | per minute | Per account link |
Limits are per API token. Different tokens for the same team have separate limits.
Rate Limit Headers
Every response includes rate limit information in its headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1705312800
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
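As a minimal sketch, these headers can be parsed into a typed object before acting on them. The header names appear lowercased here because axios normalizes response header names to lowercase; the `parseRateLimit` helper name is illustrative, not part of the API:

```typescript
// Parsed view of the X-RateLimit-* headers described above.
interface RateLimitInfo {
  limit: number;      // max requests allowed in the window
  remaining: number;  // requests left in the current window
  resetMs: number;    // window reset time as a JS timestamp (ms)
}

// axios lowercases response header names, hence the lowercase keys.
function parseRateLimit(headers: Record<string, string>): RateLimitInfo {
  return {
    limit: parseInt(headers['x-ratelimit-limit'] ?? '0', 10),
    remaining: parseInt(headers['x-ratelimit-remaining'] ?? '0', 10),
    resetMs: parseInt(headers['x-ratelimit-reset'] ?? '0', 10) * 1000
  };
}
```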
Rate Limit Responses
When you exceed a limit, the API returns:

Status: 429 Too Many Requests

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded. Try again in 60 seconds.",
    "status": 429
  }
}

Headers:

Retry-After: 60
Exponential Backoff
The recommended pattern for handling rate limits:
import axios from 'axios';

async function withRetry<T>(
  fn: () => Promise<T>,
  options: {
    maxRetries?: number;
    baseDelay?: number;
    maxDelay?: number;
  } = {}
): Promise<T> {
  const {
    maxRetries = 5,
    baseDelay = 1000,
    maxDelay = 60000
  } = options;

  let attempt = 0;

  while (true) {
    try {
      return await fn();
    } catch (error) {
      if (!axios.isAxiosError(error)) throw error;

      const status = error.response?.status;

      // Only retry on rate limits (429) and server errors (5xx)
      const isServerError = status !== undefined && status >= 500 && status < 600;
      if (status !== 429 && !isServerError) {
        throw error;
      }

      attempt++;
      if (attempt >= maxRetries) {
        throw new Error(`Max retries (${maxRetries}) exceeded`);
      }

      // Calculate delay
      let delay: number;
      if (status === 429) {
        // Use the Retry-After header if available
        const retryAfter = error.response?.headers['retry-after'];
        delay = retryAfter
          ? parseInt(retryAfter, 10) * 1000
          : baseDelay * Math.pow(2, attempt);
      } else {
        // Exponential backoff for server errors
        delay = baseDelay * Math.pow(2, attempt);
      }
      delay = Math.min(delay, maxDelay);

      console.log(`Retry ${attempt}/${maxRetries} after ${delay}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage
const thread = await withRetry(() => client.get(`/threads/${threadId}`));
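One refinement worth considering: adding random jitter to the computed backoff so that many clients rate-limited at the same instant do not all retry in lockstep. A sketch of the "full jitter" variant (the `jitteredDelay` helper name is illustrative, not part of the API):

```typescript
// "Full jitter" backoff: pick a random delay between 0 and the
// exponential ceiling, capped at maxDelay. This spreads out retries
// from clients that were all rate-limited at the same moment.
function jitteredDelay(attempt: number, baseDelay = 1000, maxDelay = 60000): number {
  const ceiling = Math.min(baseDelay * Math.pow(2, attempt), maxDelay);
  return Math.floor(Math.random() * ceiling);
}
```

Swapping this in for the `baseDelay * Math.pow(2, attempt)` expressions above keeps the same average wait while decorrelating concurrent retries.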
Bulk Operations
For operations that process many items, implement rate-aware batching:
async function bulkSendMessages(
  threads: Array<{ id: string; message: string }>,
  accountLinkId: string
) {
  const results = [];
  const delayMs = 6000; // 10 messages/minute = 1 every 6 seconds

  for (let i = 0; i < threads.length; i++) {
    const thread = threads[i];
    try {
      const { data } = await withRetry(() =>
        client.post(`/threads/${thread.id}/messages`, {
          text: thread.message
        })
      );
      results.push({
        threadId: thread.id,
        success: true,
        messageId: data.message.id
      });
    } catch (error) {
      results.push({
        threadId: thread.id,
        success: false,
        error: error instanceof Error ? error.message : String(error)
      });
    }

    // Wait between messages to avoid hitting the limit
    if (i < threads.length - 1) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }

  return results;
}
Proactive Rate Limit Tracking
Track your usage to avoid hitting limits:
class RateLimitTracker {
  private remaining: number;
  private resetTime: number;

  constructor() {
    this.remaining = 100;
    this.resetTime = Date.now() + 60000;
  }

  update(headers: Record<string, string>) {
    this.remaining = parseInt(headers['x-ratelimit-remaining'] || '100', 10);
    this.resetTime = parseInt(headers['x-ratelimit-reset'] || '0', 10) * 1000;
  }

  async waitIfNeeded() {
    if (this.remaining <= 5) {
      const waitTime = Math.max(0, this.resetTime - Date.now());
      if (waitTime > 0) {
        console.log(`Approaching rate limit. Waiting ${waitTime}ms`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
      }
    }
  }

  canMakeRequest(): boolean {
    return this.remaining > 0 || Date.now() > this.resetTime;
  }
}

// Usage with an axios interceptor
const tracker = new RateLimitTracker();

client.interceptors.response.use(
  response => {
    tracker.update(response.headers);
    return response;
  },
  error => {
    if (error.response?.headers) {
      tracker.update(error.response.headers);
    }
    return Promise.reject(error);
  }
);

// Before each request
async function makeRequest<T>(fn: () => Promise<T>): Promise<T> {
  await tracker.waitIfNeeded();
  return fn();
}
Paginating Large Datasets
When paginating through large datasets, check the remaining quota between pages:
async function getAllThreadsWithRateLimits() {
  const allThreads = [];
  let cursor = undefined;

  do {
    const { data, headers } = await client.get('/threads', {
      params: {
        limit: 100, // Max per page
        ...(cursor && {
          'cursor[id]': cursor.id,
          'cursor[timestamp]': cursor.timestamp
        })
      }
    });

    allThreads.push(...data.threads);
    cursor = data.nextCursor;

    // Check remaining requests before fetching the next page
    const remaining = parseInt(headers['x-ratelimit-remaining'] || '100', 10);
    if (remaining < 10 && cursor) {
      const resetTime = parseInt(headers['x-ratelimit-reset'] || '0', 10) * 1000;
      const waitTime = Math.max(0, resetTime - Date.now());
      console.log(`Rate limit approaching. Waiting ${waitTime}ms`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }
  } while (cursor);

  return allThreads;
}
Best Practices
Fetch 100 items per page instead of 50 to halve your request count:

const { data } = await client.get('/threads', {
  params: { limit: 100 }
});

Group operations and spread them over time:

// Instead of immediate parallel requests
const results = await Promise.all(ids.map(id => getThread(id)));

// Use sequential requests with delays
const delay = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));
for (const id of ids) {
  await getThread(id);
  await delay(100); // Small delay between requests
}

Cache read results to avoid repeated requests:

const cache = new Map<string, { data: any; expiry: number }>();

async function getCachedThread(threadId: string) {
  const cached = cache.get(threadId);
  if (cached && cached.expiry > Date.now()) {
    return cached.data;
  }

  const { data } = await client.get(`/threads/${threadId}`);
  cache.set(threadId, {
    data: data.thread,
    expiry: Date.now() + 60000 // 1 minute cache
  });
  return data.thread;
}

Monitor rate limit headers: track X-RateLimit-Remaining so you can slow down before receiving a 429.

Implement circuit breakers: stop making requests temporarily when you are consistently hitting limits.
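A circuit breaker can be sketched as a small state machine: after a run of consecutive failures it "opens" and rejects calls immediately, then allows a trial request once a cooldown has passed. This is an illustrative minimal version (the class name and thresholds are assumptions, not part of the API):

```typescript
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly threshold = 5,      // consecutive failures before opening
    private readonly cooldownMs = 30000, // how long to stay open
    private readonly now: () => number = Date.now // injectable clock for testing
  ) {}

  async exec<T>(fn: () => Promise<T>): Promise<T> {
    // While open and within the cooldown, fail fast without calling the API
    if (this.failures >= this.threshold && this.now() - this.openedAt < this.cooldownMs) {
      throw new Error('Circuit open: skipping request');
    }
    try {
      const result = await fn();
      this.failures = 0; // a success closes the circuit
      return result;
    } catch (error) {
      this.failures++;
      if (this.failures >= this.threshold) this.openedAt = this.now();
      throw error;
    }
  }
}
```

Wrapping calls in `breaker.exec(() => client.get(...))` ensures that a burst of 429s stops traffic for the cooldown period instead of hammering the API.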
Message Sending Limits
Message sending has special limits per account link to respect platform constraints:
| Platform | Limit | Notes |
|---|---|---|
| X (Twitter) | 10 per minute | Per account link |
Sending too many messages too quickly may trigger platform-level restrictions on your X account, separate from API rate limits. Space out messages appropriately.
class MessageRateLimiter {
  private timestamps: number[] = [];
  private readonly windowMs = 60000;
  private readonly maxMessages = 10;

  async waitToSend(): Promise<void> {
    const now = Date.now();

    // Remove timestamps outside the window
    this.timestamps = this.timestamps.filter(t => t > now - this.windowMs);

    if (this.timestamps.length >= this.maxMessages) {
      const oldestInWindow = this.timestamps[0];
      const waitTime = oldestInWindow + this.windowMs - now;
      if (waitTime > 0) {
        console.log(`Message rate limit. Waiting ${waitTime}ms`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
      }
    }

    this.timestamps.push(Date.now());
  }
}

// Usage
const limiter = new MessageRateLimiter();

async function sendMessage(threadId: string, text: string) {
  await limiter.waitToSend();
  return client.post(`/threads/${threadId}/messages`, { text });
}