Overview

The Inbox API uses cursor-based pagination for list endpoints that can return large datasets. This approach provides consistent results even when data changes between requests.
Only GET /threads and GET /threads/{threadId}/messages are paginated. Other list endpoints (GET /account-links, GET /members, GET /tags, GET /statuses) return all results as a flat array with no pagination.

Response structure

Paginated endpoints return an array of results and a nextCursor, which is null on the last page:
{
  "threads": [
    { "id": "l44e15irdq4db30i77cgphhx", "..." },
    { "id": "m55f26jsep5ec41j88dhqiiy", "..." }
  ],
  "nextCursor": {
    "id": "m55f26jsep5ec41j88dhqiiy",
    "timestamp": "2025-01-15T10:30:00.000Z"
  }
}
Field | Description
threads or messages | Array of results (key varies by endpoint)
nextCursor | Object containing id and timestamp. null when there are no more pages.
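The response shape can be written as TypeScript types for use in client code. This is a sketch based on the example above: thread fields other than id are omitted, and the type names (Cursor, ThreadsPage) are illustrative, not part of the API.

```typescript
// Cursor object returned by paginated endpoints.
interface Cursor {
  id: string;
  timestamp: string; // ISO 8601
}

// Response shape for GET /threads; GET /threads/{threadId}/messages
// is identical except the array key is `messages`.
interface ThreadsPage {
  threads: Array<{ id: string }>; // other thread fields omitted here
  nextCursor: Cursor | null;      // null on the last page
}

// Example object matching the documented response:
const page: ThreadsPage = {
  threads: [{ id: "l44e15irdq4db30i77cgphhx" }],
  nextCursor: {
    id: "l44e15irdq4db30i77cgphhx",
    timestamp: "2025-01-15T10:30:00.000Z"
  }
};
```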

Basic usage

Pass cursorId and cursorTimestamp from the previous response’s nextCursor to fetch the next page:
import axios from 'axios';

const client = axios.create({
  baseURL: 'https://inboxapp.com/api/v1',
  headers: {
    'Authorization': `Bearer ${process.env.INBOX_API_TOKEN}`,
    'Content-Type': 'application/json'
  }
});

async function getAllThreads() {
  const allThreads: any[] = [];
  let cursorId: string | undefined;
  let cursorTimestamp: string | undefined;

  do {
    const { data } = await client.get('/threads', {
      params: {
        limit: 100,
        ...(cursorId && { cursorId, cursorTimestamp })
      }
    });

    allThreads.push(...data.threads);
    cursorId = data.nextCursor?.id;
    cursorTimestamp = data.nextCursor?.timestamp;
  } while (cursorId);

  return allThreads;
}

const threads = await getAllThreads();
console.log(`Fetched ${threads.length} threads`);
Use the flat cursorId and cursorTimestamp parameters. The bracket notation form (cursor[id], cursor[timestamp]) is also supported but deprecated.
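A small helper keeps cursor handling in one place and avoids hand-building cursor values. cursorParams is a hypothetical helper name, not part of the API; it simply spreads a response's nextCursor into the flat parameters:

```typescript
interface Cursor {
  id: string;
  timestamp: string;
}

// Convert a response's nextCursor into the flat cursorId/cursorTimestamp
// query parameters; returns an empty object for the first page.
function cursorParams(next: Cursor | null | undefined): Record<string, string> {
  return next ? { cursorId: next.id, cursorTimestamp: next.timestamp } : {};
}

// Usage: spread into the request params.
// const { data } = await client.get('/threads', {
//   params: { limit: 100, ...cursorParams(prev?.nextCursor) }
// });
```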

Page size

Endpoint | Default | Maximum
GET /threads | 20 | 100
GET /threads/{id}/messages | 20 | 100
Request more items per page to reduce API calls:
const { data } = await client.get('/threads', {
  params: { limit: 100 }
});

Streaming pattern

For real-time processing without loading everything into memory:
async function* streamThreads() {
  let cursorId: string | undefined;
  let cursorTimestamp: string | undefined;

  do {
    const { data } = await client.get('/threads', {
      params: {
        limit: 100,
        ...(cursorId && { cursorId, cursorTimestamp })
      }
    });

    for (const thread of data.threads) {
      yield thread;
    }

    cursorId = data.nextCursor?.id;
    cursorTimestamp = data.nextCursor?.timestamp;
  } while (cursorId);
}

for await (const thread of streamThreads()) {
  await processThread(thread);
}
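The generator also composes with other async-iteration utilities. For example, a generic batching helper (inBatches is illustrative, not part of the API) lets you bulk-process threads without buffering the full result set:

```typescript
// Group any async iterable into arrays of up to `size` items.
async function* inBatches<T>(
  source: AsyncIterable<T>,
  size: number
): AsyncGenerator<T[]> {
  let batch: T[] = [];
  for await (const item of source) {
    batch.push(item);
    if (batch.length === size) {
      yield batch;
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // final partial batch
}

// Usage with the streaming pattern above:
// for await (const batch of inBatches(streamThreads(), 50)) {
//   await bulkProcess(batch); // hypothetical bulk handler
// }
```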

Pagination with filters

Filters apply before pagination. The cursor tracks position within filtered results:
async function getFilteredThreads(tagId: string) {
  const allThreads: any[] = [];
  let cursorId: string | undefined;
  let cursorTimestamp: string | undefined;

  do {
    const { data } = await client.get('/threads', {
      params: {
        'filters[tags][selectedIds][0]': tagId,
        limit: 100,
        ...(cursorId && { cursorId, cursorTimestamp })
      }
    });

    allThreads.push(...data.threads);
    cursorId = data.nextCursor?.id;
    cursorTimestamp = data.nextCursor?.timestamp;
  } while (cursorId);

  return allThreads;
}
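The bracket-indexed filter keys can be generated rather than hand-written. A sketch, assuming the filters[tags][selectedIds][n] key convention shown above extends to multiple tag IDs (tagFilterParams is a hypothetical helper):

```typescript
// Build filter query parameters for one or more tag IDs, following the
// filters[tags][selectedIds][n] key convention used by GET /threads.
function tagFilterParams(tagIds: string[]): Record<string, string> {
  const params: Record<string, string> = {};
  tagIds.forEach((id, i) => {
    params[`filters[tags][selectedIds][${i}]`] = id;
  });
  return params;
}

// Usage:
// const { data } = await client.get('/threads', {
//   params: { ...tagFilterParams(['tag1', 'tag2']), limit: 100 }
// });
```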

Error handling

Handle rate limits and transient errors during pagination:
async function getAllThreadsWithRetry() {
  const allThreads: any[] = [];
  let cursorId: string | undefined;
  let cursorTimestamp: string | undefined;
  let retries = 0;
  const maxRetries = 3;

  while (true) {
    try {
      const { data } = await client.get('/threads', {
        params: {
          limit: 100,
          ...(cursorId && { cursorId, cursorTimestamp })
        }
      });

      allThreads.push(...data.threads);
      cursorId = data.nextCursor?.id;
      cursorTimestamp = data.nextCursor?.timestamp;
      retries = 0;

      if (!cursorId) break;

    } catch (error) {
      if (axios.isAxiosError(error) && error.response?.status === 429) {
        // Retry-After arrives as a string header; coerce before multiplying.
        const retryAfter = Number(error.response.headers['retry-after']) || 60;
        await new Promise(r => setTimeout(r, retryAfter * 1000));
        continue;
      }

      if (retries < maxRetries) {
        retries++;
        await new Promise(r => setTimeout(r, 1000 * retries));
        continue;
      }

      throw error;
    }
  }

  return allThreads;
}

Best practices

Use the generator pattern to process items as they arrive instead of loading everything into memory.
Always include rate limit handling in pagination code. Bulk fetches can hit the 300 requests/minute limit quickly.
Treat cursor values as opaque. Don’t modify or construct them manually — always use values returned from the API.
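The retry logic from the error-handling section can be factored into a reusable wrapper so the streaming and filtered variants get the same protection. A minimal sketch (withRetry and its parameters are illustrative, not part of the API client):

```typescript
// Retry an async operation with linear backoff. Rate-limit-specific
// handling (e.g. honoring Retry-After) is left to the caller so this
// helper stays transport-agnostic.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000
): Promise<T> {
  let attempt = 0;
  while (true) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) throw error;
      attempt++;
      await new Promise(r => setTimeout(r, baseDelayMs * attempt));
    }
  }
}

// Usage inside a pagination loop:
// const { data } = await withRetry(() =>
//   client.get('/threads', { params: { limit: 100 } })
// );
```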