
Efficient API Pagination: Offset vs Cursor and Performance Considerations

This article explains the challenges of returning large result sets in APIs, compares traditional offset‑based pagination with cursor‑based approaches, shows performance differences through SQL examples, and discusses why cursor pagination is more efficient for deep paging and concurrent data modifications.

High Availability Architecture

When designing APIs that may return thousands of records, returning all results at once puts unnecessary load on the server, client, and network, so pagination is used.

Typical pagination uses an offset or page number, for example:

GET /api/products?page=10
{"items": [...100 products]}

To access subsequent data, the pagination parameter is changed:

GET /api/products?page=11
{"items": [...another 100 products]}

Using offset often involves queries like ?offset=1000 or OFFSET 1000 LIMIT 100 in SQL, which is sub‑optimal because the database must skip the preceding rows, incurring extra overhead in PostgreSQL, Elasticsearch, MongoDB, etc.
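For illustration, a page number usually translates into an OFFSET/LIMIT query like this (a minimal sketch; the function name and the page size of 100 match the example responses above but are otherwise assumptions, not from the article):

```python
# Minimal sketch: translating a page-number parameter into OFFSET/LIMIT.
# PAGE_SIZE of 100 matches the example responses; names are illustrative.
PAGE_SIZE = 100

def offset_query(page: int) -> str:
    # Page 1 starts at offset 0; page 10 forces the database to
    # produce and discard the first 900 rows before returning any.
    offset = (page - 1) * PAGE_SIZE
    return f"SELECT * FROM products ORDER BY id OFFSET {offset} LIMIT {PAGE_SIZE}"
```

The discarded-rows cost grows linearly with the page number, which is exactly the overhead the EXPLAIN output below makes visible.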

Although simple, offset pagination is inefficient.

A better method uses a database cursor, a pointer to a row that allows the database to return the next N rows without scanning and discarding earlier rows, typically leveraging an indexed field.

Example API response with a cursor token:

GET /api/products
{"items": [...100 products],
 "cursor": "qWe"}

The next request uses the cursor to retrieve the following page:

GET /api/products?cursor=qWe
{"items": [...100 products],
 "cursor": "qWr"}

An implementation often encodes the ID of the last returned row as the cursor and translates it into a query such as WHERE id > :cursor ORDER BY id LIMIT 100.
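A minimal sketch of that implementation, assuming the cursor is simply a base64-encoded row ID (the encoding scheme, function names, and table name are illustrative, not prescribed by the article):

```python
import base64

PAGE_SIZE = 100

def encode_cursor(last_id: int) -> str:
    # Opaque token, so clients don't depend on internal row IDs.
    return base64.urlsafe_b64encode(str(last_id).encode()).decode()

def decode_cursor(token: str) -> int:
    return int(base64.urlsafe_b64decode(token.encode()).decode())

def cursor_query(token=None) -> str:
    if token is None:
        # First page: no cursor yet.
        return f"SELECT * FROM products ORDER BY id LIMIT {PAGE_SIZE}"
    last_id = decode_cursor(token)
    # Keyset predicate: an index on id lets the database seek
    # directly to the next page instead of skipping rows.
    return f"SELECT * FROM products WHERE id > {last_id} ORDER BY id LIMIT {PAGE_SIZE}"
```

The server returns encode_cursor(id_of_last_row) alongside each page, and the client passes it back unchanged on the next request.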

A PostgreSQL EXPLAIN ANALYZE comparison shows the offset query taking about 39 ms, while the cursor-based query completes in roughly 0.09 ms on the same dataset:

=# explain analyze select id from product offset 10000 limit 100;
   Limit  (cost=1114.26..1125.40 rows=100 width=4) (actual time=39.431..39.561 rows=100 loops=1)
   ->  Seq Scan on product  (cost=0.00..1274406.22 rows=11437243 width=4) (actual time=0.015..39.123 rows=10100 loops=1)
   Planning Time: 0.117 ms
   Execution Time: 39.589 ms
=# explain analyze select id from product where id > 10000 limit 100;
   Limit  (cost=0.00..11.40 rows=100 width=4) (actual time=0.016..0.067 rows=100 loops=1)
   ->  Seq Scan on product  (cost=0.00..1302999.32 rows=11429082 width=4) (actual time=0.015..0.052 rows=100 loops=1)
        Filter: (id > 10000)
   Planning Time: 0.164 ms
   Execution Time: 0.094 ms

The difference here is more than two orders of magnitude; the actual impact depends on table size and storage implementation.

Cursor pagination also avoids issues caused by concurrent edits, such as duplicate or missing items when records are inserted or deleted while paging through offset‑based pages.
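This effect is easy to reproduce in miniature. The sketch below (list-based, purely illustrative) shows a row inserted at the front between two page fetches: offset paging returns one item twice, while keyset paging continues cleanly from the last seen ID:

```python
# Illustrative simulation of the concurrent-modification problem.
# A page size of 3 keeps the example small; real APIs would use 100.
rows = list(range(1, 11))  # ids 1..10
PAGE = 3

def offset_page(data, page):
    start = page * PAGE
    return data[start:start + PAGE]

def keyset_page(data, after_id):
    # Equivalent of WHERE id > :after_id ORDER BY id LIMIT :PAGE
    return [r for r in data if r > after_id][:PAGE]

page1 = offset_page(rows, 0)                 # [1, 2, 3]
rows.insert(0, 0)                            # concurrent insert shifts everything
page2_offset = offset_page(rows, 1)          # [3, 4, 5] -> id 3 is returned twice
page2_keyset = keyset_page(rows, page1[-1])  # [4, 5, 6] -> no duplicate
```

A deletion between fetches causes the mirror-image failure with offsets: an item is skipped entirely.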

Hacker News comments highlight additional advantages of cursor pagination, including stable ordering, the ability to resume paging after data changes, and the option to combine the cursor predicate with additional query conditions.

References include a detailed article on no‑offset pagination (https://use-the-index-luke.com/no-offset) and Google API design guidelines for page tokens (https://google.aip.dev/158).

Overall, cursor‑based pagination provides significant performance improvements for large tables and deep paging scenarios.

Tags: performance, SQL, pagination, API, cursor, offset
Written by High Availability Architecture.