Eliminate Unstable Pagination Anchors: Reliable Cursor and Timestamp Strategies for Backend Systems

This article explains why offset-based pagination can return duplicate or missing records when the underlying data changes, and compares three practical remedies: cursor pagination keyed on a timestamp plus a unique id, fixed-window timestamp filtering, and Elasticsearch's search_after/scroll mechanisms, covering the implementation, trade-offs, and suitable scenarios for each.

ITPUB

Introduction

In backend development, pagination queries are among the most basic and frequently used features. Whether it is a list in an operations backend, or a product list, post list, or follower list in an app, they all rely on "time-descending + pagination". Yet this seemingly simple requirement hides a trap of "unstable pagination anchors" that can cause duplicated or missing data, hurting user experience and even causing financial loss.

Typical Problems (Cases)

Social feed: posts are ordered by newest time. When a new post is published while a user is paging, the same post may appear on multiple pages.

Coupon distribution: after batch issuing coupons, newly added coupons cause some users to receive duplicate coupons, leading to loss.

Payment flow query: a non-unique sort field causes unstable ordering and can even make rows go missing across pages.

Root‑Cause Analysis

Using LIMIT offset, size (MySQL) or from + size (Elasticsearch) means the pagination anchor is a row offset into the current result set. Inserting or deleting rows changes that result set, so the next page's offset no longer points where it did.

Example walk‑through (MySQL):

Step 1 – Load first page:

SELECT * FROM t ORDER BY create_time DESC LIMIT 0, 10;

Result: posts P1‑P10 (P1 newest).

Step 2 – Load second page while a new post P0 is inserted:

SELECT * FROM t ORDER BY create_time DESC LIMIT 10, 10;

Expected P11‑P20, but because the new row shifted every offset by one, the query returns P10‑P19: P10 appears on both pages. If many rows are inserted, the same data can repeat across consecutive pages, and some later rows may never be reached.
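The drift is easy to reproduce outside the database. Below is a minimal in-memory sketch (class and method names are illustrative, not part of the article's code) that models LIMIT offset, size as a slice of a newest-first list and shows the duplicate appearing on page 2:

```java
import java.util.ArrayList;
import java.util.List;

// Simulates offset drift: page 2 is fetched after a new row is inserted
// at the head of the list, so the last row of page 1 reappears.
public class OffsetDrift {
    static List<String> page(List<String> rows, int offset, int size) {
        int from = Math.min(offset, rows.size());
        int to = Math.min(offset + size, rows.size());
        return new ArrayList<>(rows.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> posts = new ArrayList<>();
        for (int i = 1; i <= 20; i++) posts.add("P" + i); // P1 newest

        List<String> page1 = page(posts, 0, 10);  // P1..P10
        posts.add(0, "P0");                       // a new post arrives
        List<String> page2 = page(posts, 10, 10); // P10..P19, not P11..P20

        System.out.println(page2.get(0));         // prints "P10" — duplicated
    }
}
```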

Solution 1 – "Timestamp + Unique Key" Cursor Pagination

Core idea: abandon offset and use the last record of the previous page as the pagination anchor.

Implementation Steps

Define a unique sorting key that combines a time field (e.g., create_time) and a unique identifier (e.g., id) to guarantee deterministic ordering.

When paging, do not pass offset. Instead, pass the create_time and id of the last record from the previous page.

For the next page, query with WHERE (create_time, id) < (last_create_time, last_id) (or > for ascending) and keep the same ORDER BY create_time DESC, id DESC with LIMIT size.

SQL Example (MySQL)

First page (no anchor):

SELECT id, title, create_time FROM posts ORDER BY create_time DESC, id DESC LIMIT 10;

Assume the last row of page 1 has create_time='2024-05-20 14:30:00' and id=100.

Second page (using anchor):

SELECT id, title, create_time FROM posts WHERE create_time < '2024-05-20 14:30:00' OR (create_time = '2024-05-20 14:30:00' AND id < 100) ORDER BY create_time DESC, id DESC LIMIT 10;

Note: the simpler-looking WHERE create_time <= '2024-05-20 14:30:00' AND id < 100 is wrong — it silently drops rows whose create_time is earlier but whose id is >= 100. MySQL also accepts the row-value form WHERE (create_time, id) < ('2024-05-20 14:30:00', 100), though the optimizer does not always use the composite index for it.
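To see the anchor comparison in isolation, here is a minimal in-memory sketch of keyset pagination (class and field names are illustrative): the filter implements create_time < t OR (create_time = t AND id < lastId), and the sort mirrors ORDER BY create_time DESC, id DESC:

```java
import java.util.List;
import java.util.stream.Collectors;

// Keyset (cursor) pagination over an in-memory list. The anchor is the
// (createTime, id) pair of the last row of the previous page.
public class KeysetPage {
    record Post(long createTime, long id) {}

    static List<Post> nextPage(List<Post> posts, long lastTime, long lastId, int size) {
        return posts.stream()
            // strictly "before" the anchor in (createTime, id) order
            .filter(p -> p.createTime() < lastTime
                      || (p.createTime() == lastTime && p.id() < lastId))
            // ORDER BY create_time DESC, id DESC
            .sorted((a, b) -> a.createTime() != b.createTime()
                      ? Long.compare(b.createTime(), a.createTime())
                      : Long.compare(b.id(), a.id()))
            .limit(size)
            .collect(Collectors.toList());
    }
}
```

Because the anchor is a concrete row rather than a count, inserting new rows ahead of it cannot shift the next page.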

Advantages

Eliminates duplicate/skip issues because the anchor is a concrete data marker unaffected by inserts or deletes.

High performance: the WHERE clause can use a composite index on (create_time, id), avoiding full‑table scans.

Strong compatibility: also solves non‑unique sorting problems.

Disadvantages

Does not support arbitrary page jumps; only next/previous or infinite‑scroll patterns are feasible.

Applicable Scenarios

All C‑side infinite‑scroll lists (posts, products, comments, etc.).

Large data volumes (hundreds of thousands of rows or more) where pagination performance matters.

Batch jobs that need to traverse the whole table (e.g., historical data back‑fill).

Solution 2 – Fixed‑Window Timestamp Filtering

When direct page jumps are required, keep a fixed time window for the whole pagination session.

Implementation Steps

On the first request, record the current time as max_create_time.

All subsequent pages add WHERE create_time <= max_create_time, so newly inserted rows are excluded.

If the user refreshes, obtain a new max_create_time to update the window.

SQL Example (MySQL)

First page (record window):

SELECT id, title, create_time FROM posts WHERE create_time <= '2024-05-20 15:00:00' ORDER BY create_time DESC, id DESC LIMIT 0,10;

Second page (reuse same window):

SELECT id, title, create_time FROM posts WHERE create_time <= '2024-05-20 15:00:00' ORDER BY create_time DESC, id DESC LIMIT 10,10;
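The window's effect can be sketched in memory as well (names are illustrative): every page filters by the timestamp captured on the first request, so rows inserted after that instant are invisible and the offsets inside the window stay stable:

```java
import java.util.List;
import java.util.stream.Collectors;

// Fixed-window pagination: all pages share the windowTs captured on the
// first request, so later inserts cannot shift offsets within the window.
public class FixedWindow {
    record Post(long createTime, long id) {}

    static List<Post> page(List<Post> posts, long windowTs, int offset, int size) {
        return posts.stream()
            .filter(p -> p.createTime() <= windowTs) // exclude rows newer than the window
            .sorted((a, b) -> a.createTime() != b.createTime()
                      ? Long.compare(b.createTime(), a.createTime())
                      : Long.compare(b.id(), a.id()))
            .skip(offset)   // LIMIT offset, size still applies inside the window
            .limit(size)
            .collect(Collectors.toList());
    }
}
```

Note that skip/limit here is exactly the LIMIT offset, size pattern, which is why deletions and deep pagination remain weak points of this approach.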

Advantages

Simple to implement; only need to pass the initial timestamp.

Effectively prevents new data from appearing in the current pagination, eliminating duplication.

Disadvantages

Cannot handle data deletions that cause skips.

Users will not see newly added data until they refresh.

Deep pagination still suffers from LIMIT offset, size performance issues.

Applicable Scenarios

All C‑side infinite‑scroll lists where data is only appended (e.g., visitor logs).

Situations where deep pagination is rare.

Solution 3 – Elasticsearch‑Specific Optimizations

Both cursor pagination and timestamp filtering can be adapted to Elasticsearch, but deep pagination hits the "Result window is too large" limit (from + size is capped at 10,000 by default via index.max_result_window). Elasticsearch provides two mechanisms:

search_after: requires a deterministic sort (include a unique tiebreaker such as id); the next page passes the sort values of the last hit as a cursor, avoiding large from offsets.

scroll: takes a snapshot of the result set and pages through it; now discouraged for deep pagination in favor of search_after (combined with a point-in-time, PIT, for a consistent view).

Java Code Example (search_after)

// Assumes an initialized RestHighLevelClient named `client`.
SearchRequest request = new SearchRequest("posts");
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.query(QueryBuilders.matchAllQuery());
sourceBuilder.sort("create_time", SortOrder.DESC);
sourceBuilder.sort("id", SortOrder.DESC); // unique tiebreaker keeps the cursor deterministic
sourceBuilder.size(10);
request.source(sourceBuilder);
SearchResponse response = client.search(request, RequestOptions.DEFAULT);

// The sort values of the last hit form the cursor for the next page
SearchHit[] hits = response.getHits().getHits();
if (hits.length == 0) {
    return; // no more pages
}
Object[] lastSortValues = hits[hits.length - 1].getSortValues();

// Next page: reuse the same query and sort, adding searchAfter
sourceBuilder.searchAfter(lastSortValues);
request.source(sourceBuilder);
SearchResponse page2Response = client.search(request, RequestOptions.DEFAULT);

Advantages

Suitable for massive ES datasets; avoids the performance penalty of large from values.

Strong compatibility: also resolves non‑unique sorting issues.

Disadvantages

Does not support arbitrary page jumps; only sequential navigation.

Applicable Scenarios

Full export of large ES result sets (e.g., one month of logs, bulk Excel export).

All scenarios listed for Solution 1.

Conclusion

The core issue is that offset‑based pagination is unstable because offsets change with inserts/deletes and sorting may be non‑unique. The remedy is to use a data‑based anchor (cursor) combined with a unique sorting key.

Choose a solution based on whether page jumps are needed, on data volume, and on update frequency: B-side (admin) pages with small data volumes can use LIMIT plus a fixed timestamp window; C-side large-volume infinite scroll should use cursor pagination; Elasticsearch bulk export should use search_after.

Standardizing these techniques into engineering guidelines (requirement → implementation → testing → monitoring) turns individual experience into team standards, ensuring pagination stability, better user experience, and protecting business revenue.

About the author: Zhang Congcong, Java Development Engineer at XianKeHui.
Tags: backend, performance, SQL, Elasticsearch, pagination, cursor