Understanding Pagination: Traditional vs. Infinite Scrolling and How to Implement Them
This article explains the differences between traditional page-number pagination and infinite-scroll (streaming) pagination, compares their trade-offs, and walks through front-end and back-end implementations, common pitfalls, and optimizations such as snapshot caching, cursor-based paging, and client-side deduplication.
Pagination Types
Pagination splits a long list into discrete pages. Two common models are:
Traditional pagination
Navigation by explicit page numbers.
"Previous"/"Next" buttons.
Direct jump to any page.
Typical on desktop search results (e.g., Google, Baidu, JD).
Rare on mobile, where small screens leave little room for page-number tap targets.
Streaming (infinite) pagination
New items are loaded by scrolling, pull‑up, or click.
No page numbers or previous/next controls.
Cannot jump to a specific page.
Used on both desktop and mobile (e.g., JD home page, Tencent News).
Both models have trade‑offs; see "Infinite Scrolling vs. Pagination" for a deeper comparison.
Implementation Approaches
Streaming pagination can be realized either on the front end or the back end, depending on data volume and performance requirements.
Front‑end pagination
The client requests the entire data set in a single call, computes the total number of pages locally, and then slices the array for each load event (scroll or click).
Example JD API (returns 100 items):
diviner.jd.com/diviner?p=610009&callback=jsonpCallbackMoreGood&lid=1&lim=100&ec=utf-8
lim / limit : number of items returned (set by the client or defaulted by the server).
The client splits the 100‑item array into two batches for incremental rendering.
Corresponding MongoDB query used by the back end:
Model.find().limit(lim)
Front-end pagination is suitable when the total data set is small.
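The front-end approach above can be sketched as follows. This is a minimal illustration, not JD's actual code: `paginateLocally` is a hypothetical helper, and the 100-item array and batch size of 50 mirror the two-batch split described above.

```javascript
// Front-end pagination: fetch everything once, then slice locally.
// `allItems` stands in for the 100-item response from the single API call.
function paginateLocally(allItems, pageSize) {
  const totalPages = Math.ceil(allItems.length / pageSize);
  let current = 0;
  return {
    totalPages,
    // Called on each scroll/click event; returns [] when exhausted.
    nextBatch() {
      if (current >= totalPages) return [];
      const start = current * pageSize;
      current += 1;
      return allItems.slice(start, start + pageSize);
    },
  };
}

// 100 items rendered in two batches of 50, as in the JD example.
const pager = paginateLocally(Array.from({ length: 100 }, (_, i) => i + 1), 50);
const first = pager.nextBatch();  // items 1–50
const second = pager.nextBatch(); // items 51–100
```

Because the whole data set lives in memory, every "page" is just an array slice; the server is hit exactly once.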
Back‑end pagination
The client sends a page number; the server returns only that slice. An empty array signals the last page.
Example JD API:
https://ai.jd.com/index_new.php?app=Discovergoods&action=getDiscZdmGoodsList&callback=listCallback&page=1
page : current page number (provided by client).
pageSize / limit : items per page (provided by client or default).
Typical non‑empty response contains an array of items; an empty array means no more data.
MongoDB query for offset‑based pagination:
const offset = (page - 1) * pageSize;
Model.find().skip(offset).limit(pageSize);
Back-end pagination is appropriate for large data sets with many pages.
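The offset-based flow can be sketched without a database by treating an in-memory array as the collection; `getPageByOffset` is a hypothetical helper that mirrors what `skip`/`limit` do server-side.

```javascript
// Offset-based back-end pagination, simulated over an in-memory collection.
// In MongoDB this corresponds to Model.find().skip(offset).limit(pageSize).
function getPageByOffset(collection, page, pageSize) {
  const offset = (page - 1) * pageSize;
  // An empty result array signals the last page to the client.
  return collection.slice(offset, offset + pageSize);
}

const db = Array.from({ length: 25 }, (_, i) => ({ id: i + 1 }));
const page1 = getPageByOffset(db, 1, 10); // ids 1–10
const page3 = getPageByOffset(db, 3, 10); // ids 21–25 (partial last page)
const page4 = getPageByOffset(db, 4, 10); // [] — "no more data"
```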
Back‑end Pagination Issues and Optimizations
Common Problems
Data loss
When records are deleted between page requests, offset-based pagination can skip items. Example: with items ordered newest-first, page 1 returns items 20–11; if item 17 is then deleted, the offset for page 2 is applied to the shrunken list and points past item 10, so page 2 returns items 9–1 and item 10 is never shown.
Data duplication
When new records are inserted between page requests, the offset shifts forward, causing overlap. Example: after item 21 is inserted at the top, page 2 returns items 11–2, so item 11 (already shown on page 1) appears again.
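Both failure modes can be reproduced with a few lines; the newest-first list and the `pageOf` helper below are illustrative stand-ins for the examples above.

```javascript
// Demonstrates offset drift when the collection changes between requests.
// Items are ordered newest-first, matching the "items 20..11" example.
function pageOf(list, page, pageSize) {
  const offset = (page - 1) * pageSize;
  return list.slice(offset, offset + pageSize);
}

const items = Array.from({ length: 20 }, (_, i) => 20 - i); // [20, 19, ..., 1]
const firstPage = pageOf(items, 1, 10); // [20..11]

// Data loss: item 17 is deleted before page 2 is requested.
const afterDelete = items.filter((id) => id !== 17);
const lostPage = pageOf(afterDelete, 2, 10); // [9..1] — item 10 is skipped

// Data duplication: item 21 is inserted before page 2 is requested.
const afterInsert = [21, ...items];
const dupPage = pageOf(afterInsert, 2, 10); // [11..2] — item 11 repeats
```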
Optimization Strategies
1. Cache‑based pagination
Introduce a timestamp parameter that identifies a snapshot of data.
First request: timestamp=0 → server creates a cache (e.g., data_1498705088000) and returns the timestamp.
Subsequent requests: send the same timestamp to fetch consistent pages from the cache.
page : current page number.
pageSize : items per page.
timestamp : cache identifier generated by the server.
Server logic:
If timestamp is 0, generate a new cache and return data.
If timestamp is non‑zero but the cache is missing, ask the client to refresh.
If the cache exists, serve data from it.
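The server logic above can be sketched as follows. This is an assumption-laden illustration: `getSnapshotPage`, the in-memory `caches` map, and the use of `Date.now()` as the snapshot key are all hypothetical stand-ins for a real cache such as Redis with expiry.

```javascript
// Cache-based (snapshot) pagination sketch. On the first request the server
// freezes a copy of the data under a timestamp key; later pages read from
// that frozen snapshot, so inserts/deletes cannot shift page boundaries.
const caches = new Map(); // e.g. key 1498705088000 -> snapshot array

function pageOf(list, page, pageSize) {
  return list.slice((page - 1) * pageSize, page * pageSize);
}

function getSnapshotPage(liveData, { page, pageSize, timestamp }) {
  if (timestamp === 0) {
    // First request: create a snapshot and hand its id back to the client.
    const ts = Date.now();
    caches.set(ts, [...liveData]);
    return { timestamp: ts, items: pageOf(caches.get(ts), page, pageSize) };
  }
  const snapshot = caches.get(timestamp);
  if (!snapshot) {
    // Cache expired or missing: ask the client to start over.
    return { refresh: true };
  }
  return { timestamp, items: pageOf(snapshot, page, pageSize) };
}

const live = [5, 4, 3, 2, 1];
const r1 = getSnapshotPage(live, { page: 1, pageSize: 2, timestamp: 0 });
live.unshift(6); // a new item arrives between requests
const r2 = getSnapshotPage(live, { page: 2, pageSize: 2, timestamp: r1.timestamp });
// r2 still reads from the snapshot, so nothing is duplicated or lost.
```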
2. Cursor‑based pagination
The client records the ID of the last item on the current page and requests the next page starting after that ID.
cursor : ID of the last item returned.
pageSize : items per page.
MongoDB query:
Model.find({id: {$gt: cursor}}).limit(pageSize);
Advantages:
Avoids missing or duplicate records.
No offset calculation; performance remains stable.
Disadvantages:
Only works when results are ordered by a monotonic key such as creation time or an auto-increment ID; arbitrary sort orders and jumping to an arbitrary page are not supported.
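Cursor paging can be sketched over an in-memory, id-ordered collection; `getPageAfterCursor` is a hypothetical helper mirroring the `$gt` query above.

```javascript
// Cursor-based pagination sketch. In MongoDB this corresponds to
// Model.find({id: {$gt: cursor}}).limit(pageSize).
function getPageAfterCursor(collection, cursor, pageSize) {
  return collection
    .filter((item) => item.id > cursor) // everything strictly after the cursor
    .slice(0, pageSize);
}

const rows = Array.from({ length: 7 }, (_, i) => ({ id: i + 1 }));
const p1 = getPageAfterCursor(rows, 0, 3);                    // ids 1, 2, 3
const p2 = getPageAfterCursor(rows, p1[p1.length - 1].id, 3); // ids 4, 5, 6
```

Because each page is anchored on the last id rather than a positional offset, deleting or inserting rows before the cursor does not shift the next page.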
3. One‑time ID list distribution
Before pagination begins, the server returns the complete list of item IDs. Subsequent page requests include only the subset of IDs needed for that page.
Initial request (example from QQ News):
http://xw.qq.com/service/api/proxy?key=Xw@2017Mmd&charset=GBK&url=http://openapi.inews.qq.com/getQQNewsIndexAndItems?chlid=news_news_top&refer=mobilewwwqqcom&otype=jsonp&t=1498706343475
Detail request using the ID list:
http://xw.qq.com/service/api/proxy?key=Xw@2017Mmd&charset=GBK&url=http://openapi.inews.qq.com/getQQNewsNormalContent?ids=20170604A063AG00,20170604A05SKQ00,...&refer=mobilewwwqqcom&otype=jsonp&t=1496603487427
Suitable when the total ID list is modest (hundreds of items).
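The client side of this scheme can be sketched as follows; `idsForPage`, `loadPage`, and `fetchDetails` are hypothetical names, with `fetchDetails` standing in for the getQQNewsNormalContent call above.

```javascript
// One-time ID list distribution: the server first returns every id, then the
// client requests details only for the ids on the current page.
function idsForPage(allIds, page, pageSize) {
  return allIds.slice((page - 1) * pageSize, page * pageSize);
}

async function loadPage(allIds, page, pageSize, fetchDetails) {
  const ids = idsForPage(allIds, page, pageSize);
  if (ids.length === 0) return []; // past the last page
  // e.g. GET ...getQQNewsNormalContent?ids=<ids.join(',')>
  return fetchDetails(ids);
}
```

Since the id list is fixed at the start of the session, later inserts or deletes on the server cannot shift page boundaries, at the cost of one extra round trip up front.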
4. Client‑side deduplication
Maintain a set of IDs already loaded.
After each fetch, filter out duplicates before rendering.
If many duplicates are removed, request the next page.
Pros:
Prevents duplicate entries without server changes.
Cons:
Effective only when new items are added infrequently.
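The dedup-and-render loop described above amounts to keeping a `Set` of seen IDs; `makeDeduper` below is an illustrative helper, not code from the source.

```javascript
// Client-side deduplication sketch: remember every id already rendered and
// drop duplicates from each new batch before rendering.
function makeDeduper() {
  const seen = new Set();
  return function dedupe(batch) {
    const fresh = batch.filter((item) => !seen.has(item.id));
    fresh.forEach((item) => seen.add(item.id));
    return fresh;
  };
}

const dedupe = makeDeduper();
const batch1 = dedupe([{ id: 3 }, { id: 2 }]); // both kept
const batch2 = dedupe([{ id: 2 }, { id: 1 }]); // id 2 dropped
// If most of a batch is dropped as duplicates, fetch the next page so the
// screen still fills with new content.
```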
References
"浅谈APP流式分页服务端设计"
"浅谈单页应用中前端分页的实现方案"
"APP后端分页设计"
"Infinite Scrolling vs. Pagination"
"瀑布流下拉加载更多导致数据重复怎么办"
Aotu Lab
Aotu Lab, founded in October 2015, is a front-end engineering team serving multi-platform products. The articles in this public account are intended to share and discuss technology, reflecting only the personal views of Aotu Lab members and not the official stance of JD.com Technology.
