
How Does Taro’s Performance Stack Up Against Native JD and WeChat Mini‑Programs?

This article evaluates Taro’s conversion to JD mini‑programs by comparing package size, long‑list rendering benchmarks, and development experience against native JD and WeChat mini‑programs, revealing where Taro excels, where it lags, and the optimizations it provides.

Aotu Lab
Performance Comparison

Taro supports conversion to JD mini‑programs from version 1.3.20. Two key performance aspects were measured: the size of a blank Taro project and rendering performance in long‑list scenarios.

Taro Project Package Size

Mini‑program platforms limit the main package size (JD: 5 MB, WeChat: 2 MB). After compression, Taro’s runtime is only 84 KB, leaving ample space for business logic.

Long‑List Rendering Benchmark

The benchmark follows the js‑framework‑benchmark methodology and measures six operations:

Initialization – render 40 items on page entry.

Creation – create 40 items after onLoad.

Partial update – update every 10th item in a list of 400.

Swap – exchange two items in a 400‑item list.

Select – click an item to change its text color.

Add – insert 20 new items into an existing 40‑item list.

Timing points differ between Taro (the setState callback) and native mini‑programs (the setData callback).
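
The list operations above can be sketched in plain JavaScript. This is an illustrative reconstruction, not the actual harness (which lives in the taro-benchmark repository); the row shape and helper names are assumptions.

```javascript
// Build n rows of benchmark data (shape is illustrative).
function buildRows(n) {
  return Array.from({ length: n }, (_, i) => ({ id: i + 1, label: `row ${i + 1}` }));
}

// Partial update: rewrite the label of every 10th row.
function partialUpdate(rows) {
  for (let i = 0; i < rows.length; i += 10) {
    rows[i] = { ...rows[i], label: rows[i].label + ' !!!' };
  }
  return rows;
}

// Swap: exchange two rows in place (indices are illustrative).
function swapRows(rows, a, b) {
  [rows[a], rows[b]] = [rows[b], rows[a]];
  return rows;
}
```

In the real benchmark, each operation is timed from the state update to the corresponding setState/setData callback firing.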

Benchmark Repository

GitHub: https://github.com/NervJS/taro-benchmark

Test Environment

Taro version 1.3.21; test device: Meizu Note. Each result is the average of 10 measurements after discarding the highest and lowest values.

Results – JD Mini‑Program

Initialization: Taro 150 ms, native 123 ms

Creation: Taro 87 ms, native 85 ms

Partial update: Taro 125 ms, native 235 ms

Swap: Taro 140 ms, native 213 ms

Select: Taro 131 ms, native 155 ms

Results – WeChat Mini‑Program

Initialization: Taro 1155 ms, native 1223 ms

Creation: Taro 500 ms, native 408 ms

Partial update: Taro 167 ms, native 307 ms

Swap: Taro 252 ms, native 309 ms

Select: Taro 193 ms, native 178 ms

Interpretation

Creation: Taro processes data before rendering, making it slightly slower than native.

Initialization: Includes page‑construction time, so Taro's initialization time equals page construction plus creation.

Select: Taro's thin wrapper around the native callback yields comparable speed.

Partial update, swap, add: Taro outperforms native because it diffs the new data against current state, shrinking the payload sent to setData. Larger data sets amplify this benefit.
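
The idea behind that diff can be sketched as follows. This is a simplified illustration, not Taro's actual implementation: only changed leaf paths are emitted, so setData receives a small payload instead of the whole list. (The real diff also handles deletions, and setData addresses array elements with bracket syntax like `list[1].label` rather than the dot paths used here.)

```javascript
// Compare next state against current state and collect only changed paths.
function diffForSetData(current, next, prefix = '') {
  const payload = {};
  for (const key of Object.keys(next)) {
    const path = prefix ? `${prefix}.${key}` : key;
    const a = current[key];
    const b = next[key];
    if (a === b) continue;                 // reference-equal: nothing to send
    if (a && b && typeof a === 'object' && typeof b === 'object') {
      Object.assign(payload, diffForSetData(a, b, path));  // recurse into objects/arrays
    } else {
      payload[path] = b;                   // changed leaf: include in the payload
    }
  }
  return payload;
}
```

For a 400‑item list where one label changes, the payload contains a single path instead of 400 serialized rows, which is why the gap over native widens as the data set grows.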

Performance Optimizations in Taro

setData Diff

Taro’s setState follows React’s async model; multiple calls within one event loop are coalesced into a single setData, avoiding redundant updates.
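The coalescing behavior can be sketched like this. The names are illustrative, not Taro internals: every setState in the same tick merges into one pending object, and a single setData flushes it on the next microtask.

```javascript
// Batch multiple setState calls in one event-loop turn into a single setData.
function createBatcher(setData) {
  let pending = null;
  return function setState(partial) {
    if (pending === null) {
      pending = {};
      Promise.resolve().then(() => {   // flush once per microtask turn
        const payload = pending;
        pending = null;
        setData(payload);
      });
    }
    Object.assign(pending, partial);   // later calls merge into the same batch
  };
}
```

Three setState calls in a row therefore trigger one setData with the merged payload, rather than three bridge crossings.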

Preload Hook

The componentWillPreload hook runs immediately after a navigation request, allowing data fetching before the target page’s onLoad. This can save 300–400 ms of latency.
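The saving comes from overlapping the data request with page construction instead of running them back to back. A rough latency model (the numbers below are assumptions for illustration, chosen to match the cited range):

```javascript
// Without preload, the data request starts only after navigation completes;
// with componentWillPreload, it starts when navigation is requested, so the
// two overlap and only the longer of the two is on the critical path.
const NAV_MS = 350;    // page construction / route switch (assumed)
const FETCH_MS = 400;  // data request round trip (assumed)

const withoutPreload = NAV_MS + FETCH_MS;        // sequential: 750 ms
const withPreload = Math.max(NAV_MS, FETCH_MS);  // overlapped: 400 ms
const saved = withoutPreload - withPreload;      // ≈ the 300–400 ms cited
```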

PureComponent & shouldComponentUpdate

Class components can extend Taro.PureComponent for shallow prop/state comparison, or implement shouldComponentUpdate manually to skip unnecessary renders.

Taro.memo

Function components can use Taro.memo to achieve the same memoization effect.
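
Both PureComponent and Taro.memo rest on a shallow comparison: a re-render is skipped when every top-level prop is reference-equal. A minimal sketch of that check (simplified; React's own shallowEqual is similar but not identical):

```javascript
// Shallow equality: same top-level keys and reference-equal values.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(k => Object.is(a[k], b[k]));
}
```

This is also why passing a freshly created object or inline function as a prop defeats the optimization: the new reference fails the shallow check on every render.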

Development Experience Comparison

Syntax

Native JD and WeChat mini‑programs use a class‑based MVVM syntax with a CSS subset (rpx units). Taro adopts React‑style JSX, enabling developers with React background to start immediately. Taro also auto‑converts px to rpx and supports configurable CSS preprocessors.

Project Structure

Native JD pages consist of four files (.js, .jxml, .jxss, .json; WeChat's equivalents are .wxml and .wxss). Taro pages and components are a single JavaScript/JSX file plus an optional style file, simplifying management.
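
For illustration, a single page in each model might be laid out like this (file names are hypothetical):

```
# Native JD mini-program page: four files
pages/index/
├── index.js     # logic
├── index.jxml   # template
├── index.jxss   # styles
└── index.json   # page config

# Taro page: one component file plus an optional style file
src/pages/index/
├── index.jsx    # logic + JSX template in one file
└── index.scss   # optional styles (configurable preprocessor)
```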

Ecosystem

WeChat mini‑programs support plugins and npm packages; JD mini‑programs lack these features and have a sparse community. Taro can freely import npm modules and leverages the broader React ecosystem (e.g., react‑redux, mobx‑react).

Tooling

JD native development does not support TypeScript and offers limited IDE assistance. Taro provides full TypeScript support, intelligent code hints, and real‑time checks, boosting productivity.

Conclusion

Taro is not universally faster than native code—each scenario can be optimized natively—but it delivers a good balance of development efficiency, rich ecosystem, and acceptable performance. It is especially valuable when project speed and maintainability outweigh the need for absolute raw performance.

References

js‑framework‑benchmark: https://github.com/krausest/js-framework-benchmark

Taro benchmark repository: https://github.com/NervJS/taro-benchmark

Diff optimization guide: https://nervjs.github.io/taro/docs/optimized-practice.html#%E5%B0%8F%E7%A8%8B%E5%BA%8F%E6%95%B0%E6%8D%AE-diff

Taro forum: https://taro-club.jd.com/

Written by

Aotu Lab

Aotu Lab, founded in October 2015, is a front-end engineering team serving multi-platform products. The articles in this public account are intended to share and discuss technology, reflecting only the personal views of Aotu Lab members and not the official stance of JD.com Technology.
