Optimizing Build Package Compression for Faster Deployment
By integrating SIMD‑accelerated ISA‑L and parallel Zstandard (pzstd) into Meituan’s build‑and‑deployment pipeline, compression time for typical 200 MiB–1 GiB packages dropped from over a minute to about one second: a 90%‑plus speedup that preserves gzip compatibility while sharply reducing overall build latency.
Compression is critical for data transfer and storage; improving its efficiency can save time and reduce storage costs. This article presents an optimization of compression algorithms used in a build‑and‑deployment platform to accelerate R&D and delivery.
Background: The typical build pipeline (sync code → compile → package → upload) and deployment steps (download → unpack → restart) include a "pack" stage that can become a bottleneck. On Meituan’s internal Plus platform the pack step alone took 1 min 23 s. Large machine‑learning/NLP datasets and Java services often produce packages of several hundred megabytes to a few gigabytes, pushing median compression times above 13 s for Java and Node.js services.
The goal is to speed up the pack compression step to improve overall build time.
Scenario data: Analysis of 2020 build packages showed compressed sizes mostly under 200 MiB (uncompressed sizes up to 516 MiB). 99% of packages were under 1 GiB, so a ~1 GiB package was chosen for the benchmarks.
Compression algorithms compared:
gzip – based on DEFLATE (LZ77 + Huffman).
Brotli – LZ77 + Huffman + second‑order context modeling.
Zstd (Zstandard) – fast, high‑ratio, uses Finite State Entropy.
LZ4 – focuses on ultra‑fast decompression.
Pigz – parallel implementation of gzip.
ISA‑L – Intel’s SIMD‑accelerated library optimizing CRC, DEFLATE and Huffman.
Pzstd – parallel C++11 implementation of Zstandard.
Typical commands are tar -czf for compression and tar -xf for extraction.
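The parallel and accelerated tools above can be dropped into that same tar workflow via GNU tar's -I (--use-compress-program) option. A minimal sketch: it prefers ISA‑L's igzip, then pigz, and falls back to stock gzip so the commands run anywhere; the archive stays a plain .tar.gz either way.

```shell
#!/bin/sh
# Pick an accelerated gzip-compatible compressor if one is installed;
# fall back to stock gzip so this runs on any machine.
if command -v igzip >/dev/null 2>&1; then
  GZ=igzip   # ISA-L's SIMD-accelerated gzip
elif command -v pigz >/dev/null 2>&1; then
  GZ=pigz    # parallel gzip
else
  GZ=gzip
fi

mkdir -p demo && echo "payload" > demo/app.txt

# Same workflow as tar -czf, but tar pipes through $GZ instead of gzip.
tar -I "$GZ" -cf demo.tar.gz demo

# The archive is still plain gzip, so stock tooling can list/extract it.
tar -tf demo.tar.gz
```

Because the output is ordinary gzip, nothing changes on the extraction side: tar -xf works unmodified on target machines.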
Benchmark results (median values from multiple runs):
ISA‑L achieved ~5× speedup over zlib.
Zstd provided a good balance of speed and ratio, improving compression time by 88.7 % and decompression by 70.2 %.
Pigz reduced compression time but lagged behind ISA‑L.
Pzstd was the fastest, cutting compression time by up to 98.5 %.
LZ4 offered the fastest decompression but lower compression ratios.
Overall, the fastest solutions (Pzstd, ISA‑L, Pigz) reduced pack time from over a minute to roughly 1 s, saving up to 98.5 % of compression time.
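The comparisons above can be reproduced on your own packages with a small timing harness. A sketch, assuming GNU tar and GNU date; compressors that are not installed are simply skipped, and the 1 MiB random payload should be replaced with a real build package for meaningful numbers.

```shell
#!/bin/sh
# Rough timing harness for gzip-interface compressors.
# Only gzip is assumed installed; others are skipped when absent.
mkdir -p pkg
head -c 1048576 /dev/urandom > pkg/data.bin   # 1 MiB placeholder payload

for prog in gzip pigz igzip zstd; do
  command -v "$prog" >/dev/null 2>&1 || { echo "$prog: not installed, skipped"; continue; }
  start=$(date +%s%N)                          # nanoseconds since epoch
  tar -I "$prog" -cf "pkg.tar.$prog" pkg
  end=$(date +%s%N)
  size=$(wc -c < "pkg.tar.$prog")
  echo "$prog: $(( (end - start) / 1000000 )) ms, $size bytes"
done
```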
Evaluation criteria:
Maximum overall time reduction.
Maintain compatibility with existing gzip‑based workflows.
Minimize impact on target machines (no extra runtime dependencies).
Align with Meituan’s physical‑machine environment.
Considering stability, compatibility, and speed, the first phase adopts ISA‑L acceleration. Pzstd may be introduced later after further validation.
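The compatibility criterion that motivated choosing ISA‑L first can be sanity‑checked with a round trip: compress with the accelerated tool, decompress with stock gzip. A minimal sketch; it only assumes igzip may be on PATH and falls back to plain gzip otherwise, so the check itself runs anywhere.

```shell
#!/bin/sh
# Compatibility check: accelerated compression, stock-gzip decompression,
# matching the "no extra runtime dependencies on targets" requirement.
GZ=gzip
command -v igzip >/dev/null 2>&1 && GZ=igzip

echo "hello from the build pipeline" > payload.txt
"$GZ" -c payload.txt > payload.txt.gz     # compress (igzip or gzip)
gzip -dc payload.txt.gz > roundtrip.txt   # decompress with stock gzip
cmp -s payload.txt roundtrip.txt && echo "gzip-compatible roundtrip OK"
```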
Optimization impact : Average compression time decreased by ~90 %. Packages that previously took 27–72 s now compress within ~10 s, even when multiple tasks run concurrently. The overall build pipeline sees a substantial reduction in latency.
Practical notes:
Keep cluster environments consistent to reap SIMD benefits.
When building RPMs on CentOS, ensure the compilation environment matches the test setup.
Java JAR/WAR files are already DEFLATE‑compressed internally, so recompressing them still yields speed gains but only limited additional ratio.
Sample time output from a benchmark pack run:
real 0m1.372s
user 0m5.512s
sys  0m1.791s
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Meituan Technology Team
Over 10,000 engineers powering China’s leading lifestyle‑services e‑commerce platform, supporting hundreds of millions of consumers and millions of merchants across 2,000+ industries. This is the public channel for the tech teams behind Meituan, Dianping, Meituan Waimai, Meituan Select, and related services.
