Image Optimization for ISV Pages: Offline Compression, WebP Conversion, and Batch Processing
This article details a systematic approach to reducing image sizes for ISV‑generated pages, covering offline compression, WebP conversion, data structure design, batch processing pipelines, monitoring, and fallback strategies, while providing code examples and performance comparisons.
The document begins with terminology definitions (ISV, imageX, TOS, DevBox, B‑end, C‑end) and explains the motivation: as ISV page customizations grew, image performance became a bottleneck, prompting a dedicated image‑optimization project.
The optimization comparison shows before-and-after screenshots in which compressed images achieve size reductions of up to 94% (e.g., PNG → WebP). Converting large PNGs to JPG or WebP yields significant savings, especially when combined with offline compression.
Why large images exist – product constraints prevent strict upload size limits; different roles (operations, designers) handle images inconsistently, leading to oversized assets.
Why not use WebP directly – compatibility requires runtime SDK checks; legacy TOS storage lacks imageX support; fragmented ISV module versions hinder a universal upgrade.
Why not intercept requests on the client – per‑request interception degrades network performance and adds complexity for older client versions.
Why not replace images during B‑end page creation – existing pages would need republishing, limiting coverage for legacy assets.
Conclusion – handle existing (stock) images via database replacement (download → offline compress → upload → URL rewrite) and limit new uploads to ≤ 500 KB, later relaxing after B‑end compression is available.
Implementation details:
Data structure: a per-image mapping record, interface IImageMinyfyMappingJSON { width: number; height: number; hash: string; rawExtname: '.jpg' | '.png' | '.gif'; rawImageHash: string; rawSize: number; rawUrl: string; newExtname: '.jpg' | '.png' | '.gif'; newImageHash: string; newSize: number; newUrl: string; }, stored in TOS for low-frequency access.
Download each image using wget to a temporary filename, compute its MD5 hash, infer the real extension via file-type (uploaded files are often mislabeled), and rename the file accordingly.
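The download-and-normalize step can be sketched as follows. This is a minimal sketch, not the article's actual code: the magic-byte sniffing below is a simplified stand-in for the file-type package (covering only the formats the pipeline handles), and the function names are hypothetical.

```typescript
import { execFileSync } from 'node:child_process';
import { createHash } from 'node:crypto';
import { readFileSync, renameSync } from 'node:fs';

// Simplified stand-in for the `file-type` package: sniff the real format
// from magic bytes, since the uploaded filename's extension may be wrong.
function sniffExtname(buf: Buffer): '.jpg' | '.png' | '.gif' | null {
  if (buf.length >= 3 && buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff) return '.jpg';
  const pngMagic = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
  if (buf.length >= 8 && buf.subarray(0, 8).equals(pngMagic)) return '.png';
  if (buf.length >= 6 && ['GIF87a', 'GIF89a'].includes(buf.subarray(0, 6).toString('ascii'))) return '.gif';
  return null;
}

function md5(buf: Buffer): string {
  return createHash('md5').update(buf).digest('hex');
}

// Download to a temporary name, then rename to <md5><realExt> once the
// content hash and true format are known.
function downloadAndNormalize(url: string, tmpPath: string): string {
  execFileSync('wget', ['-q', '-O', tmpPath, url]);
  const buf = readFileSync(tmpPath);
  const ext = sniffExtname(buf) ?? '.bin';
  const finalPath = `${md5(buf)}${ext}`;
  renameSync(tmpPath, finalPath);
  return finalPath;
}
```

Naming files by content hash also makes re-runs idempotent: an already-processed image maps to the same filename.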
Compression algorithm selection: use MozJPEG for JPEGs and for PNGs without transparency; use OxiPNG for PNGs that genuinely use transparency; keep the original file whenever the compressed output turns out larger.
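The selection rules reduce to two small decisions, sketched below (function names are hypothetical; the actual encoders run as separate tools):

```typescript
type Encoder = 'mozjpeg' | 'oxipng';

// Per the article's rules: OxiPNG only for PNGs that actually use
// transparency; everything else (JPEGs, opaque PNGs) goes through MozJPEG.
function chooseEncoder(extname: string, hasAlpha: boolean): Encoder {
  if (extname === '.png' && hasAlpha) return 'oxipng';
  return 'mozjpeg';
}

// Guard against pathological inputs: if "compression" grew the file,
// keep the original bytes and skip the rewrite for this image.
function pickSmaller(rawSize: number, compressedSize: number): 'raw' | 'compressed' {
  return compressedSize < rawSize ? 'compressed' : 'raw';
}
```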
PNG transparency detection via node-canvas reading pixel alpha values.
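The core of the transparency check is a scan over the decoded RGBA bytes. In the article this runs on node-canvas output (the PNG is drawn to a canvas and read back via getImageData); the scan itself, shown here as a standalone sketch, is independent of how the pixels were decoded:

```typescript
// Decoded pixels arrive as RGBA bytes (as in canvas getImageData().data);
// any alpha byte below 255 means the PNG genuinely uses transparency.
function hasTransparency(rgba: Uint8ClampedArray): boolean {
  for (let i = 3; i < rgba.length; i += 4) {
    if (rgba[i] < 255) return true;
  }
  return false;
}
```

Fully opaque PNGs fail this check and are routed to MozJPEG, where the lossy encoder typically saves far more than lossless PNG optimization.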
JSON traversal using jsonuri.walk to replace image URLs without needing to understand business‑specific JSON structures.
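The key property of the traversal is that it needs no knowledge of the business JSON's shape: every string leaf is visited and a mapper decides whether to rewrite it. The sketch below is a hand-rolled equivalent of that idea (the article uses jsonuri.walk, whose exact callback signature isn't shown here):

```typescript
// Deep-walk arbitrary page JSON and let `map` rewrite every string leaf,
// so image URLs are replaced without understanding the schema.
function rewriteUrls(node: unknown, map: (s: string) => string): unknown {
  if (typeof node === 'string') return map(node);
  if (Array.isArray(node)) return node.map((v) => rewriteUrls(v, map));
  if (node !== null && typeof node === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(node)) out[k] = rewriteUrls(v, map);
    return out;
  }
  return node; // numbers, booleans, null pass through untouched
}
```

In the pipeline, `map` would be a lookup into the rawUrl → newUrl mapping records, returning the input unchanged for strings that aren't known image URLs.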
Upload via imageX HTTP API and write back mapping to TOS.
Database rewrite performed carefully with timestamp checks to avoid race conditions.
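The timestamp check is an optimistic-concurrency guard: persist the rewritten JSON only if nobody edited the page between read and write. A minimal in-memory sketch of the logic (names hypothetical; a real implementation would express the same condition in the UPDATE's WHERE clause):

```typescript
interface PageRow {
  id: number;
  content: string;
  updatedAt: number; // last-modified timestamp, read together with content
}

// Write back only if the row is unchanged since we read it; otherwise the
// batch job skips this page and re-queues it for a later pass.
function tryRewrite(row: PageRow, readAt: number, newContent: string): boolean {
  if (row.updatedAt !== readAt) return false; // concurrent edit detected
  row.content = newContent;
  row.updatedAt = Date.now();
  return true;
}
```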
Monitoring & safeguards include UI tools for side-by-side image comparison, SSIM similarity scoring (using ssim.js), and a staged rollout with rollback capability via saved snapshots.
Engineering architecture employs a client-server batch processing model: an HTTP /api/minify-image endpoint performs download-compress-upload, while a client script drives concurrency (using the async library), handles retries, and persists progress. Puppeteer automates UI actions for legacy workflows.
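The client side reduces to two primitives: a concurrency limiter and a retry wrapper around each request. The sketch below hand-rolls both as promise-based stand-ins for the async library's eachLimit (function names and the worker body are hypothetical; only the /api/minify-image endpoint name comes from the architecture above):

```typescript
// Run `worker` over all items with at most `limit` in flight at once,
// standing in for async.eachLimit from the `async` library.
async function eachLimit<T>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<void>,
): Promise<void> {
  let next = 0;
  async function run(): Promise<void> {
    while (next < items.length) {
      const item = items[next++]; // safe: single-threaded between awaits
      await worker(item);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
}

// Retry a flaky unit of work a few times before giving up.
async function withRetry(fn: () => Promise<void>, attempts = 3): Promise<void> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (e) {
      if (i + 1 >= attempts) throw e;
    }
  }
}
```

In use, each worker would POST one image URL to /api/minify-image inside withRetry, and the driver would persist which URLs completed so an interrupted batch can resume.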
Future work covers stricter upload size limits, query‑parameter‑enhanced image URLs, handling ultra‑wide images, reducing PNG overuse, and exploring newer formats (HEIC/AVIF) and WebAssembly‑based compression.
ByteFE
Cutting‑edge tech, article sharing, and practical insights from the ByteDance frontend team.