How Texture Size, Format, and Compression Affect WebGL Performance

This article examines how texture dimensions, data formats, and compression techniques influence creation time, memory usage, and rendering performance in WebGL, offering practical guidelines for optimizing texture handling in web graphics applications.

Baidu Maps Tech Team

Creation Time of Textures

The most time‑consuming part of texture creation is the call to gl.texImage2D or gl.texSubImage2D, and the cost grows roughly linearly with the number of pixels.

Measurements on a Windows laptop with integrated graphics and a MacBook Pro with a discrete GPU show that larger textures take proportionally longer to create.

Mobile browsers exhibit the same trend; as texture size increases, the time gap widens.

In practice, large textures should be created ahead of time or during idle periods, not inside animation loops that must stay under 16 ms per frame.
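One way to keep uploads out of the frame budget is to queue texture work and drain the queue with requestIdleCallback, which only runs jobs while the browser reports spare time. A minimal sketch, with hypothetical helper names (queueTextureUpload, drainQueue); each job is assumed to wrap a single gl.texImage2D call:

```javascript
// Pending texture-upload jobs; each job is a function that performs
// one upload (e.g., a gl.texImage2D call) when invoked.
var uploadQueue = [];

function queueTextureUpload(job) {
  uploadQueue.push(job);
}

// Run queued uploads only while the browser reports idle time
// remaining, so large uploads never eat into a 16 ms animation frame.
function drainQueue(deadline) {
  while (uploadQueue.length > 0 && deadline.timeRemaining() > 0) {
    var job = uploadQueue.shift();
    job();
  }
  if (uploadQueue.length > 0) {
    requestIdleCallback(drainQueue);
  }
}

// Kick off draining where idle callbacks are available (browsers).
if (typeof requestIdleCallback === 'function') {
  requestIdleCallback(drainQueue);
}
```

Textures queued this way are created opportunistically between frames instead of stalling the render loop.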

Impact of Texture Data Format on Execution Time

WebGL defines several overloads of texImage2D; they differ in the source argument, which can be a DOM element (e.g., an Image, canvas, or video) or raw binary pixel data supplied as a typed array.

Using a DOM element requires the browser to decode the image, adding noticeable latency, whereas binary data is faster.

Size        Image source (ms)   Binary source (ms)
128×128           0.33                0.09
256×256           0.66                0.25
512×512           3.52                2.03
1024×1024        11.6                 6.43
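The two upload paths can be sketched as follows. The function names are illustrative; both assume a valid WebGL 1.0 context with a texture already bound, and the byte-length helper just makes the per-pixel cost explicit:

```javascript
// Upload path 1: a DOM image. The browser must hold a decoded bitmap
// and convert it during the call, which adds latency.
function uploadFromImage(gl, image) {
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
}

// Upload path 2: raw RGBA bytes in a Uint8Array. No decode step, so
// the call completes faster for the same pixel count.
function uploadFromPixels(gl, width, height, pixels) {
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
}

// Helper: how many bytes a raw RGBA upload of a given size carries.
function rgbaByteLength(width, height) {
  return width * height * 4; // 4 bytes per pixel (R, G, B, A)
}
```

The binary path also makes the cost predictable: a 1024×1024 RGBA upload always moves exactly 4 MiB.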

Cost of Texture Switching

Frequent texture switches during a frame also hurt performance; grouping objects that share the same texture reduces switches.
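Grouping can be as simple as sorting the draw list by texture before issuing draw calls. A sketch with illustrative helpers (sortByTexture, countBinds); countBinds tallies how many gl.bindTexture calls a given draw order would require:

```javascript
// Sort draw items so that all items sharing a texture are adjacent.
function sortByTexture(drawList) {
  return drawList.slice().sort(function (a, b) {
    return a.textureId - b.textureId;
  });
}

// Count the texture binds an ordered draw list would issue: one bind
// each time the texture differs from the previously bound one.
function countBinds(drawList) {
  var binds = 0;
  var current = null;
  for (var i = 0; i < drawList.length; i++) {
    if (drawList[i].textureId !== current) {
      binds++;
      current = drawList[i].textureId;
    }
  }
  return binds;
}
```

For an interleaved list like A, B, A, B this cuts four binds down to two; the savings grow with the number of objects per texture.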

When many textures are needed, they can be merged into a texture atlas, similar to CSS sprites.

In a map rendering scenario, 20‑30 text tiles were combined into a single 4096×2048 (or 4096×4096 on Retina) atlas, cutting switch count to about a quarter.

Be aware of color bleeding at atlas edges; a small UV offset can mitigate but not fully eliminate the artifact.
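A common mitigation is to inset each sprite's UV rectangle by half a texel so bilinear filtering never samples the neighboring tile. A sketch (atlasUV is a hypothetical helper; coordinates follow WebGL's 0–1 UV convention, with sprite position and size given in pixels):

```javascript
// Compute the UV rectangle of a sprite inside an atlas, inset by half
// a texel on every side to reduce color bleeding from neighbors.
function atlasUV(x, y, w, h, atlasW, atlasH) {
  var halfU = 0.5 / atlasW;
  var halfV = 0.5 / atlasH;
  return {
    u0: x / atlasW + halfU,
    v0: y / atlasH + halfV,
    u1: (x + w) / atlasW - halfU,
    v1: (y + h) / atlasH - halfV
  };
}
```

As the article notes, this shrinks but does not fully remove the artifact; padding each sprite with a border of duplicated edge pixels helps further.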

Memory Consumption of Textures

Different upload paths use different amounts of memory. Uploading from an img element keeps three copies alive: the original encoded file, a decoded bitmap in the browser's image cache, and the texture in GPU memory.

Binary pixel data occupies one copy in JavaScript (the typed array) and one in GPU memory; the footprint is the same for RGBA and RGB, since drivers typically pad RGB texels to four bytes internally.

Compressed textures store only the compressed data, offering the best memory efficiency.
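The savings are easy to quantify. Uncompressed RGBA8 stores 4 bytes per pixel, while DXT1 (the smallest S3TC variant) stores 8 bytes per 4×4 block, i.e. 0.5 bytes per pixel. A sketch:

```javascript
// Bytes for an uncompressed RGBA8 texture: 4 bytes per pixel.
function rgba8Bytes(width, height) {
  return width * height * 4;
}

// Bytes for a DXT1-compressed texture: each 4x4 block is 8 bytes,
// and dimensions are rounded up to whole blocks.
function dxt1Bytes(width, height) {
  var blocksX = Math.ceil(width / 4);
  var blocksY = Math.ceil(height / 4);
  return blocksX * blocksY * 8;
}
```

A 1024×1024 texture is 4 MiB as RGBA8 but only 512 KiB as DXT1, an 8× reduction, and the compressed form stays compressed in GPU memory.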

Using Compressed Textures in WebGL

WebGL 1.0 supports compressed textures via extensions. The following call lists the extensions available in the current context:

var availableExtensions = gl.getSupportedExtensions();

Common compressed formats include S3TC/DXTn/BCn (desktop), PVRTC/PVRTC2 (iOS), ETC1 (OpenGL ES 2.0) and ETC2 (OpenGL ES 3.0), ASTC (newer GPUs), and ATC (Adreno).

GPU support varies; for example, AMD and Intel support S3TC, Apple supports PVRTC, and Adreno supports ETC1 and ATC.
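In practice you probe the extensions in preference order and fall back to uncompressed uploads when none is available. A sketch (pickCompressedFormat is an illustrative name; the extension strings are the registered WebGL identifiers):

```javascript
// Try compressed-texture extensions in preference order and return the
// first one the context supports, or null if none is available.
function pickCompressedFormat(gl) {
  var candidates = [
    'WEBGL_compressed_texture_s3tc',   // desktop (DXTn/BCn)
    'WEBGL_compressed_texture_etc',    // ETC2
    'WEBGL_compressed_texture_etc1',   // older Android
    'WEBGL_compressed_texture_pvrtc',  // iOS
    'WEBGL_compressed_texture_astc'    // newer GPUs
  ];
  for (var i = 0; i < candidates.length; i++) {
    var ext = gl.getExtension(candidates[i]);
    if (ext) return { name: candidates[i], ext: ext };
  }
  return null;
}

// Upload pre-compressed data; `format` comes from the extension
// object, e.g. ext.COMPRESSED_RGBA_S3TC_DXT5_EXT for S3TC.
function uploadCompressed(gl, format, width, height, data) {
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, format, width, height, 0, data);
}
```

Because support differs per GPU, a real pipeline typically serves each texture pre-encoded in several formats and downloads only the one the detected extension can consume.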

Compressed textures reduce memory usage and creation time but increase network transfer size; they are beneficial when many textures are used, such as in games.

Key Takeaways

Prefer the smallest texture size that meets visual quality requirements.

Avoid creating large textures inside the render loop; pre‑load or create during idle time.

Minimize texture switches by merging textures or using a texture atlas.

Compressed textures save memory and creation time but may cost more bandwidth.
