How Android 14’s Ultra HDR Image Works: Inside the JPEG‑R Format
This article provides a detailed technical analysis of Android 14’s Ultra HDR Image feature, explaining its JPEG‑R file structure, the five‑step encoding pipeline, decoding process, and rendering technique, while illustrating each step with diagrams and code excerpts.
What Is Ultra HDR Image?
In early October 2023, Google released Android 14, introducing Ultra HDR Image as a new way to capture and display high‑dynamic‑range photos. The format, referred to here as JPEG‑R, combines two 8‑bit JPEG images—a standard SDR primary image and a gain‑map that stores per‑pixel brightness differences—to reconstruct HDR content while remaining backward compatible with existing JPEG decoders.
File Structure
Ultra HDR Image builds on the traditional JPEG container by embedding additional metadata and a gain‑map. The primary JPEG holds the SDR image, while the gain‑map JPEG stores the HDR‑reconstruction data. Both components are listed in an XMP‑based GContainer (RDF/XML) under Container:Directory, which records each component’s offset, length, and MIME type. The gain‑map metadata (GainMapMin/Max, HDRCapacityMin/Max) is stored as log2 values, compressing the wide dynamic range it describes into a compact numeric range before rendering.
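To make the container layout concrete, a minimal GContainer directory might look like the sketch below. The structure follows the description above; the Item:Length value is an illustrative placeholder, and a real file would also carry the gain‑map metadata fields (GainMapMin/Max, HDRCapacityMin/Max) in a separate XMP packet attached to the gain‑map image.

```xml
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description
        xmlns:Container="http://ns.google.com/photos/1.0/container/"
        xmlns:Item="http://ns.google.com/photos/1.0/container/item/">
      <Container:Directory>
        <rdf:Seq>
          <rdf:li rdf:parseType="Resource">
            <!-- The SDR primary image, decodable by any JPEG viewer -->
            <Container:Item Item:Semantic="Primary" Item:Mime="image/jpeg"/>
          </rdf:li>
          <rdf:li rdf:parseType="Resource">
            <!-- The gain-map JPEG, appended after the primary image -->
            <Container:Item Item:Semantic="GainMap" Item:Mime="image/jpeg"
                            Item:Length="49218"/>
          </rdf:li>
        </rdf:Seq>
      </Container:Directory>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
```

Because the directory records lengths rather than absolute positions, a legacy decoder simply stops at the end of the primary JPEG and never sees the gain‑map.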
Encoding Process
Android 14 offers five APIs for generating Ultra HDR Images; the article focuses on API‑0. The encoding pipeline consists of five steps:
1. The camera HAL captures HDR data (P010) and applies a tone‑mapping function to produce an 8‑bit SDR YUV420 image.
2. Using both the HDR and SDR data, an uncompressed gain‑map (brightness‑difference) image is generated.
3. The gain‑map is compressed into a single‑channel grayscale JPEG.
4. The SDR image is compressed into the primary JPEG.
5. Metadata describing the gain‑map is created and appended to the primary JPEG.
The article then dives into the first two steps with code snippets: the tone‑mapping simply right‑shifts the 10‑bit P010 values by two bits to yield 8‑bit samples, and the gain‑map generation samples the YUV420 data, converts it to RGB, applies the inverse OETF to obtain linear values, computes luminance, and finally encodes the per‑pixel gain using encodeGain. The resulting grayscale JPEG is concatenated after the primary JPEG.
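These two steps can be sketched in C++ as follows. This is a minimal illustration, not the library's actual code: the boost bounds kMinContentBoost/kMaxContentBoost are illustrative constants (real values are derived from the image content), and the tone‑map helper assumes it receives the 10 significant bits of a P010 luma sample (in the actual P010 layout the sample occupies the upper bits of a 16‑bit word).

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Step 1 (sketch): tone-map a 10-bit luma value to 8-bit SDR by
// dropping the two least-significant bits, as the article describes.
inline uint8_t toneMapP010ToSdr(uint16_t luma10) {
  return static_cast<uint8_t>((luma10 & 0x3FF) >> 2);
}

// Illustrative content-boost bounds; a real encoder derives these
// from the actual HDR/SDR luminance ratios in the image.
constexpr float kMinContentBoost = 1.0f;
constexpr float kMaxContentBoost = 4.0f;

// Step 2 (sketch): encode one per-pixel gain into a grayscale byte.
// The gain is the HDR/SDR luminance ratio, clamped to the boost
// bounds, then stored as a log2 value normalized to [0, 255].
uint8_t encodeGain(float sdr_luminance, float hdr_luminance) {
  float gain = 1.0f;
  if (sdr_luminance > 0.0f) gain = hdr_luminance / sdr_luminance;
  gain = std::clamp(gain, kMinContentBoost, kMaxContentBoost);
  const float log_min = std::log2(kMinContentBoost);
  const float log_max = std::log2(kMaxContentBoost);
  const float normalized = (std::log2(gain) - log_min) / (log_max - log_min);
  return static_cast<uint8_t>(normalized * 255.0f + 0.5f);
}
```

Storing log2(gain) rather than the raw ratio is what lets an 8‑bit grayscale channel cover a wide range of brightness multipliers with roughly perceptual spacing.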
Decoding Process
Decoding reverses the encoding flow. The primary JPEG is decoded first; if the image follows the JPEG‑R spec, the embedded gain‑map is extracted via getAndroidGainmap(). Metadata from the XMP block provides gain‑map parameters, and decodeGainmap() restores the grayscale image. The gain‑map is then attached to the primary bitmap via setGainmap(). Standard JPEG decoding is used for both components.
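Numerically, recovering a linear gain from one grayscale gain‑map byte is the inverse of the log2 normalization applied at encode time. The sketch below uses a hypothetical decodeGain helper with illustrative boost bounds; in a real decoder these bounds come from the GainMapMin/Max fields in the XMP metadata.

```cpp
#include <cmath>
#include <cstdint>

// Illustrative bounds; real values are read from the gain-map metadata.
constexpr float kMinContentBoost = 1.0f;
constexpr float kMaxContentBoost = 4.0f;

// Map one grayscale gain-map byte back to a linear HDR/SDR gain ratio.
float decodeGain(uint8_t stored) {
  const float recovery = stored / 255.0f;  // normalize byte to [0, 1]
  const float log_min = std::log2(kMinContentBoost);
  const float log_max = std::log2(kMaxContentBoost);
  // Undo the log2 normalization applied at encode time.
  return std::exp2(log_min + recovery * (log_max - log_min));
}
```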
Rendering HDR Content
During rendering, each pixel’s HDR value is reconstructed by applying the HDR/SDR ratio (derived from the gain‑map) to the SDR pixel data. The function trfn_apply_gain() performs this interpolation, after which the image is rendered normally on an HDR‑capable display.
Conclusion
Ultra HDR Image unifies HDR capture and display across Android devices, simplifying third‑party integration. While the article concentrates on the native encoding/decoding modules, the format’s long‑term impact on social‑media image sharing and cross‑vendor HDR consistency remains to be fully realized.
OPPO Kernel Craftsman
Sharing Linux kernel-related cutting-edge technology, technical articles, technical news, and curated tutorials