Trainable HVI Color Space Turns Dark Photos into Cinematic Images – CVPR 2025

The paper introduces the first trainable HVI color space and a lightweight CIDNet network that jointly model intensity and chrominance, eliminating color bias and brightness artifacts in low‑light image enhancement and achieving state‑of‑the‑art results on ten benchmark datasets.

AIWalker

Motivation

Low‑light image enhancement (LLIE) aims to improve the visual quality of images captured in dark environments while suppressing noise and color distortion. Existing methods based on the sRGB color space suffer from a strong coupling of color and brightness, leading to color bias and brightness artifacts. Switching to HSV alleviates some issues but introduces red discontinuity noise and black‑plane noise, especially in red‑dominant or extremely dark scenes.

Problems Identified

Color bias and brightness artifacts: sRGB‑based LLIE methods produce noticeable color shifts and halo effects.

Red discontinuity noise: in HSV, the hue axis wraps at 0°/360°, so adjacent red hues sit at opposite ends of the numeric range; this split manifests as red speckle noise after enhancement.

Black‑plane noise: the HSV transform amplifies noise in low‑intensity pixels, dramatically lowering the signal‑to‑noise ratio in dark regions.
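The hue‑wrap problem can be seen numerically: two visually near‑identical reds that sit on opposite sides of pure red receive hue values at opposite ends of the range, so any model that regresses hue directly sees a huge error between them. A minimal illustration with Python's standard `colorsys` module:

```python
import colorsys

# Two visually near-identical reds, just on either side of pure red (hue 0)
h1, s1, v1 = colorsys.rgb_to_hsv(1.0, 0.01, 0.0)  # slightly orange-leaning red
h2, s2, v2 = colorsys.rgb_to_hsv(1.0, 0.0, 0.01)  # slightly purple-leaning red

print(h1, h2)        # ~0.0017 vs ~0.9983: opposite ends of the hue range
print(abs(h1 - h2))  # ~0.997 numerically, though the true angular gap is ~1.2 degrees
```

After enhancement, pixels straddling this boundary can be pushed to very different hues, which is exactly the red speckle noise described above.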

Proposed HVI Color Space

The authors design a new trainable color space called HVI (Horizontal/Vertical‑Intensity) built on HSV with two key innovations:

Polarized HS mapping: Convert the HS plane to polar coordinates, producing orthogonal components H and V that make adjacent red regions mathematically continuous, thereby removing red discontinuity noise.

Adaptive Intensity Collapse Function: A learnable function compresses the intensity of low‑light regions, suppressing black‑plane noise while preserving highlight details. The function is parameterized by a learnable scalar k to control compression strength and avoid gradient explosion.

Applying these steps to an sRGB image yields an HVI‑Map that can be inversely transformed back to sRGB after processing.
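A rough per‑pixel sketch of such a transform follows. The specific formulas (including the shape of the collapse function) are assumptions for illustration; only the structure follows the description above: a polar mapping of the HS plane whose chroma radius is shrunk in dark pixels by a k‑controlled collapse term.

```python
import colorsys
import math

def rgb_to_hvi(r, g, b, k=1.0):
    """Illustrative HVI-style transform (NOT the paper's exact formulas).

    Maps the HSV hue/saturation plane to Cartesian coordinates so that hues on
    either side of 0/360 degrees land next to each other, and shrinks the
    chroma radius in dark pixels via a hypothetical k-controlled collapse term.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # hypothetical intensity collapse: ~0 for dark pixels, ~1 for bright ones
    c = math.sin(math.pi * v / 2.0) ** (1.0 / k)
    H = c * s * math.cos(2.0 * math.pi * h)   # horizontal chrominance
    V = c * s * math.sin(2.0 * math.pi * h)   # vertical chrominance
    return H, V, v                            # v doubles as the intensity map

# The two near-identical reds that HSV places at opposite ends of the hue axis
# now map to neighboring (H, V) points:
H1, V1, _ = rgb_to_hvi(1.0, 0.01, 0.0)
H2, V2, _ = rgb_to_hvi(1.0, 0.0, 0.01)
print(abs(H1 - H2), abs(V1 - V2))  # both small: no red discontinuity
```

Note that for a dark pixel (say v = 0.02) the collapse term is close to zero, so chroma noise in near‑black regions is squashed toward the origin of the HV plane, which is the intuition behind suppressing black‑plane noise.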

CIDNet: Dual‑Branch Decoupling Network

To exploit HVI, the authors introduce CIDNet, a lightweight dual‑branch encoder‑decoder network tailored for HVI:

HV branch: Models chrominance (color) invariance, separating noise from genuine texture.

I branch: Learns physical illumination constraints to adaptively enhance brightness without over‑exposure.

Lightweight Cross‑Attention (LCA): Enables bidirectional feature exchange. HV guides I to avoid over‑enhancement in dark regions, while I supplies illumination weights to HV for better denoising in shadows.

The network contains three down‑sampling and three up‑sampling layers, each equipped with LCA, and has only 1.88 M parameters and 7.57 GFLOPs.
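The bidirectional exchange in LCA can be pictured as two cross‑attention passes in which each branch queries the other. The sketch below is generic scaled dot‑product cross‑attention in NumPy, not CIDNet's actual (lighter, learned) block; the token counts and feature dimension are made up:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query, context):
    """Scaled dot-product cross-attention without learned projections.

    query, context: (tokens, dim) flattened feature maps from the two branches.
    Each query token becomes a similarity-weighted mix of context tokens.
    """
    d = query.shape[-1]
    weights = softmax(query @ context.T / np.sqrt(d))  # (q_tokens, c_tokens)
    return weights @ context

rng = np.random.default_rng(0)
hv_tokens = rng.random((16, 8))  # chrominance (HV-branch) tokens, sizes invented
i_tokens = rng.random((16, 8))   # intensity (I-branch) tokens

hv_refined = cross_attend(hv_tokens, i_tokens)  # intensity guides chrominance denoising
i_refined = cross_attend(i_tokens, hv_tokens)   # chrominance guides brightness enhancement
print(hv_refined.shape, i_refined.shape)  # (16, 8) (16, 8)
```

In CIDNet this exchange happens at every scale of the encoder‑decoder, which is how the two branches stay consistent despite being decoupled.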

Experimental Results

Comprehensive quantitative and qualitative experiments were conducted on ten benchmark datasets (LOLv1, LOLv2‑real, LOLv2‑synthetic, DICM, LIME, NPE, MEF, VV, Sony‑Total‑Dark, SICE), plus the LOL‑Blur dataset.

On the LOL benchmark, CIDNet achieves higher PSNR/SSIM and lower (better) LPIPS than SOTA methods such as RetinexFormer and GSAD, while maintaining low computational cost.

On Sony‑Total‑Dark, CIDNet raises PSNR to 22.90 dB, a 6.68 dB gain over the best baseline.

Although CIDNet’s BRISQUE scores on the five unpaired datasets are slightly worse than RetinexNet’s, visual comparisons show it produces more realistic results.

Cross‑Method Compatibility

When HVI is used as a pre‑processing module for other LLIE methods (e.g., FourLLIE, GSAD), average PSNR gains of 1.2–3.5 dB are observed, and GSAD combined with HVI attains the best SSIM and LPIPS scores.

Ablation Studies

Table 4 and Figures 5–6 demonstrate that each component—polarized HS mapping, adaptive intensity collapse, and CIDNet sub‑modules—contributes positively to both quantitative metrics and visual quality.

Conclusion

The HVI color space and CIDNet together address the longstanding color bias and brightness artifact problems in low‑light image enhancement. By decoupling chrominance and intensity in a trainable space and designing a dual‑branch network, the method outperforms existing SOTA approaches across multiple datasets, establishing a robust solution for LLIE.

References

[1] HVI: A New Color Space for Low‑light Image Enhancement.

[Figures: illustration of the HVI color space; color space transformation process; polarized HS mapping; HVI‑Map generation; CIDNet enhancement pipeline]
Tags: computer vision, deep learning, CVPR 2025, CIDNet, HVI color space, low-light image enhancement