Modeling Indirect Illumination for Inverse Rendering
The CVPR 2022 paper by Alibaba's Taobao Tech and Zhejiang University introduces a method that models indirect illumination directly from a neural outgoing-radiance field, combined with signed-distance-field geometry and spherical-Gaussian lighting and visibility. This avoids costly path tracing and enables more accurate recovery of geometry, material, and lighting for realistic free-viewpoint relighting.
Recently, Alibaba's Taobao Tech 3D modeling & AI design team collaborated with Zhejiang University CAD&CG Lab to publish the paper "Modeling Indirect Illumination for Inverse Rendering" accepted at CVPR 2022. The paper addresses inverse rendering—recovering geometry, material, and lighting from images—under natural illumination, which introduces soft shadows and inter‑reflection.
The authors propose to model indirect illumination directly from the scene’s outgoing radiance field, which can be reconstructed by a neural scene representation from multi‑view images. This avoids costly recursive path tracing and allows indirect lighting to be queried during optimization.
Geometry is represented as a signed distance field (SDF). Incident light at a surface point is split into a direct component, the environment light attenuated by visibility, and an indirect component. Visibility is predicted by an MLP that maps a surface point and a direction to a visibility value, and environment lighting is parameterized by spherical Gaussians (SGs). The product of the visibility term and an environment SG is approximated by another SG so the shading integral can be evaluated efficiently.
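To give intuition for why this approximation is convenient: once both factors are expressed as spherical Gaussians, their product has a closed-form SG lobe. A minimal numpy sketch of the standard SG product identity (function names are my own, not from the paper):

```python
import numpy as np

def sg_eval(w, xi, lam, mu):
    """Evaluate an SG G(w) = mu * exp(lam * (dot(w, xi) - 1)) at unit direction w."""
    return mu * np.exp(lam * (np.dot(w, xi) - 1.0))

def sg_product(xi1, lam1, mu1, xi2, lam2, mu2):
    """Closed-form product of two spherical Gaussians.

    The product of two SGs is itself an SG; this is the standard
    identity, not code from the paper.
    """
    combined = lam1 * xi1 + lam2 * xi2              # unnormalized combined lobe
    lam_m = np.linalg.norm(combined)                # new sharpness
    xi_m = combined / lam_m                         # new lobe axis (unit vector)
    mu_m = mu1 * mu2 * np.exp(lam_m - lam1 - lam2)  # new amplitude
    return xi_m, lam_m, mu_m
```

Evaluating the product SG at any direction gives the same value as multiplying the two original SG evaluations, which is what makes the visibility-times-environment integral tractable once visibility is also SG-approximated.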
Indirect illumination is obtained by converting the outgoing radiance field into an SG‑based representation and storing it in an MLP conditioned on spatial coordinates. This MLP is supervised by samples drawn from the radiance field.
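For intuition, the SG-based representation means indirect incident radiance at a point is stored as a small mixture of SG lobes whose parameters the MLP predicts. A minimal numpy sketch of evaluating such a mixture (the parameter layout is my assumption; in the paper the lobes would come from the position-conditioned MLP):

```python
import numpy as np

def eval_sg_mixture(w, lobes):
    """Evaluate a mixture of spherical Gaussians at unit direction w.

    `lobes` is a list of (xi, lam, mu) tuples: unit lobe axis,
    sharpness, and amplitude. Here the lobes are given directly
    rather than predicted by a network.
    """
    w = np.asarray(w, dtype=float)
    out = 0.0
    for xi, lam, mu in lobes:
        out = out + mu * np.exp(lam * (np.dot(w, xi) - 1.0))
    return out
```

Querying indirect lighting then reduces to evaluating this mixture, with no ray marching or path tracing at render time.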
The BRDF is a simplified Disney model parameterized by albedo and roughness. Its spatially varying parameters are encoded into a latent code and decoded by a network, regularized with a KL sparsity term and a smoothness term.
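As a rough illustration of such a two-parameter BRDF, the sketch below combines a Lambertian diffuse term with a GGX specular lobe, a generic microfacet form rather than the paper's exact formulation (the fixed Fresnel value and the omitted shadowing term are simplifications of my own):

```python
import numpy as np

def simplified_brdf(albedo, roughness, n, wi, wo):
    """Diffuse + isotropic GGX specular BRDF controlled by albedo and roughness.

    A generic microfacet sketch, not the paper's implementation.
    n, wi, wo are unit vectors: surface normal, light direction, view direction.
    """
    h = wi + wo
    h = h / np.linalg.norm(h)                 # half vector
    cos_h = max(np.dot(n, h), 0.0)
    a2 = roughness ** 4                       # Disney convention: alpha = roughness^2
    d = a2 / (np.pi * ((cos_h ** 2) * (a2 - 1.0) + 1.0) ** 2)  # GGX normal distribution
    f0 = 0.04                                 # fixed dielectric Fresnel (simplification)
    denom = 4.0 * max(np.dot(n, wi), 1e-4) * max(np.dot(n, wo), 1e-4)
    specular = f0 * d / denom                 # shadowing term omitted for brevity
    diffuse = albedo / np.pi                  # Lambertian term
    return diffuse + specular
```

Lower roughness concentrates the specular lobe, so the peak value at normal incidence grows as roughness shrinks; albedo scales only the diffuse term.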
Training proceeds in three stages: (1) optimize the SDF and the outgoing radiance field; (2) trace rays from surface points to generate supervision, training the visibility MLP with a cross-entropy loss and the indirect-lighting MLP with an L1 loss; (3) minimize the reconstruction loss between rendered and observed images to refine the BRDF and environment lighting.
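The two supervision terms in stage (2) can be sketched directly (numpy; array shapes and names are my assumption):

```python
import numpy as np

def visibility_bce(pred, target, eps=1e-7):
    """Binary cross-entropy between predicted and ray-traced visibility (0 or 1)."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))))

def indirect_l1(pred_rgb, target_rgb):
    """L1 loss between predicted indirect radiance and samples from the radiance field."""
    return float(np.mean(np.abs(pred_rgb - target_rgb)))
```

Cross-entropy suits visibility because the ray-traced target is binary (occluded or not), while L1 is a robust choice for the continuous radiance values.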
Experiments on synthetic Blender data and real multi‑view captures show the proposed method outperforms previous approaches, achieving higher‑quality albedo and roughness estimation and realistic free‑viewpoint relighting.
The work demonstrates an effective way to incorporate indirect illumination into inverse rendering pipelines, leveraging neural radiance fields and SG‑based visibility.