
CVPR 2025 Awards Unveiled: Best Papers, Young Researchers, and Industry Highlights

The CVPR 2025 conference announced record‑breaking submission numbers, awarded the Best Paper to VGGT, honored young researchers and the best student paper, listed several honorable mentions, and highlighted the strong presence of Chinese institutions among the award candidates.


CVPR 2025 Statistics

CVPR 2025 received 13,008 submissions from over 40,000 authors, a 13% increase over the previous year, and accepted 2,872 papers (22.1% acceptance rate). Among the accepted papers, 96 (3.3%) were selected for oral presentation and 387 (13.7%) were highlighted. The conference attracted more than 9,000 attendees from 70+ countries, setting new records for authors, reviewers, and area chairs.
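As a quick sanity check, the headline percentages follow directly from the raw counts quoted above; note that the highlight share computes to roughly 13.5% of accepted papers, so the quoted 13.7% likely reflects rounding or a slightly different highlight count:

```python
# Sanity-check the CVPR 2025 percentages from the raw counts quoted above.
submissions = 13_008
accepted = 2_872
orals = 96
highlights = 387

acceptance_rate = accepted / submissions    # ≈ 0.2208 → 22.1%
oral_share = orals / accepted               # ≈ 0.0334 → 3.3%
highlight_share = highlights / accepted     # ≈ 0.1347 → 13.5% (article quotes 13.7%)

print(f"acceptance rate: {acceptance_rate:.1%}")  # 22.1%
print(f"oral share:      {oral_share:.1%}")       # 3.3%
print(f"highlight share: {highlight_share:.1%}")  # 13.5%
```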

CVPR 2025 submission statistics

Best Paper – VGGT: Visual Geometry Grounded Transformer

Authors: Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, David Novotny

Institutions: University of Oxford, Meta AI

Paper: https://arxiv.org/abs/2503.11651
Code: https://github.com/facebookresearch/vggt

VGGT introduces a feed‑forward neural network that directly infers the full set of 3D scene attributes (camera parameters, point clouds, depth maps, and point trajectories) from one or a few views. The model reconstructs a scene in under one second and outperforms traditional geometry‑optimization pipelines on camera estimation, multi‑view depth estimation, dense reconstruction, and 3D point tracking. Used as a pretrained backbone, VGGT also substantially improves downstream tasks such as non‑rigid tracking and feed‑forward novel‑view synthesis.

VGGT illustration

Best Student Paper – Neural Inverse Rendering from Propagating Light

Authors: Anagh Malik, Benjamin Attal, Andrew Xie, Matthew O’Toole, David B. Lindell

Institutions: University of Toronto, Vector Institute, Carnegie Mellon University

Paper: https://arxiv.org/pdf/2506.05347

The work proposes a physics‑based neural inverse rendering pipeline that leverages a temporally extended neural radiance cache to accelerate light‑transport simulation. This enables accurate modeling of direct and indirect illumination and high‑quality 3D reconstruction even under strong indirect lighting. The method also supports view synthesis, automatic decomposition of captured data into direct and indirect components, and multi‑view time‑resolved relighting.

Neural Inverse Rendering illustration

Honorable Mentions (selected)

MegaSaM: Accurate, Fast and Robust Structure and Motion from Casual Dynamic Videos – Google DeepMind, UC Berkeley, University of Michigan. https://arxiv.org/abs/2412.04463

Navigation World Models – Meta, NYU, Berkeley AI Research. https://arxiv.org/abs/2412.03572

Molmo and PixMo: Open Weights and Open Data for State‑of‑the‑Art Vision‑Language Models – Allen Institute for AI, University of Washington, University of Pennsylvania. https://arxiv.org/abs/2409.17146

3D Student Splatting and Scooping – University College London. https://arxiv.org/abs/2503.10148

Young Researcher Awards

Hao Su (UC San Diego) and Saining Xie (NYU) were recognized for outstanding contributions to computer vision, graphics, and machine learning.

Tags: computer vision · Best Paper · CVPR 2025 · Award Summary
Written by

AI Frontier Lectures

Leading AI knowledge platform
