Kuaishou Tech
Dec 3, 2025 · Artificial Intelligence

Can Diffusion Models Be Their Own Reward Model? Latent Reward Modeling & Step-Level Preference Optimization

This article presents a new paradigm that repurposes diffusion models as noise-aware latent reward models: the Latent Reward Model (LRM) paired with Latent Preference Optimization (LPO) for step-level preference optimization. The approach addresses the shortcomings of pixel-level reward models, introduces multi-preference consistent filtering, and demonstrates significant performance and efficiency gains on benchmarks such as PickScore and T2I-CompBench++.

AI Alignment · Diffusion Models · Image Generation
9 min read