Abstract
PRFL optimizes video generation preferences in latent space, improving alignment with human preferences while reducing memory consumption and training time.
Reward feedback learning (ReFL) has proven effective for aligning image generation with human preferences. However, its extension to video generation faces significant challenges. Existing video reward models rely on vision-language models designed for pixel-space inputs, confining ReFL optimization to near-complete denoising steps after computationally expensive VAE decoding. This pixel-space approach incurs substantial memory overhead and increased training time, and its late-stage optimization lacks early-stage supervision, refining only visual quality rather than fundamental motion dynamics and structural coherence. In this work, we show that pre-trained video generation models are naturally suited for reward modeling in the noisy latent space, as they are explicitly designed to process noisy latent representations at arbitrary timesteps and inherently preserve temporal information through their sequential modeling capabilities. Accordingly, we propose Process Reward Feedback Learning (PRFL), a framework that conducts preference optimization entirely in latent space, enabling efficient gradient backpropagation throughout the full denoising chain without VAE decoding. Extensive experiments demonstrate that PRFL significantly improves alignment with human preferences, while achieving substantial reductions in memory consumption and training time compared to RGB ReFL.
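To make the idea concrete, here is a minimal toy sketch of latent-space reward feedback: a small denoiser runs a short denoising chain entirely on latents, a latent-space reward head scores the result, and the reward gradient backpropagates through every denoising step with no VAE decoding. The module names, shapes, and Euler-style update are illustrative assumptions for exposition, not the authors' actual PRFL implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a tiny denoiser that predicts a latent update from
# (latent, timestep), and a reward head that scores latents directly.
class TinyDenoiser(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.SiLU(), nn.Linear(32, dim))

    def forward(self, z, t):
        t_emb = t.expand(z.shape[0], 1)          # broadcast timestep to the batch
        return self.net(torch.cat([z, t_emb], dim=-1))

class LatentReward(nn.Module):
    """Scores noisy latents directly -- no pixel decoding required."""
    def __init__(self, dim=8):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, z):
        return self.head(z).mean()

denoiser = TinyDenoiser()
reward = LatentReward()
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

z = torch.randn(4, 8)                            # batch of noisy latents
timesteps = torch.linspace(1.0, 0.1, steps=5)

# Run the full (toy) denoising chain in latent space, keeping the graph
# so the reward gradient flows back through every step.
for t in timesteps:
    z = z - 0.1 * denoiser(z, t.view(1, 1))      # simplistic Euler-style update

loss = -reward(z)                                # maximize latent-space reward
opt.zero_grad()
loss.backward()
opt.step()
```

Because the reward is computed on latents, no VAE decode sits between the generator and the reward signal, which is the source of the memory and training-time savings the abstract describes.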
Community
🎬 PRFL: Efficient Video Generation Alignment in Latent Space
We introduce Process Reward Feedback Learning (PRFL), a novel framework that enables efficient human preference alignment for video generation models—entirely in latent space!
Key Innovation: Instead of relying on expensive pixel-space reward models, we demonstrate that pre-trained video generation models themselves are excellent reward models. They naturally understand noisy latent representations at any timestep and preserve temporal information.
Why it matters:
✨ Full denoising chain optimization without VAE decoding
⚡ Significantly reduced memory & training time vs RGB-based ReFL
🎯 Better alignment with human preferences
This opens up new possibilities for scaling video generation alignment! Check out our paper and project page for demos.
📄 Paper: https://arxiv.org/abs/2511.21541
🌐 Project: https://kululumi.github.io/PRFL/
cool! 👍👍👍
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ID-Crafter: VLM-Grounded Online RL for Compositional Multi-Subject Video Generation (2025)
- Identity-Preserving Image-to-Video Generation via Reward-Guided Optimization (2025)
- MoAlign: Motion-Centric Representation Alignment for Video Diffusion Models (2025)
- PhysCorr: Dual-Reward DPO for Physics-Constrained Text-to-Video Generation with Automated Preference Selection (2025)
- Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation (2025)
- Growing with the Generator: Self-paced GRPO for Video Generation (2025)
- RealDPO: Real or Not Real, that is the Preference (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
I am very interested in your work. I was wondering if you have any plans to open-source the code in the near future.