TL;DR: A state-of-the-art inference-time scaling technique for flow models such as FLUX that precisely aligns outputs with user preferences (text prompts, object quantities, and more) at a compute cost of under $1!

Our inference-time scaling method aligns pretrained flow models with user preferences. In the top row, adding more compute improves alignment, with lower Residual Sum of Squares (RSS) in counting image generation indicating a closer match between generated images and specified object counts. Our method extends the capabilities of pretrained flow models (left side of each case), producing outputs that better align with user preferences (right side of each case, red box).

Abstract

We propose an inference-time scaling approach for pretrained flow models. Recently, inference-time scaling has gained significant attention in LLMs and diffusion models, improving sample quality or better aligning outputs with user preferences by leveraging additional computation. For diffusion models, particle sampling has allowed more efficient scaling due to the stochasticity at intermediate denoising steps. In contrast, while flow models have gained popularity as an alternative to diffusion models, offering faster generation and high-quality outputs in state-of-the-art image and video generative models, the efficient inference-time scaling methods used for diffusion models cannot be directly applied due to their deterministic generative process. To enable efficient inference-time scaling for flow models, we propose three key ideas: 1) SDE-based generation, which enables particle sampling in flow models; 2) interpolant conversion, which employs an alternative generative trajectory to broaden the search space and enhance sample diversity; and 3) Rollover Budget Forcing (RBF), an adaptive allocation of computational resources across timesteps to maximize budget utilization. Our experiments show that SDE-based generation, particularly variance-preserving (VP) interpolant-based generation, improves the performance of particle sampling methods for inference-time scaling in flow models. Additionally, we demonstrate that RBF with VP-SDE achieves the best performance, outperforming all previous inference-time scaling approaches.

💡 Main Idea

Inference-Time SDE Conversion

Particle sampling is a widely used strategy in diffusion models for efficient inference-time scaling. However, it cannot be directly applied to flow models, which rely on deterministic probability flow ODE (PF-ODE) sampling (see Linear ODE in the left figure). This lack of stochasticity limits the flexibility of flow models for inference-time reward alignment (see (a) in the right figure). To address this, we introduce an SDE-based generation process that enables particle sampling in flow models. In practice, however, this alone does not provide sufficient sample diversity for effective exploration (see (b) in the right figure).
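
Below is a minimal sketch of this SDE conversion, assuming a rectified-flow-style linear interpolant x_t = (1 - t) * noise + t * data. The score is recovered from the learned velocity, and the added drift correction keeps the marginals of the original flow ODE unchanged, so candidate particles can be drawn at intermediate steps. The names velocity_fn and sigma are hypothetical stand-ins for the pretrained flow model and a user-chosen noise schedule, not part of any official implementation.

import torch

@torch.no_grad()
def flow_sde_step(velocity_fn, x, t, dt, sigma):
    # One Euler-Maruyama step of an SDE that shares its marginals with the flow ODE.
    v = velocity_fn(x, t)                      # learned velocity field
    score = -(x - t * v) / (1.0 - t + 1e-5)    # score recovered from the velocity (linear interpolant)
    drift = v + 0.5 * sigma(t) ** 2 * score    # drift correction preserves the ODE marginals
    noise = torch.randn_like(x)                # stochasticity that enables particle sampling
    return x + drift * dt + sigma(t) * (dt ** 0.5) * noise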

Inference-Time Interpolant Conversion

Inspired by the effective use of particle sampling in diffusion models, we identify the choice of interpolant as a key factor. While diffusion models typically adopt a Variance Preserving (VP) interpolant, flow models rely on a linear interpolant. To expand the search space, we propose converting the linear trajectory of flow models into a VP-SDE-based path (see VP-SDE in the left figure). This inference-time interpolant conversion enhances exploration and enables more effective particle sampling (see (c) in the right figure).
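
As an illustration of interpolant conversion, the sketch below maps a point on the linear trajectory to a variance-preserving (VP) trajectory at the matched signal-to-noise ratio; under this matching the conversion reduces to a per-timestep rescaling. This is a simplified derivation under the stated assumptions, not necessarily the exact conversion used in the paper.

def linear_to_vp(x_t, t):
    # Linear interpolant: x_t = t * data + (1 - t) * noise.
    # VP interpolant: alpha_s * data + sigma_s * noise with alpha_s^2 + sigma_s^2 = 1.
    # Matching the SNR (alpha_s / sigma_s = t / (1 - t)) gives
    # alpha_s = t / sqrt(t^2 + (1 - t)^2) and sigma_s = (1 - t) / sqrt(t^2 + (1 - t)^2),
    # so the VP-path point is just the linear-path point rescaled.
    scale = (t ** 2 + (1.0 - t) ** 2) ** 0.5
    return x_t / scale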

Rollover Budget Forcing

Previous particle sampling methods [1-3] use a fixed number of particles at each denoising step. Our analysis reveals that the required number of function evaluations (NFEs) to find a better sample varies across timesteps, making uniform allocation inefficient. Rollover Budget Forcing (RBF) addresses this by adopting a rollover strategy: when a high-reward particle is found early within the allocated quota, the unused NFEs are carried over to the next step—enabling adaptive compute allocation and more effective alignment.
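
A rough sketch of the RBF loop is shown below, assuming step_fn draws one stochastic SDE transition (one NFE) and reward_fn scores a particle (e.g., via the model's clean-sample prediction and a reward model); both names are placeholders rather than the released code. The key point is that quota left unused at one timestep is carried into the next.

import math

def rollover_budget_forcing(x, timesteps, step_fn, reward_fn, nfe_per_step):
    # Adaptive particle sampling: each timestep receives `nfe_per_step` new NFEs
    # plus whatever was left over from earlier timesteps.
    budget = 0
    for t in timesteps:
        budget += nfe_per_step                 # refill quota (rollover included)
        current_r = reward_fn(x)               # reward of the current particle
        best_x, best_r = None, -math.inf
        while budget > 0:
            candidate = step_fn(x, t)          # one stochastic transition = one NFE
            budget -= 1
            r = reward_fn(candidate)
            if r > best_r:
                best_x, best_r = candidate, r
            if best_r > current_r:             # better particle found early:
                break                          # stop searching, roll unused NFEs forward
        x = best_x                             # loop runs at least once when nfe_per_step >= 1
    return x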

🖼️ Qualitative Results

Without additional training, our method, RBF, can align pretrained flow models with diverse user preferences, including logical relations, spatial relations, and object quantities.

Compositional Text-to-Image Generation

Quantity-Aware Image Generation

SDE/Interpolant Conversion

We observe a consistent improvement in alignment when applying inference-time SDE conversion (linear SDE) and interpolant conversion (VP SDE), as both expand the search space. This enables efficient use of particle sampling in flow models, outperforming other search methods based on the linear ODE, such as Best-of-N (BoN) and Search over Paths (SoP) [4].
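
For reference, a Best-of-N baseline over the deterministic ODE looks like the sketch below (sample_ode and reward_fn are hypothetical placeholders): all exploration is spent on independent initial noises, whereas SDE and interpolant conversion let particles branch at intermediate denoising steps.

import torch

@torch.no_grad()
def best_of_n(sample_ode, reward_fn, n, shape, device="cuda"):
    # Run the deterministic flow ODE from N independent noises and keep the best sample.
    best_x, best_r = None, float("-inf")
    for _ in range(n):
        x0 = torch.randn(shape, device=device)   # fresh initial noise
        x1 = sample_ode(x0)                       # deterministic linear-ODE generation
        r = reward_fn(x1)
        if r > best_r:
            best_x, best_r = x1, r
    return best_x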

Additional Qualitative Results

Acknowledgement

We thank Seungwoo Yoo and Juil Koo for their constructive feedback on our manuscript, and Phillip Y. Lee for helpful discussions on Vision Language Models.

References

[1] Test-time Alignment of Diffusion Models without Reward Over-optimization, Kim et al., ICLR 2025
[2] CoDe: Blockwise Control for Denoising Diffusion Models, Singh et al., arXiv 2025
[3] Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding, Li et al., arXiv 2024
[4] Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps, Ma et al., arXiv 2025

BibTeX

@article{kim2025inferencetimescalingflowmodels,
        title   = {Inference-Time Scaling for Flow Models via Stochastic Generation and Rollover Budget Forcing},
        author  = {Jaihoon Kim and Taehoon Yoon and Jisung Hwang and Minhyuk Sung},
        journal = {arXiv preprint arXiv:2503.19385},
        year    = {2025}
}