Call for Papers
ReALM–GEN focuses on post-training and inference-time alignment of diffusion- and flow-based generative models—treating controlled generation as sampling from a tilted distribution that incorporates rewards for real-world constraints and preferences. Our goal is to connect theory (e.g., distribution tilting / stochastic optimal control / optimal transport perspectives) with methods (e.g., guidance, reward-based fine-tuning, test-time adaptation) and real deployments (imaging, language/multimodal, molecules, robotics).
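For concreteness, this view can be written down as follows (a minimal sketch; the symbols p_0, r, and β are generic placeholders rather than notation fixed by the workshop):

\[
  p_{\mathrm{tilt}}(x) \;\propto\; p_0(x)\,\exp\!\bigl(r(x)/\beta\bigr),
\]

where p_0 is the density of the pretrained diffusion/flow model, r(x) is a reward encoding real-world constraints or preferences, and β > 0 is a temperature controlling the strength of the tilt; guidance, reward-based fine-tuning, and test-time adaptation can then all be read as (approximate) samplers from p_tilt.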
Our workshop spans theory, methodology and applications as detailed below:
- Theory: distribution tilting and energy-based views; stochastic optimal control; Schrödinger bridges / optimal transport connections; identifiability and steerability of diffusion/flow models.
- Methodology: supervised / reward-based / sampling-based post-training; efficient conditional generation and constraint satisfaction (guidance, adapters, plug-and-play); test-time scaling and inference-time alignment; discrete diffusion guidance and diffusion language models.
- Applications: inverse problems (e.g., medical/scientific imaging), vision and robotics, language and multimodal reasoning, tabular synthesis, protein/molecule design, and safety alignment.
Topics of interest
We welcome submissions on (and beyond) the following topics:
- Inverse Problems: Diffusion priors for inverse problems.
- Conditional Generation: Text-to-image/video/3D, style transfer, in-context editing, counterfactual generation.
- Steering: Latent editing and representation engineering, score-based guidance, concept engineering.
- Alignment: RL-based sampling & fine-tuning (e.g., RLHF, RLVR, optimal control).
- Diffusion LLMs: Guidance for (discrete) diffusion LLMs, constrained decoding & reasoning.