In Depth
Flow models (normalizing flows) generate data by learning a sequence of invertible transformations that map a simple base distribution (such as a Gaussian) to the complex distribution of real data. Because each transformation is invertible, the same model can both generate new samples (running the maps forward from noise to data) and compute exact probabilities of data points (running the maps in reverse and applying the change-of-variables formula, which corrects for how each map stretches or compresses volume via its Jacobian determinant).
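A minimal sketch of the invertibility idea, using a single affine map as a stand-in "flow" (real models stack many such invertible layers; the parameter names here are illustrative):

```python
import numpy as np

# A one-layer "flow": the invertible affine map x = a*z + b.
# a != 0 guarantees the map can be inverted exactly.
a, b = 2.0, 0.5

def forward(z):
    # Generation direction: base sample -> data space
    return a * z + b

def inverse(x):
    # Density direction: data -> base space
    return (x - b) / a

rng = np.random.default_rng(0)
z = rng.standard_normal(5)   # samples from the simple base distribution
x = forward(z)               # "generated" samples
z_back = inverse(x)          # invertibility lets us recover z exactly
print(np.allclose(z, z_back))   # True
```

Stacking many such invertible layers (affine couplings, invertible 1x1 convolutions, and so on) yields an expressive map that remains exactly invertible end to end.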
The invertibility constraint means flow models can compute exact likelihoods, unlike GANs (which provide no likelihoods at all) and VAEs (which only bound them). This property makes flow models theoretically elegant and useful for density estimation, anomaly detection, and any application where knowing the exact probability of an observation matters.
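The exact likelihood comes from the change-of-variables formula: invert the flow, evaluate the base density, and subtract the log absolute Jacobian determinant. For the affine map x = a*z + b with z ~ N(0, 1), that is log p_x(x) = log p_z((x - b)/a) - log|a|, which we can verify against the closed-form Gaussian it induces (a toy sketch, with illustrative parameters):

```python
import numpy as np

a, b = 2.0, 0.5  # same illustrative affine flow: x = a*z + b

def log_prob(x):
    z = (x - b) / a                               # invert the flow
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))    # standard normal log-density
    return log_pz - np.log(np.abs(a))             # Jacobian correction

# The flow pushes N(0, 1) forward to N(b, a^2); check against that closed form.
x = np.array([0.0, 1.0, 3.0])
closed_form = -0.5 * ((x - b)**2 / a**2 + np.log(2 * np.pi * a**2))
print(np.allclose(log_prob(x), closed_form))   # True
```

Deeper flows apply the same formula layer by layer, summing the per-layer log-determinants, which is why architectures are designed so those determinants are cheap to compute.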
Recent innovations like flow matching and rectified flows have gained significant attention as alternatives to traditional diffusion models for image and video generation. These approaches reduce training to a simple regression objective on velocity fields and can generate high-quality samples in fewer sampling steps. Stable Diffusion 3 and similar next-generation models incorporate flow-based techniques, making flow models increasingly relevant to the future of generative AI.
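The rectified-flow objective can be sketched in a few lines (an illustrative toy, not any production model's code): draw noise x0 and data x1, interpolate x_t = (1 - t) x0 + t x1, and regress a velocity model onto the straight-line target x1 - x0.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(3.0, 1.0, size=(256, 1))   # stand-in "data" batch
x0 = rng.standard_normal((256, 1))         # base noise
t = rng.uniform(size=(256, 1))             # random interpolation times

x_t = (1 - t) * x0 + t * x1                # point on the straight path
target = x1 - x0                           # constant velocity along that path

def velocity(x_t, t, w):
    # Tiny linear velocity model, purely for illustration
    return w[0] * x_t + w[1] * t + w[2]

w = np.zeros(3)
loss = np.mean((velocity(x_t, t, w) - target) ** 2)
print(np.isfinite(loss))   # True: the objective is a plain MSE regression
```

Sampling then integrates the learned velocity field from noise toward data, and because the target paths are straight lines, few integration steps are often enough.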