🤖 AI Summary
This work addresses the challenge of *targeted forgetting* in generative modeling, i.e., selectively removing specific regions of the data distribution without retraining from scratch or accessing the samples to be forgotten. We propose the first continual learning and selective forgetting framework that integrates flow matching with energy-based modeling. Methodologically, we design a proxy energy function that reweights the flow matching loss; theoretically, we prove that this reweighting steers the learned vector field toward a *soft mass depletion* objective, enabling seamless, traceless forgetting. Our key contribution is the unification of flow matching and energy-based modeling for targeted distribution editing, supported by interpretable visualizations. Experiments on 2D synthetic data and image benchmarks demonstrate that our method precisely excises target modes while preserving overall generation quality, significantly outperforming existing baselines.
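For intuition, here is a minimal sketch of what such an energy-reweighted flow matching loss could look like. The names `v_theta` and `energy`, and the particular weight `exp(-lam * E(x))` (which shrinks where the proxy energy is large, i.e., inside the region to forget), are illustrative assumptions, not the paper's implementation:

```python
# Sketch of an energy-reweighted Conditional Flow Matching loss (illustrative;
# the paper's exact weighting and energy proxy may differ).
import torch

def reweighted_cfm_loss(v_theta, x1, energy, lam=1.0):
    """v_theta: vector-field net v(x_t, t); x1: data batch (B, D);
    energy: hypothetical proxy E(x) -> (B,), large on the forget region."""
    x0 = torch.randn_like(x1)                         # noise endpoint x0 ~ N(0, I)
    t = torch.rand(x1.shape[0], 1, device=x1.device)  # t ~ U[0, 1]
    x_t = (1 - t) * x0 + t * x1                       # linear interpolation path
    u_t = x1 - x0                                     # target velocity for this path
    w = torch.exp(-lam * energy(x1)).detach()         # energy weight, no grad through E
    per_sample = ((v_theta(x_t, t) - u_t) ** 2).sum(dim=-1)
    return (w * per_sample).mean()                    # down-weights forget-region samples
```

With an energy that is near zero on retained modes and large on the forget region, `w` stays close to 1 for retained data and decays toward 0 inside the forget region, so those modes are softly depleted from the learned flow without ever being pointed to explicitly.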
📝 Abstract
We introduce ContinualFlow, a principled framework for targeted unlearning in generative models via Flow Matching. Our method leverages an energy-based reweighting of the Flow Matching loss to softly subtract undesired regions of the data distribution, without retraining from scratch or requiring direct access to the samples to be unlearned; instead, it relies on energy-based proxies to guide the unlearning process. We prove that this reweighting induces gradients equivalent to those of Flow Matching toward a soft mass-subtracted target, and we validate the framework through experiments on 2D synthetic and image domains, supported by interpretable visualizations and quantitative evaluations.
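One way to make the stated gradient equivalence concrete is via importance reweighting of the outer expectation. The weight $e^{-\lambda E(x)}$ and the notation below are our assumptions for illustration, not the paper's exact statement:

```latex
% Weighting the per-sample Flow Matching loss by w(x_1) = e^{-\lambda E(x_1)}
% is, up to a constant Z, ordinary Flow Matching toward the reweighted target
\[
  \tilde{p}_1(x) \;\propto\; p_1(x)\, e^{-\lambda E(x)},
\]
% because pulling the weight into the sampling distribution gives
\[
  \mathbb{E}_{x_1 \sim p_1}\!\left[ e^{-\lambda E(x_1)}\, \ell_\theta(x_1) \right]
  \;=\; Z\, \mathbb{E}_{x_1 \sim \tilde{p}_1}\!\left[ \ell_\theta(x_1) \right],
  \qquad Z = \mathbb{E}_{x_1 \sim p_1}\!\left[ e^{-\lambda E(x_1)} \right],
\]
% where \ell_\theta(x_1) denotes the inner Flow Matching objective conditioned
% on the data sample x_1. Since Z is a positive constant in \theta, both sides
% yield the same gradient direction.
```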