🤖 AI Summary
Schrödinger Bridge (SB) methods for image generation suffer from high computational cost and poor convergence in global path optimization, and are theoretically misaligned with mainstream diffusion models (e.g., $x_t = f_A(t)x_{\text{img}} + f_B(t)\varepsilon$).
Method: This paper introduces the first Local Diffusion Schrödinger Bridge (LDSB), which strictly enforces SB constraints only within a diffusion-path subspace and replaces heavy self-supervised networks with a lightweight Kolmogorov–Arnold Network (KAN, <0.1 MB) for efficient local path optimization.
Contribution/Results: LDSB significantly improves theoretical consistency between SB and diffusion modeling. Experiments on CelebA show a 48.50% FID reduction over DDIM using only five sampling steps; overall FID improves by over 15%, achieving superior trade-offs between generation quality and inference efficiency.
📝 Abstract
In image generation, Schrödinger Bridge (SB)-based methods theoretically improve efficiency and quality over diffusion models by finding the least costly path between two distributions. However, they are computationally expensive and time-consuming when applied to complex image data. The reason is that they focus on fitting globally optimal paths in high-dimensional spaces, directly generating the image at the next step on the path using complex networks trained through self-supervision, which typically leaves a gap to the global optimum. Meanwhile, most diffusion models lie in the same path subspace generated by the weights $f_A(t)$ and $f_B(t)$, as they follow the paradigm $x_t = f_A(t)x_{\text{img}} + f_B(t)\varepsilon$. To address the limitations of SB-based methods, this paper proposes for the first time to find Local Diffusion Schrödinger Bridges (LDSB) in the diffusion path subspace, which strengthens the connection between the SB problem and diffusion models. Specifically, our method optimizes the diffusion paths using a Kolmogorov–Arnold Network (KAN), which has the advantages of resistance to forgetting and continuous output. Experiments show that LDSB significantly improves the quality and efficiency of image generation using the same pre-trained denoising network, and the KAN used for path optimization occupies less than 0.1 MB. The FID metric is reduced by **more than 15%**, with a reduction of 48.50% when the NFE of DDIM is $5$ on the CelebA dataset. Code is available at https://github.com/Qiu-XY/LDSB.
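To make the path-subspace paradigm concrete, here is a minimal NumPy sketch (not the paper's code) of how a standard diffusion model such as DDPM/DDIM instantiates $x_t = f_A(t)x_{\text{img}} + f_B(t)\varepsilon$: with a linear beta schedule, $f_A(t) = \sqrt{\bar{\alpha}_t}$ and $f_B(t) = \sqrt{1-\bar{\alpha}_t}$. The schedule parameters below are conventional illustrative choices, not values taken from the paper.

```python
import numpy as np

# Most diffusion models follow x_t = f_A(t) * x_img + f_B(t) * eps.
# For DDPM/DDIM with a linear beta schedule (illustrative values):
#   f_A(t) = sqrt(alpha_bar_t),  f_B(t) = sqrt(1 - alpha_bar_t).
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear beta schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative product of (1 - beta)

f_A = np.sqrt(alphas_bar)                # weight on the clean image
f_B = np.sqrt(1.0 - alphas_bar)          # weight on the Gaussian noise

def forward_sample(x_img, t, rng=None):
    """Draw x_t = f_A(t) * x_img + f_B(t) * eps at timestep t."""
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.standard_normal(x_img.shape)
    return f_A[t] * x_img + f_B[t] * eps

# The pairs (f_A(t), f_B(t)) trace a single curve inside the
# two-dimensional path subspace; LDSB's idea is to optimize the
# path locally within this subspace rather than in pixel space.
print(f_A[0], f_B[0])    # near (1, 0): x_t is almost the clean image
print(f_A[-1], f_B[-1])  # near (0, 1): x_t is almost pure noise
```

Note that this schedule is variance-preserving ($f_A(t)^2 + f_B(t)^2 = 1$), which is one of many valid weight curves in the subspace spanned by $f_A$ and $f_B$.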