Improving Reconstruction of Representation Autoencoder

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited reconstruction fidelity of existing latent diffusion models, which stems from the absence of low-level visual information (such as color and texture) in their semantic representations. To overcome this limitation, we propose LV-RAE (Low-level Visual Representation AutoEncoder), the first framework to effectively integrate fine-grained visual details into high-level semantic latents while preserving semantic structure. Our approach leverages a vision foundation model as the encoder, enhances decoder robustness through fine-tuning, and smooths generated latents via controlled noise injection, thereby mitigating artifacts caused by perturbations of the latent variables. Experimental results demonstrate that LV-RAE significantly improves the perceptual quality and detail fidelity of generated images without compromising the model's capacity for semantic abstraction.

📝 Abstract
Recent work leverages Vision Foundation Models as image encoders to boost the generative performance of latent diffusion models (LDMs), as their semantic feature distributions are easy to learn. However, such semantic features often lack low-level information (e.g., color and texture), leading to degraded reconstruction fidelity, which has emerged as a primary bottleneck in further scaling LDMs. To address this limitation, we propose LV-RAE, a representation autoencoder that augments semantic features with missing low-level information, enabling high-fidelity reconstruction while remaining highly aligned with the semantic distribution. We further observe that the resulting high-dimensional, information-rich latents make decoders sensitive to latent perturbations, causing severe artifacts when decoding generated latents and consequently degrading generation quality. Our analysis suggests that this sensitivity primarily stems from excessive decoder responses along directions off the data manifold. Building on these insights, we propose fine-tuning the decoder to increase its robustness and smoothing the generated latents via controlled noise injection, thereby enhancing generation quality. Experiments demonstrate that LV-RAE significantly improves reconstruction fidelity while preserving semantic abstraction and achieving strong generative quality. Our code is available at https://github.com/modyu-liu/LVRAE.
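The abstract describes two robustness mechanisms: fine-tuning the decoder so it tolerates latent perturbations, and smoothing generated latents via controlled noise injection before decoding. A minimal NumPy sketch of these two ideas is shown below; the noise scale `sigma`, the toy linear decoder, and the loss form are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def smooth_latent(z, sigma=0.1, rng=None):
    """Controlled noise injection: perturb a generated latent with small
    Gaussian noise before decoding (sigma is a hypothetical scale)."""
    rng = rng or np.random.default_rng(0)
    return z + sigma * rng.standard_normal(z.shape)

def robustness_loss(decode, z, x_target, sigma=0.1, n_samples=4, rng=None):
    """Decoder fine-tuning objective sketch: the decoder should map noisy
    versions of a latent close to the clean reconstruction target."""
    rng = rng or np.random.default_rng(0)
    losses = []
    for _ in range(n_samples):
        z_noisy = z + sigma * rng.standard_normal(z.shape)
        losses.append(np.mean((decode(z_noisy) - x_target) ** 2))
    return float(np.mean(losses))

# Toy linear decoder standing in for the real decoder network.
W = np.eye(8)
decode = lambda z: z @ W
z = np.ones(8)
loss = robustness_loss(decode, z, decode(z))  # small but nonzero under noise
```

In practice the penalty on noisy-latent reconstructions would be one term in the decoder fine-tuning objective; minimizing it dampens the decoder's response to off-manifold perturbations that the paper identifies as the source of artifacts.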
Problem

Research questions and friction points this paper is trying to address.

reconstruction fidelity
latent diffusion models
low-level information
decoder sensitivity
semantic features
Innovation

Methods, ideas, or system contributions that make the work stand out.

representation autoencoder
low-level information
latent diffusion models
decoder robustness
controlled noise injection