Latent Space Factorization in LoRA

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LoRA variants lack explicit mechanisms for modeling task-relevant information, which limits the representational capacity of their low-rank subspaces. To address this, we propose FVAE-LoRA, the first integration of variational autoencoders (VAEs) into the low-rank adaptation framework. A novel evidence lower bound (ELBO) objective drives the model to learn two factorized latent spaces: one dedicated to task-salient features and the other capturing residual information. This explicitly separates task-relevant signal from noise and bias within the low-rank updates. Extensive experiments show that FVAE-LoRA consistently outperforms standard LoRA on downstream tasks across text, audio, and image domains, and that it generalizes more robustly under distribution shift.

📝 Abstract
Low-rank adaptation (LoRA) is a widely used method for parameter-efficient finetuning. However, existing LoRA variants lack mechanisms to explicitly disambiguate task-relevant information within the learned low-rank subspace, potentially limiting downstream performance. We propose Factorized Variational Autoencoder LoRA (FVAE-LoRA), which leverages a VAE to learn two distinct latent spaces. Our novel Evidence Lower Bound formulation explicitly promotes factorization between the latent spaces, dedicating one latent space to task-salient features and the other to residual information. Extensive experiments on text, audio, and image tasks demonstrate that FVAE-LoRA consistently outperforms standard LoRA. Moreover, spurious correlation evaluations confirm that FVAE-LoRA better isolates task-relevant signals, leading to improved robustness under distribution shifts. Our code is publicly available at: https://github.com/idiap/FVAE-LoRA
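
As background for the abstract above, the standard LoRA update that FVAE-LoRA builds on can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's code; the shapes, the `alpha / r` scaling, and the zero initialization of `B` follow the usual LoRA recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 32, 4, 8.0  # rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero init)

def lora_forward(x):
    """Frozen path plus the scaled low-rank update (alpha / r) * B @ A."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
y = lora_forward(x)
# With B initialized to zero, the adapted layer initially matches the frozen one.
assert np.allclose(y, x @ W.T)
```

Only `A` and `B` are trained, which is what makes the method parameter-efficient; FVAE-LoRA changes how the information inside this low-rank update is organized, not the overall parameter budget.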
Problem

Research questions and friction points this paper is trying to address.

Factorizing latent spaces to disambiguate task-relevant information
Separating task-salient features from residual information in LoRA
Improving robustness by isolating task-relevant signals under distribution shifts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses VAE to learn two distinct latent spaces
Formulates ELBO to promote latent space factorization
Separates task-salient features from residual information
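
The bullets above can be sketched as a toy objective. This is a hedged illustration only: the actual FVAE-LoRA ELBO is the paper's contribution, and the `cross_cov_penalty` term plus the weights `beta` and `gamma` are hypothetical stand-ins for whatever factorization term the paper uses.

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def cross_cov_penalty(z_task, z_res):
    """Hypothetical factorization term: penalize the cross-covariance between
    the task-salient latent z_task and the residual latent z_res."""
    zt = z_task - z_task.mean(axis=0)
    zr = z_res - z_res.mean(axis=0)
    c = zt.T @ zr / len(zt)
    return np.sum(c**2)

def sketch_elbo(x, x_hat, mu_t, lv_t, mu_r, lv_r, z_t, z_r, beta=1.0, gamma=1.0):
    """Toy two-latent objective: reconstruction + KL for both latents,
    plus a penalty that pushes the two latent spaces apart."""
    recon = np.sum((x - x_hat) ** 2, axis=-1)  # Gaussian reconstruction term
    kl = kl_standard_normal(mu_t, lv_t) + kl_standard_normal(mu_r, lv_r)
    return np.mean(recon + beta * kl) + gamma * cross_cov_penalty(z_t, z_r)

# Tiny smoke run with perfect reconstruction and standard-normal posteriors.
rng = np.random.default_rng(1)
n, d, k = 16, 8, 3
x = rng.standard_normal((n, d))
mu_t = np.zeros((n, k)); lv_t = np.zeros((n, k))
mu_r = np.zeros((n, k)); lv_r = np.zeros((n, k))
z_t = rng.standard_normal((n, k)); z_r = rng.standard_normal((n, k))
loss = sketch_elbo(x, x, mu_t, lv_t, mu_r, lv_r, z_t, z_r)
```

The key design idea is that minimizing the factorization term discourages the residual latent from encoding information already carried by the task-salient latent, which is what lets one space specialize on the downstream signal.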