AI Summary
Existing multi-domain LoRA architectures suffer from representational coupling between shared and domain-specific adapters, undermining domain specificity. To address this, we propose a subspace orthogonalization constraint that strictly confines the shared LoRA to the column space of the pre-trained weight matrix while constraining domain-specific LoRAs to its left null space, achieving geometric decoupling. Our method integrates low-rank adaptation with matrix subspace decomposition and is evaluated on joint action-recognition training across UCF101, Kinetics-400, and HMDB51. Experiments demonstrate improvements in multi-domain generalization and domain discriminability. Furthermore, dimensionality analysis of the LoRA weights reveals a more interpretable and well-defined representational division of labor between the shared and domain-specific components. This work is the first to enforce strict subspace orthogonality between shared and domain-specific LoRA modules, enhancing both parameter efficiency and domain-aware representation learning.
Abstract
Existing multi-domain learning architectures employ two types of adapters: a shared LoRA used across all domains and a domain-specific LoRA for each particular domain. However, it remains unclear whether this structure effectively captures domain-specific information. In this paper, we propose a method that ensures the shared and domain-specific LoRAs lie in different subspaces; specifically, the column space and left null space of the pre-trained weights. We apply the proposed method to action recognition on three datasets (UCF101, Kinetics-400, and HMDB51) and demonstrate its effectiveness in some cases, along with an analysis of the dimensions of the LoRA weights.
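The geometric decoupling described above can be sketched in a few lines of linear algebra. The snippet below is a minimal illustration, not the paper's implementation: it assumes the constraint is realized by projecting the output-side LoRA factor B onto the column space of the pre-trained weight W (for the shared adapter) or onto its orthogonal complement, the left null space (for a domain-specific adapter). All variable names and shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 32, 16, 4  # hypothetical layer and LoRA rank sizes

# Pre-trained weight and an orthonormal basis of its column space via SVD
W = rng.standard_normal((d_out, d_in))
U, S, _ = np.linalg.svd(W, full_matrices=False)
k = int(np.sum(S > 1e-8))     # numerical rank of W
U_k = U[:, :k]                # basis of col(W)

P_col = U_k @ U_k.T                 # projector onto the column space
P_null = np.eye(d_out) - P_col      # projector onto the left null space

# LoRA update dW = B @ A; confine B to the appropriate subspace
B_shared = P_col @ rng.standard_normal((d_out, r))   # shared adapter in col(W)
B_domain = P_null @ rng.standard_normal((d_out, r))  # domain adapter in left null space
A = rng.standard_normal((r, d_in))

dW_shared = B_shared @ A
dW_domain = B_domain @ A

# The two updates occupy orthogonal output subspaces: B_shared^T B_domain = 0
print(np.allclose(B_shared.T @ B_domain, 0.0))
```

Because the two projectors satisfy P_col P_null = 0, any shared update and any domain-specific update are orthogonal by construction, which is the "strict subspace orthogonality" the abstract refers to; in practice the projection would be applied during training rather than once at initialization.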