🤖 AI Summary
During multi-LoRA merging, semantic vectors interfere with one another, undermining composability. A prevalent misconception equates orthogonality with semantic decoupling. Method: We propose Orthogonal Monte Carlo Dropout (OMCD), the first method to achieve *strictly orthogonal*, sparse semantic vectors, guaranteed both theoretically and at runtime, without inference overhead. OMCD integrates LoRA fine-tuning, sparsity-inducing modeling, and explicit orthogonality constraints, leveraging Monte Carlo Dropout for efficient orthogonal merging. Results: Experiments show that OMCD significantly suppresses direct interference among merged modules, yet orthogonality alone proves insufficient for semantic composability, challenging the implicit “orthogonality implies decoupling” assumption in adapter fusion. This work offers new theoretical insights and practical guidance for designing composable adapters.
📝 Abstract
We propose Orthogonal Monte Carlo Dropout, a mechanism that enforces strict orthogonality when combining sparse semantic vectors, at no extra time complexity. LoRA, a popular fine-tuning method for large models, typically trains a module to represent a specific concept, such as an object or a style. When multiple LoRAs are merged, for example to generate an object in a particular style, their semantic vectors may interfere with each other. Our method guarantees, both theoretically and at runtime, that merged LoRAs remain orthogonal and thus free from direct interference. However, empirical analysis reveals that such orthogonality does not yield the semantic disentanglement or compositionality highlighted in prior work on compositional adaptation. This finding suggests that inter-LoRA orthogonality alone is insufficient for achieving true semantic compositionality, prompting a re-examination of its role in adapter merging.
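To make the orthogonality guarantee concrete: if two sparse vectors have disjoint supports, their inner product is exactly zero, so a sum-based merge introduces no direct cross-term between them. The sketch below illustrates this with dropout masks drawn over a random partition of coordinates. It is an illustrative toy, not the paper's actual OMCD implementation; the half-and-half mask partition is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # dimensionality of the flattened semantic vectors

# Two dense "semantic vectors" (e.g., flattened LoRA deltas).
v1 = rng.normal(size=d)
v2 = rng.normal(size=d)

# Hypothetical orthogonalization via disjoint dropout masks:
# randomly partition the coordinates and give each vector one
# half, so the surviving supports never overlap.
perm = rng.permutation(d)
mask1 = np.zeros(d)
mask1[perm[: d // 2]] = 1.0
mask2 = 1.0 - mask1

s1, s2 = v1 * mask1, v2 * mask2

# Disjoint supports make the inner product exactly zero,
# i.e., the sparse vectors are strictly orthogonal at runtime.
assert np.dot(s1, s2) == 0.0

# Merging by addition then has no direct interference term.
merged = s1 + s2
```

The merge itself is a plain elementwise sum, so the orthogonality check adds no inference overhead; the cost is that each vector keeps only a subset of coordinates, which is exactly the sparsity the summary refers to.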