🤖 AI Summary
Existing LLM-based speech translation systems primarily focus on input-output modality alignment, neglecting deep semantic consistency between internal speech and text representations.
Method: We propose an adaptive inner-layer speech-text alignment method that explicitly models cross-modal semantic consistency within the hidden layers of an LLM. By integrating optimal transport (OT) theory with cross-modal retrieval, we design a hidden-layer selection mechanism that dynamically identifies the layers best suited for alignment and jointly optimizes them, enabling fine-grained, adaptive alignment of internal representations.
Contribution/Results: Our approach significantly improves the performance of large speech-to-text models (LSMs) on speech translation tasks, consistently outperforming current state-of-the-art methods across multiple benchmarks. The OT-guided layer selection enables principled, interpretable, and task-aware alignment without architectural modifications or additional inference latency.
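To make the OT component concrete, here is a minimal sketch of how an entropy-regularized OT (Sinkhorn) cost could quantify the discrepancy between speech and text hidden states at a given layer. This is an illustration under stated assumptions, not the paper's implementation: the function name, the squared-Euclidean cost, and the hyperparameters `reg` and `n_iters` are all illustrative choices.

```python
import numpy as np

def sinkhorn_distance(X, Y, reg=0.1, n_iters=200):
    """Entropy-regularized OT (Sinkhorn) cost between two point clouds.

    X: (m, d) speech hidden states at one layer (m frames).
    Y: (n, d) text hidden states at the same layer (n tokens).
    Returns the transport cost under the regularized optimal plan;
    a smaller value indicates closer speech/text representations.
    """
    m, n = X.shape[0], Y.shape[0]
    # Pairwise squared-Euclidean cost matrix between frames and tokens.
    C = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    C = np.maximum(C, 0.0)
    # Uniform marginals: every frame and every token carries equal mass.
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    # Sinkhorn iterations on the Gibbs kernel K = exp(-C / reg).
    K = np.exp(-C / reg)
    u = np.ones(m)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # (m, n) transport plan
    return float(np.sum(P * C))
```

In this sketch, the OT cost gives a single scalar per layer, so comparing it across layers (or minimizing it as a training loss) is straightforward; in practice a numerically stabilized log-domain Sinkhorn would be preferable for large cost values.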
📝 Abstract
Recent advancements in large language models (LLMs) have led to significant breakthroughs across various tasks, laying the foundation for the development of LLM-based speech translation systems. Existing methods primarily focus on aligning inputs and outputs across modalities while overlooking deeper semantic alignment within model representations. To address this limitation, we propose an Adaptive Inner Speech-Text Alignment (AI-STA) method to bridge the modality gap by explicitly aligning speech and text representations at selected layers within LLMs. To achieve this, we leverage optimal transport (OT) theory to quantify fine-grained representation discrepancies between speech and text. Furthermore, we utilize a cross-modal retrieval technique to identify the layers that are best suited for alignment and perform joint training on these layers. Experimental results on speech translation (ST) tasks demonstrate that AI-STA significantly improves the translation performance of large speech-text models (LSMs), outperforming previous state-of-the-art approaches. Our findings highlight the importance of inner-layer speech-text alignment in LLMs and provide new insights into enhancing cross-modal learning.
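The layer-selection step described above can be sketched as a cross-modal retrieval probe: for each layer, score how well mean-pooled speech embeddings retrieve their paired text embeddings, then pick the top-scoring layers for joint training. This is a hypothetical illustration, not the paper's code; the function names, the cosine-similarity top-1 criterion, and the parameter `k` are all assumptions.

```python
import numpy as np

def retrieval_accuracy(S, T):
    """Top-1 speech-to-text retrieval accuracy for paired embeddings.

    S: (n, d) mean-pooled speech embeddings at one layer.
    T: (n, d) mean-pooled text embeddings at the same layer;
       row i of T is the transcript paired with row i of S.
    """
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    sims = S @ T.T                    # (n, n) cosine-similarity matrix
    preds = np.argmax(sims, axis=1)   # nearest text item for each speech item
    return float(np.mean(preds == np.arange(len(S))))

def select_alignment_layers(layer_S, layer_T, k=2):
    """Rank layers by retrieval accuracy; return top-k indices and all scores.

    layer_S, layer_T: lists of per-layer embedding matrices, one pair per layer.
    """
    scores = [retrieval_accuracy(S, T) for S, T in zip(layer_S, layer_T)]
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:k]], scores
```

A layer where paired speech and text land near each other yields high retrieval accuracy and would be selected for alignment training, while layers with near-chance accuracy would be skipped.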