AI Summary
Multimodal machine translation (MMT) faces two key challenges: visual redundancy introducing noise, and difficulty in aligning image regions with linguistic content. To address these, we propose a language-guided visual scene graph pruning mechanism. First, an input image is parsed into a scene graph; then, the source sentence's semantic representation drives structural pruning to precisely remove irrelevant nodes. Finally, a graph neural network integrated with attentional gating fuses the pruned graph with textual features, ensuring semantic consistency and visual concision. This work is the first to introduce language-driven scene graph pruning into MMT, overcoming the redundancy limitations inherent in conventional global or region-based visual feature fusion. Our method achieves significant improvements over state-of-the-art approaches on benchmarks including Multi30K. Ablation studies demonstrate that the pruning module alone boosts BLEU by 1.8 points, confirming the critical role of visual concision in enhancing translation accuracy and robustness.
Abstract
Multimodal machine translation (MMT) seeks to address the challenges posed by linguistic polysemy and ambiguity in translation tasks by incorporating visual information. A key bottleneck in current MMT research is the effective utilization of visual data. Previous approaches have focused on extracting global or region-level image features and using attention or gating mechanisms for multimodal information fusion. However, these methods have not adequately tackled the issue of visual information redundancy in MMT, nor have they proposed effective solutions. In this paper, we introduce a novel approach: multimodal machine translation with visual Scene Graph Pruning (PSG), which leverages language scene graph information to guide the pruning of redundant nodes in visual scene graphs, thereby reducing noise in downstream translation tasks. Through extensive comparative experiments with state-of-the-art methods and ablation studies, we demonstrate the effectiveness of the PSG model. Our results also highlight the promising potential of visual information pruning in advancing the field of MMT.
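The language-guided pruning step described above can be illustrated with a minimal sketch: score each visual scene-graph node against the source sentence's embedding and discard low-relevance nodes, then drop edges that touch removed nodes. All function names, the cosine-similarity scoring, and the `keep_ratio` threshold here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of language-guided scene-graph pruning.
# Assumption: node and sentence embeddings are plain float vectors;
# relevance is measured by cosine similarity (a stand-in for the
# paper's learned language-guided scoring).
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def prune_scene_graph(nodes, edges, sentence_emb, keep_ratio=0.5):
    """Prune a visual scene graph using the sentence embedding.

    nodes: {node_id: embedding vector}
    edges: [(src_id, dst_id)]
    Keeps the top keep_ratio fraction of nodes by similarity to the
    sentence, then removes edges incident to any pruned node.
    """
    ranked = sorted(nodes, key=lambda n: cosine(nodes[n], sentence_emb),
                    reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    kept = set(ranked[:k])
    pruned_edges = [(s, d) for s, d in edges if s in kept and d in kept]
    return kept, pruned_edges
```

In the full model, the pruned graph would then be fed to the graph neural network with attentional gating for fusion with the textual features; this sketch covers only the pruning stage.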