Multimodal Machine Translation with Visual Scene Graph Pruning

πŸ“… 2025-05-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Multimodal machine translation (MMT) faces two key challenges: visual redundancy introducing noise and difficulty in aligning image regions with linguistic content. To address these, we propose a language-guided visual scene graph pruning mechanism. First, an input image is parsed into a scene graph; then, the source sentence’s semantic representation drives structural pruning to precisely remove irrelevant nodes. Finally, a graph neural network integrated with attentional gating fuses the pruned graph with textual features, ensuring semantic consistency and visual concision. This work is the first to introduce language-driven scene graph pruning into MMT, overcoming the redundancy limitations inherent in conventional global or region-based visual feature fusion. Our method achieves significant improvements over state-of-the-art approaches on benchmarks including Multi30K. Ablation studies demonstrate that the pruning module alone boosts BLEU by 1.8 points, confirming the critical role of visual concision in enhancing translation accuracy and robustness.

Technology Category

Application Category

πŸ“ Abstract
Multimodal machine translation (MMT) seeks to address the challenges posed by linguistic polysemy and ambiguity in translation tasks by incorporating visual information. A key bottleneck in current MMT research is the effective utilization of visual data. Previous approaches have focused on extracting global or region-level image features and using attention or gating mechanisms for multimodal information fusion. However, these methods have not adequately tackled the issue of visual information redundancy in MMT, nor have they proposed effective solutions. In this paper, we introduce a novel approach--multimodal machine translation with visual Scene Graph Pruning (PSG), which leverages language scene graph information to guide the pruning of redundant nodes in visual scene graphs, thereby reducing noise in downstream translation tasks. Through extensive comparative experiments with state-of-the-art methods and ablation studies, we demonstrate the effectiveness of the PSG model. Our results also highlight the promising potential of visual information pruning in advancing the field of MMT.
Problem

Research questions and friction points this paper is trying to address.

Addressing linguistic polysemy and ambiguity in multimodal machine translation
Reducing visual information redundancy in translation tasks
Improving multimodal data utilization via visual scene graph pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses visual Scene Graph Pruning (PSG)
Leverages language scene graph information
Reduces noise in translation tasks
πŸ”Ž Similar Papers
No similar papers found.
C
Chenyu Lu
East China Normal University
Shiliang Sun
Shiliang Sun
Shanghai Jiao Tong University
Machine LearningArtificial Intelligence
J
Jing Zhao
East China Normal University
N
Nan Zhang
Wenzhou University
Tengfei Song
Tengfei Song
Huawei
Emotion recognitionComputer visionGraph neural network
H
Hao Yang
Huawei Technologies Ltd.