Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of inefficient collaboration between large language models (LLMs) and cross-modal text recognition systems (e.g., ASR, OCR). The authors propose Generative Fusion Decoding (GFD), a shallow, plug-and-play integration method that aligns LLMs and recognition models via byte-level token-space mapping, requiring no fine-tuning or retraining. Its core contribution is a zero-shot fusion paradigm grounded in byte-space alignment, which lets the LLM perform real-time, context-aware error correction during decoding. Experiments demonstrate state-of-the-art ASR performance on NTUML2021, concurrent improvements in OCR accuracy, and reduced end-to-end latency. Notably, GFD improves robustness in long-form speech recognition and instruction-driven tasks, validating its effectiveness in practical multimodal inference scenarios.

๐Ÿ“ Abstract
We introduce "Generative Fusion Decoding" (GFD), a novel shallow fusion framework used to integrate Large Language Models (LLMs) into multi-modal text recognition systems such as automatic speech recognition (ASR) and optical character recognition (OCR). We derive the formulas necessary to enable GFD to operate across mismatched token spaces of different models by mapping text token space to byte token space, enabling seamless fusion during the decoding process. The framework is plug-and-play, compatible with various auto-regressive models, and does not require re-training for feature alignment, thus overcoming limitations of previous fusion techniques. We highlight three main advantages of GFD: First, by simplifying the complexity of aligning different model sample spaces, GFD allows LLMs to correct errors in tandem with the recognition model, reducing computation latencies. Second, the in-context learning ability of LLMs is fully capitalized on by GFD, increasing robustness in long-form speech recognition and instruction-aware speech recognition. Third, GFD enables fusing recognition models deficient in Chinese text recognition with LLMs extensively trained on Chinese. Our evaluation demonstrates that GFD significantly improves performance in ASR and OCR tasks, with ASR reaching state-of-the-art on the NTUML2021 benchmark. GFD provides a significant step forward in model integration, offering a unified solution that could be widely applicable to leveraging existing pre-trained models through step-by-step fusion.
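The abstract describes shallow fusion carried out in a shared byte space: each model's token probabilities are mapped onto byte prefixes, and the decoder scores hypotheses with a weighted sum of byte-level log-likelihoods. A minimal runnable sketch of that fused scoring inside a toy byte-level beam search follows; the `recognizer_logprob` and `llm_logprob` functions and the weight `lam` are illustrative stand-ins, not the paper's actual models or formulas.

```python
# Toy illustration of byte-level shallow fusion (not the paper's code).
# In GFD, each model's token distribution is mapped into byte space so
# that a recognition model and an LLM with mismatched tokenizers can be
# fused during decoding; here both scorers are hypothetical stubs.

def recognizer_logprob(prefix: bytes) -> float:
    # Stand-in recognition model: prefers byte prefixes of b"hello".
    target = b"hello"
    return 0.0 if target.startswith(prefix) else -5.0 * len(prefix)

def llm_logprob(prefix: bytes) -> float:
    # Stand-in LLM scorer over the same byte prefix.
    target = b"hello"
    return 0.0 if target.startswith(prefix) else -3.0 * len(prefix)

def fused_score(prefix: bytes, lam: float = 0.3) -> float:
    # Shallow fusion: weighted sum of byte-level log-likelihoods.
    return recognizer_logprob(prefix) + lam * llm_logprob(prefix)

def beam_search_bytes(beam_width: int = 3, max_len: int = 5) -> bytes:
    # Beam search over raw bytes, ranking candidates by the fused score.
    beams = [(0.0, b"")]
    for _ in range(max_len):
        candidates = []
        for _, prefix in beams:
            for byte in range(97, 123):  # restrict to a-z for the toy
                extended = prefix + bytes([byte])
                candidates.append((fused_score(extended), extended))
        candidates.sort(key=lambda item: item[0], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][1]

print(beam_search_bytes())  # → b'hello'
```

Because scoring happens on byte prefixes rather than on either model's tokens, the two models never need a shared vocabulary, which is the property that makes the fusion plug-and-play.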
Problem

Research questions and friction points this paper is trying to address.

Integrate LLMs into cross-modal ASR and OCR systems
Enable fusion across mismatched token spaces via byte-level likelihood
Achieve plug-and-play compatibility with auto-regressive models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Fusion Decoding integrates LLMs for ASR and OCR
Byte-level likelihood enables cross-modal token space fusion
Plug-and-play design compatible with auto-regressive models
Chan-Jan Hsu
MediaTek Research
Yi-Chang Chen
MediaTek Research
Fengting Liao
MediaTek Research
Pei-Chen Ho
Internship at MediaTek Research
Yu-Hsiang Wang
Internship at MediaTek Research
Po-Chun Hsu
The University of Chicago
Da-shan Shiu
MediaTek