Col-OLHTR: A Novel Framework for Multimodal Online Handwritten Text Recognition

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing OLHTR methods face a dilemma: single-encoder architectures struggle to jointly capture local trajectory patterns and global spatial structures, while multi-stream models incur prohibitive inference overhead. To address this, we propose a unified single-stream framework based on collaborative learning, centered on the Point-to-Spatial Alignment (P2SA) module. During training, P2SA leverages auxiliary image-stream supervision to explicitly align trajectory representations with spatial features; at inference, only the lightweight P2SA module is retained—enabling multimodal modeling without sacrificing efficiency. The framework comprises a trajectory encoder, the P2SA module (which fuses trajectory features with 2D rotational positional encoding), and an attention-based decoder. It is trained in a dual-stream fashion but operates end-to-end as a single stream during inference. Our method achieves state-of-the-art performance across multiple OLHTR benchmarks, significantly improving robustness on complex samples while reducing both model parameters and inference latency.
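The train/inference asymmetry described above can be illustrated with a toy objective. Everything below is an assumption for illustration only: the function names, the choice of cross-entropy as the recognition loss, the L2 alignment term, and the weighting are not taken from the paper, which does not publish this code.

```python
import numpy as np

def collaborative_loss(traj_logits, img_logits, p2sa_feats, img_feats,
                       targets, align_weight=1.0):
    """Toy collaborative objective (illustrative, not the paper's exact loss):
    each stream gets a recognition loss (cross-entropy here), plus an L2
    alignment term pulling the trajectory stream's P2SA features toward the
    auxiliary image stream's features."""
    def xent(logits, y):
        # Numerically stable softmax cross-entropy over the class axis.
        z = logits - logits.max(axis=-1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
        return -np.mean(logp[np.arange(len(y)), y])
    l_align = np.mean((p2sa_feats - img_feats) ** 2)
    return xent(traj_logits, targets) + xent(img_logits, targets) \
        + align_weight * l_align

def inference_forward(traj_logits):
    """At inference only the single trajectory stream (with P2SA) remains;
    the image encoder-decoder used for supervision is discarded."""
    return traj_logits.argmax(axis=-1)
```

The point of the sketch is the asymmetry: the alignment term and the second recognition loss exist only at training time, so inference pays no cost for the extra stream.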

📝 Abstract
Online Handwritten Text Recognition (OLHTR) has gained considerable attention for its diverse range of applications. Current approaches usually treat OLHTR as a sequence recognition task, employing either a single trajectory or image encoder, or multi-stream encoders, combined with a CTC or attention-based recognition decoder. However, these approaches face several drawbacks: 1) single encoders typically focus on either local trajectories or visual regions, lacking the ability to dynamically capture relevant global features in challenging cases; 2) multi-stream encoders, while more comprehensive, suffer from complex structures and increased inference costs. To tackle this, we propose a collaborative learning-based OLHTR framework, called Col-OLHTR, that learns multimodal features during training while maintaining a single-stream inference process. Col-OLHTR consists of a trajectory encoder, a Point-to-Spatial Alignment (P2SA) module, and an attention-based decoder. The P2SA module is designed to learn image-level spatial features from trajectory-encoded features and 2D rotary position embeddings. During training, an additional image-stream encoder-decoder is collaboratively trained to provide supervision for P2SA features. At inference, the extra streams are discarded, and only the P2SA module is retained and merged before the decoder, simplifying the process while preserving high performance. Extensive experimental results on several OLHTR benchmarks demonstrate state-of-the-art (SOTA) performance, confirming the effectiveness and robustness of our design.
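The abstract's 2D rotary position embeddings can be sketched as follows. This is a minimal, assumed variant: the paper does not specify its exact construction, so the split of channels between the x and y axes, the frequency base, and the function names here are illustrative. The common approach is to apply standard 1D rotary embedding to half of the channels using the x-coordinate and to the other half using the y-coordinate of each pen point.

```python
import numpy as np

def rope_1d(feats, pos, base=10000.0):
    """Standard rotary position embedding over one coordinate axis.

    feats: (n, d) with d even; pos: (n,) positions along one axis.
    Rotates consecutive channel pairs by angles pos / base**(2i/d).
    """
    n, d = feats.shape
    assert d % 2 == 0
    inv_freq = 1.0 / base ** (np.arange(0, d, 2) / d)  # (d/2,)
    angles = pos[:, None] * inv_freq[None, :]          # (n, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    f1, f2 = feats[:, 0::2], feats[:, 1::2]
    out = np.empty_like(feats)
    out[:, 0::2] = f1 * cos - f2 * sin
    out[:, 1::2] = f1 * sin + f2 * cos
    return out

def rope_2d(feats, xy):
    """Assumed 2D variant: encode the x-coordinate into the first half of
    the channels and the y-coordinate into the second half."""
    half = feats.shape[1] // 2
    return np.concatenate(
        [rope_1d(feats[:, :half], xy[:, 0]),
         rope_1d(feats[:, half:], xy[:, 1])], axis=1)
```

Because each update is a pure rotation of channel pairs, the embedding preserves feature norms and injects absolute pen-point position only through relative phase, which is what lets attention in the P2SA module relate points by their spatial offsets.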
Problem

Research questions and friction points this paper is trying to address.

Enhance multimodal online handwritten text recognition
Simplify complex encoder structures in OLHTR
Maintain single-stream inference with high performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative learning-based OLHTR framework
Point-to-Spatial Alignment module
Single-stream inference process
Authors
Chenyu Liu
University of Science and Technology of China, Hefei, China; iFLYTEK Research, Hefei, China
Jinshui Hu
iFLYTEK Research, Hefei, China
Baocai Yin
Unknown affiliation
Jia Pan
iFLYTEK Research, Hefei, China
Bing Yin
Amazon.com
Jun Du
University of Science and Technology of China, Hefei, China; iFLYTEK Research, Hefei, China
Qingfeng Liu
Professor, Hosei University