AI Summary
Existing handwriting recognition systems predominantly rely on either offline images or online trajectories in isolation, neglecting the complementary information inherent in dual-modal data. This paper proposes an end-to-end Transformer architecture that achieves early fusion of offline image patches and online stroke sequences within a shared latent space, the first such approach. Specifically, a Patch Encoder extracts visual tokens from offline images, while a lightweight Transformer models the temporal dynamics of online trajectories; learnable latent queries drive cross-modal attention to jointly optimize the dual-stream representations. This design significantly improves contextual awareness and writer robustness. The method achieves state-of-the-art performance on IAMOn-DB and VNOn-DB, with up to 1.0% absolute accuracy gain, and demonstrates strong generalization on the challenging ISI-Air air-writing dataset.
Abstract
We posit that handwriting recognition benefits from complementary cues carried by the rasterized glyph and the pen's trajectory, yet most systems exploit only one modality. We introduce an end-to-end network that performs early fusion of offline images and online stroke data within a shared latent space. A patch encoder converts the grayscale crop into fixed-length visual tokens, while a lightweight transformer embeds the $(x, y, \text{pen})$ sequence. Learnable latent queries attend jointly to both token streams, yielding context-enhanced stroke embeddings that are pooled and decoded under a cross-entropy objective. Because integration occurs before any high-level classification, spatial and temporal cues reinforce each other during representation learning, producing stronger writer independence. Comprehensive experiments on IAMOn-DB and VNOn-DB demonstrate that our approach achieves state-of-the-art accuracy, exceeding previous bests by up to 1%. Our study also shows that this pipeline adapts, via gesturification, to the ISI-Air dataset. Our code is publicly available.
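The fusion step described above can be sketched in a few lines: a single-head dot-product attention stands in for the paper's cross-modal attention, and every dimension (16x16 crop, 4x4 patches, 20 stroke points, 8 latent queries, embedding size 16, 10 classes) is a hypothetical choice for illustration, not a value taken from the paper. Random matrices play the role of learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 16  # shared latent dimension (assumed)

# Offline branch: split a grayscale crop into 4x4 patches -> visual tokens.
image = rng.standard_normal((16, 16))
patches = image.reshape(4, 4, 4, 4).transpose(0, 2, 1, 3).reshape(16, 16)
W_img = rng.standard_normal((16, d)) / 4.0   # stand-in for the patch encoder
visual_tokens = patches @ W_img              # (16, d)

# Online branch: embed the (x, y, pen) point sequence -> stroke tokens.
strokes = rng.standard_normal((20, 3))
W_strk = rng.standard_normal((3, d)) / 2.0   # stand-in for the stroke transformer
stroke_tokens = strokes @ W_strk             # (20, d)

# Early fusion: learnable latent queries attend over BOTH token streams at once,
# so integration happens before any high-level classification.
latents = rng.standard_normal((8, d))                             # learnable queries
memory = np.concatenate([visual_tokens, stroke_tokens], axis=0)   # (36, d)
attn = softmax(latents @ memory.T / np.sqrt(d))                   # (8, 36)
fused = attn @ memory                                             # (8, d)

# Pool the fused embeddings and decode with a linear head; in training,
# a cross-entropy loss on these logits would drive both branches jointly.
pooled = fused.mean(axis=0)
W_cls = rng.standard_normal((d, 10))
probs = softmax(pooled @ W_cls)
```

In a trained model the projections would be learned end-to-end and the attention multi-headed, but the shape of the computation (two token streams, one shared memory, queries that mix them before pooling) is what distinguishes early fusion from late score averaging.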