Comprehensive Evaluation on Lexical Normalization: Boundary-Aware Approaches for Unsegmented Languages

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Lexical normalization for tokenization-free languages (e.g., Japanese) suffers from a lack of systematic evaluation protocols and strong baselines. Method: We introduce the first large-scale, multi-domain Japanese standardization benchmark; propose a boundary-aware normalization framework compatible with both encoder-only (e.g., BERT) and decoder-only (e.g., LLaMA) pretrained models, explicitly modeling word boundaries via joint sequence labeling and generative modeling; and design a unified, multi-granular evaluation suite assessing accuracy, efficiency, and robustness. Contributions/Results: Experiments show our approach achieves over 12% F1 improvement on cross-domain test sets—substantially outperforming prior methods—while striking a superior trade-off between precision and inference efficiency. This work establishes a reproducible, scalable new baseline for lexical normalization in tokenization-free languages.

Technology Category

Application Category

📝 Abstract
Lexical normalization research has sought to tackle the challenge of processing informal expressions in user-generated text, yet the absence of comprehensive evaluations leaves it unclear which methods excel across multiple perspectives. Focusing on unsegmented languages, we make three key contributions: (1) creating a large-scale, multi-domain Japanese normalization dataset, (2) developing normalization methods based on state-of-the-art pretrained models, and (3) conducting experiments across multiple evaluation perspectives. Our experiments show that both encoder-only and decoder-only approaches achieve promising results in both accuracy and efficiency.
Problem

Research questions and friction points this paper is trying to address.

Evaluating lexical normalization methods for unsegmented languages
Creating a multi-domain Japanese normalization dataset
Developing efficient normalization using pretrained models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale multi-domain Japanese dataset creation
State-of-the-art pretrained model normalization methods
Multi-perspective evaluation experiments
🔎 Similar Papers
No similar papers found.
S
S. Higashiyama
National Institute of Information and Communications Technology, Kyoto, Japan
Masao Utiyama
Masao Utiyama
NICT
Machine Translation