🤖 AI Summary
Lexical normalization for unsegmented languages such as Japanese suffers from a lack of systematic evaluation protocols and strong baselines. Method: We introduce the first large-scale, multi-domain Japanese normalization benchmark; propose a boundary-aware normalization framework compatible with both encoder-only (e.g., BERT) and decoder-only (e.g., LLaMA) pretrained models, explicitly modeling word boundaries via joint sequence labeling and generative modeling; and design a unified, multi-granular evaluation suite assessing accuracy, efficiency, and robustness. Contributions/Results: Experiments show our approach achieves an F1 improvement of over 12% on cross-domain test sets, substantially outperforming prior methods, while achieving a better trade-off between accuracy and inference efficiency. This work establishes a reproducible, scalable new baseline for lexical normalization in unsegmented languages.
📝 Abstract
Lexical normalization research has sought to tackle the challenge of processing informal expressions in user-generated text, yet the absence of comprehensive evaluations leaves it unclear which methods excel across multiple perspectives. Focusing on unsegmented languages, we make three key contributions: (1) creating a large-scale, multi-domain Japanese normalization dataset, (2) developing normalization methods based on state-of-the-art pretrained models, and (3) conducting experiments across multiple evaluation perspectives. Our experiments show that both encoder-only and decoder-only approaches achieve promising results in both accuracy and efficiency.
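To make the boundary-aware sequence-labeling idea concrete, here is a minimal sketch of character-level normalization for an unsegmented language. The tag set (`B-`/`I-` boundary prefixes with `KEEP`/`REP`/`DEL` edit operations), the decoding function, and the example are illustrative assumptions for exposition, not the paper's actual scheme:

```python
# Minimal sketch of boundary-aware lexical normalization via
# character-level sequence labeling (illustrative; tag set and
# example are assumptions, not the paper's actual scheme).
# Each input character receives one tag:
#   "B-KEEP" / "I-KEEP"   : keep the character; B- marks a word boundary
#   "B-REP:x" / "I-REP:x" : replace the character with string x
#   "I-DEL"               : delete the character (e.g., vowel lengthening)

def decode(chars, tags):
    """Apply per-character edit tags and recover word boundaries."""
    words, current = [], []
    for ch, tag in zip(chars, tags):
        if tag.startswith("B-") and current:
            words.append("".join(current))  # boundary: close current word
            current = []
        op = tag.split("-", 1)[1]
        if op == "KEEP":
            current.append(ch)
        elif op.startswith("REP:"):
            current.append(op[len("REP:"):])
        # op == "DEL": emit nothing
    if current:
        words.append("".join(current))
    return words

# Informal "すげーー" normalized to standard "すごい" (sugoi), with the
# boundary before the following particle "よ" made explicit.
chars = list("すげーーよ")
tags = ["B-KEEP", "I-REP:ごい", "I-DEL", "I-DEL", "B-KEEP"]
print(decode(chars, tags))  # ['すごい', 'よ']
```

In this framing, a pretrained encoder would predict one tag per character, so normalization and word segmentation are learned jointly; a decoder-only model would instead generate the normalized, segmented string directly.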