🤖 AI Summary
This work addresses the significant performance gaps of current large reasoning models in multilingual settings, where English-centric reasoning patterns are often erroneously imposed on other languages. The authors propose a de-anglocentric approach that first defines measurable multilingual reasoning features and analyzes their association with answer accuracy via logistic regression. They then employ sparse autoencoders to uncover language-specific latent reasoning concepts, which inform a test-time path selection strategy. Experiments across two mathematical reasoning benchmarks, four models, and ten languages reveal that while most reasoning features correlate positively with accuracy, their strength, and sometimes even direction, varies substantially across languages. These findings provide an empirical foundation for developing language-adapted evaluation frameworks and reward mechanisms.
📝 Abstract
Large Reasoning Models (LRMs) still exhibit large performance gaps between English and other languages, yet much current work assumes these gaps can be closed simply by making reasoning in every language resemble English reasoning. We challenge this assumption by asking instead: what actually characterizes effective reasoning in multilingual settings, and to what extent do English-derived reasoning features genuinely help in other languages? We first define a suite of measurable reasoning features spanning the multilingual-alignment, reasoning-step, and reasoning-flow aspects of reasoning traces, and use logistic regression to quantify how each feature associates with final answer accuracy. We further train sparse autoencoders over multilingual traces to automatically discover latent reasoning concepts that instantiate or extend these features. Finally, we use the features as test-time selection policies to examine whether they can steer models toward stronger multilingual reasoning. Across two mathematical reasoning benchmarks, four LRMs, and ten languages, we find that most features are positively associated with accuracy, but the strength of association varies considerably across languages and can even reverse in some. Our findings challenge English-centric reward designs and point toward adaptive objectives that accommodate language-specific reasoning patterns, with concrete implications for multilingual benchmark and reward design.
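The feature-association step described above can be sketched in miniature. This is an illustrative example only, not the paper's code: the feature names and simulated data are hypothetical, and it simply shows how a logistic regression over binary trace-level features yields per-feature coefficients whose sign and magnitude indicate the direction and strength of association with answer correctness.

```python
# Hedged sketch: associating reasoning-trace features with answer accuracy
# via logistic regression. All feature names and data below are invented
# for illustration; the paper fits such models per language.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: each row is one reasoning trace; columns are binary features
# (e.g., "answer language matches prompt", "trace contains a verification step").
n = 500
X = rng.integers(0, 2, size=(n, 2)).astype(float)

# Simulate a positive association for feature 0 and no effect for feature 1.
logits = -0.5 + 1.5 * X[:, 0]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)  # 1 = correct

clf = LogisticRegression().fit(X, y)
# Coefficient sign/magnitude ~ direction/strength of each feature's association.
print(dict(zip(["lang_match", "has_verification"], clf.coef_[0])))
```

Fitting one such model per language, as the abstract describes, is what exposes that a feature's coefficient can shrink or even flip sign across languages.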