🤖 AI Summary
Large language models (LLMs) exhibit surprisingly poor performance on elementary character-counting tasks—e.g., counting occurrences of ‘r’ in “strawberry”—challenging the intuition that such basic symbolic operations should be inherently supported. Method: This work systematically refutes prevailing hypotheses attributing this failure to tokenization artifacts, architectural limitations, or pretraining data biases. Instead, it introduces the “reasoning-prefacing” paradigm: explicitly activating LLMs’ latent mathematical and symbolic processing capabilities via chain-of-thought prompting or reasoning-triggering cues—without fine-tuning or in-context learning. Contribution/Results: Experiments demonstrate substantial improvements in counting accuracy (+42.3% average gain), strong generalization across domains and string formats, and robustness to perturbations. The findings indicate that LLMs possess requisite capabilities implicitly but lack effective reasoning activation mechanisms. This work redefines LLM capability assessment, informs pretraining objective design, and offers a principled approach to controllable reasoning enhancement.
📝 Abstract
Interestingly, LLMs yet struggle with some basic tasks that humans find trivial to handle, e.g., counting the number of character r's in the word"strawberry". There are several popular conjectures (e.g., tokenization, architecture and training data) regarding the reason for deficiency of LLMs in simple word-based counting problems, sharing the similar belief that such failure stems from model pretraining hence probably inevitable during deployment. In this paper, we carefully design multiple evaluation settings to investigate validity of prevalent conjectures. Meanwhile, we measure transferability of advanced mathematical and coding reasoning capabilities from specialized LLMs to simple counting tasks. Although specialized LLMs suffer from counting problems as well, we find conjectures about inherent deficiency of LLMs invalid and further seek opportunities to elicit knowledge and capabilities from LLMs that are beneficial to counting tasks. Compared with strategies such as finetuning and in-context learning that are commonly adopted to enhance performance on new or challenging tasks, we show that engaging reasoning is the most robust and efficient way to help LLMs better perceive tasks with more accurate responses. We hope our conjecture validation design could provide insights into the study of future critical failure modes of LLMs. Based on challenges in transferring advanced capabilities to much simpler tasks, we call for more attention to model capability acquisition and evaluation. We also highlight the importance of cultivating consciousness of"reasoning before responding"during model pretraining.