Evaluating the Formal Reasoning Capabilities of Large Language Models through Chomsky Hierarchy

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models lack systematic evaluation grounded in computational complexity theory, making it difficult to assess their grasp of the hierarchical structure of formal languages. This work proposes ChomskyBench, the first benchmark to cover the full spectrum of the Chomsky hierarchy, which systematically evaluates models' formal reasoning by testing language recognition and generation at every hierarchy level while integrating natural-language process tracing with symbolically verifiable checking. Experiments reveal a marked decline in performance as language complexity increases; larger models and advanced reasoning techniques yield modest improvements but incur substantial computational cost and remain far less efficient than classical algorithms, indicating that the current bottleneck stems from inefficiency rather than a fundamental capability limit.
📝 Abstract
The formal reasoning capabilities of LLMs are crucial for advancing automated software engineering. However, existing benchmarks lack systematic evaluation grounded in computability and complexity theory, leaving a critical gap in our understanding of LLMs' formal reasoning. It therefore remains unknown whether state-of-the-art LLMs can grasp the structured, hierarchical complexity of formal languages as defined by the theory of computation. To address this, we introduce ChomskyBench, a benchmark for systematically evaluating LLMs through the lens of the Chomsky hierarchy. Unlike prior work that uses vectorized classification for neural networks, ChomskyBench is the first to combine full coverage of the Chomsky hierarchy, process-trace evaluation via natural language, and deterministic symbolic verifiability. It comprises a comprehensive suite of language recognition and generation tasks designed to test capabilities at each level of the hierarchy. Extensive experiments show a clear performance stratification that correlates with the hierarchy's levels of complexity, and our analysis reveals a direct relationship: increasing task difficulty substantially lengthens inference and degrades performance. Furthermore, while larger models and advanced inference methods offer notable relative gains, they face severe efficiency barriers: achieving practical reliability would require prohibitive computational cost, indicating that current limitations stem from inefficiency rather than absolute capability bounds. A time-complexity analysis further shows that LLMs are significantly less efficient than traditional algorithmic programs on these formal tasks. These results delineate the practical limits of current LLMs, highlight the continued indispensability of traditional software tools, and offer insights to guide the development of future LLMs with stronger formal reasoning capabilities.
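To make the notion of "language recognition tasks at each hierarchy level" concrete, here is a minimal, hypothetical sketch of what such tasks look like. These sample languages (a*b* for regular, aⁿbⁿ for context-free, aⁿbⁿcⁿ for context-sensitive) are standard textbook exemplars of the respective Chomsky levels and are our illustration, not the actual ChomskyBench task set:

```python
import re

def is_regular(s: str) -> bool:
    """Type-3 (regular): strings of the form a*b*, decidable by a finite automaton."""
    return re.fullmatch(r"a*b*", s) is not None

def is_context_free(s: str) -> bool:
    """Type-2 (context-free): a^n b^n, which needs a stack (pushdown automaton)."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s == "a" * half + "b" * half

def is_context_sensitive(s: str) -> bool:
    """Type-1 (context-sensitive): a^n b^n c^n, beyond any pushdown automaton."""
    k, rem = divmod(len(s), 3)
    return rem == 0 and s == "a" * k + "b" * k + "c" * k

# A recognition task asks the model to classify membership, e.g.:
# is_regular("aabb") -> True, is_context_free("aab") -> False,
# is_context_sensitive("abc") -> True
```

Each of these checks runs in linear time, which illustrates the paper's efficiency point: a few lines of classical code decide membership exactly, whereas an LLM must spend many inference tokens per string.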
Problem

Research questions and friction points this paper is trying to address.

formal reasoning
Chomsky Hierarchy
large language models
computational complexity
language recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chomsky Hierarchy
formal reasoning
language model benchmarking
process-trace evaluation
symbolic verifiability