Code Simulation as a Proxy for High-order Tasks in Large Language Models

📅 2025-02-05
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can reliably perform higher-order reasoning tasks—such as planning and problem solving—via code-based simulation. Method: The authors introduce the first systematic mapping of programming constructs (e.g., straight-line programs, nested loops, critical paths) to natural-language reasoning proxy tasks, enabling a controllable, interpretable synthetic benchmark. Their approach integrates synthetic data generation, program-structure-aware modeling, and fine-grained evaluation of stepwise execution capability. Contribution/Results: Experiments reveal that state-of-the-art LLMs exhibit strong sequential execution proficiency but suffer from brittle generalization, with performance heavily influenced by training-data memorization and superficial pattern matching rather than genuine symbolic manipulation. The study proposes a scalable synthetic evaluation paradigm that exposes fundamental limitations in LLMs’ execution fidelity, establishing a novel benchmark and analytical framework for assessing higher-order reasoning.
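The summary describes mapping programming constructs to controllable execution probes with known ground truth. As a rough sketch of what one such item might look like (the generator, variable names, and operation set below are hypothetical illustrations, not the paper's benchmark code), the following Python builds a straight-line program and records the state after every instruction:

```python
import random

def make_straight_line_program(n_steps=5, seed=0):
    """Generate a toy straight-line program (no branches, no loops) together
    with its ground-truth step-by-step trace, the kind of item one could use
    to probe an LLM's instruction-by-instruction execution."""
    rng = random.Random(seed)
    env = {"x": rng.randint(0, 9), "y": rng.randint(0, 9)}
    lines = [f"x = {env['x']}", f"y = {env['y']}"]
    trace = [dict(env)]  # state after the initial assignments
    for _ in range(n_steps):
        tgt, src = rng.choice(["x", "y"]), rng.choice(["x", "y"])
        op, fn = rng.choice([("+", lambda a, b: a + b),
                             ("*", lambda a, b: a * b)])
        const = rng.randint(1, 5)
        lines.append(f"{tgt} = {src} {op} {const}")
        env[tgt] = fn(env[src], const)
        trace.append(dict(env))  # state after this instruction
    return "\n".join(lines), trace

program, trace = make_straight_line_program()
prompt = ("Execute this program line by line and report x and y "
          f"after every line:\n{program}")
# `trace` holds the reference states used to score the model's stepwise answer.
```

Because the generator is seeded, items like this can be produced at scale with known answers, which is the kind of controllability the summary attributes to the synthetic setup.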

📝 Abstract
Many reasoning, planning, and problem-solving tasks share an intrinsic algorithmic nature: correctly simulating each step is a sufficient condition to solve them correctly. We collect pairs of naturalistic and synthetic reasoning tasks to assess the capabilities of Large Language Models (LLMs). While naturalistic tasks often require careful human handcrafting, we show that synthetic data is, in many cases, a good proxy that is much easier to collect at scale. We leverage common constructs in programming as counterparts of the building blocks of naturalistic reasoning tasks, such as straight-line programs, code that contains critical paths, and approximate and redundant instructions. We further assess the capabilities of LLMs on sorting problems and repeated operations via sorting algorithms and nested loops. Our synthetic datasets further reveal that while the most powerful LLMs exhibit relatively strong execution capabilities, the process is fragile: it is negatively affected by memorisation and seems to rely heavily on pattern recognition. Our contribution builds upon synthetically testing the reasoning capabilities of LLMs as a scalable complement to handcrafted human-annotated problems.
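The abstract also mentions testing repeated operations via nested loops and sorting. In the same hedged spirit (a hypothetical generator, not the authors' own), a nested-loop probe can be built so that its printed value is known analytically and can be used to score the model's simulated execution:

```python
import random

def make_nested_loop_task(seed=0, max_bound=4):
    """Build a toy nested-loop program plus its expected output, a 'repeated
    operations' probe in the spirit of the abstract; the bounds and template
    are illustrative, not the paper's exact generator."""
    rng = random.Random(seed)
    outer, inner = rng.randint(2, max_bound), rng.randint(2, max_bound)
    step = rng.randint(1, 3)
    program = (
        "total = 0\n"
        f"for i in range({outer}):\n"
        f"    for j in range({inner}):\n"
        f"        total += {step}\n"
        "print(total)"
    )
    expected = outer * inner * step  # ground truth, computed without the model
    return program, expected

program, expected = make_nested_loop_task()
# Ask the LLM for the printed value and compare its answer against `expected`.
```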
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM reasoning with synthetic tasks
Exploring code constructs as reasoning proxies
Investigating LLM fragility in task execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic data as reasoning task proxy
Programming constructs simulate naturalistic tasks
Assessing LLMs with sorting and loops