AgentCoMa: A Compositional Benchmark Mixing Commonsense and Mathematical Reasoning in Real-World Scenarios

📅 2025-08-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing compositional reasoning benchmarks predominantly target either commonsense or mathematical reasoning in isolation, failing to assess large language models' (LLMs) ability to jointly leverage both reasoning modalities, which is critical for real-world tasks. Method: We introduce AgentCoMa, the first hybrid compositional benchmark explicitly designed to evaluate combined commonsense and mathematical reasoning, with tasks that require one step of each type. Contribution/Results: Evaluating 61 LLMs spanning diverse scales, architectures, and training strategies, we observe an average ~30% accuracy drop on mixed-reasoning tasks, substantially exceeding the performance gaps seen on single-modality compositional benchmarks and revealing systematic cross-modal reasoning fragility. In contrast, non-expert human annotators solve both the composed tasks and the individual steps with similarly high accuracy, confirming the benchmark's validity. Complementary interpretability analyses, including neuron activation patterns, attention maps, and membership inference, help pinpoint failure mechanisms. This work establishes a new evaluation paradigm and empirical foundation for studying multi-type reasoning integration.

📝 Abstract
Large Language Models (LLMs) have achieved high accuracy on complex commonsense and mathematical problems that involve the composition of multiple reasoning steps. However, current compositional benchmarks testing these skills tend to focus on either commonsense or math reasoning, whereas LLM agents solving real-world tasks would require a combination of both. In this work, we introduce an Agentic Commonsense and Math benchmark (AgentCoMa), where each compositional task requires a commonsense reasoning step and a math reasoning step. We test it on 61 LLMs of different sizes, model families, and training strategies. We find that LLMs can usually solve both steps in isolation, yet their accuracy drops by ~30% on average when the two are combined. This is a substantially greater performance gap than the one we observe in prior compositional benchmarks that combine multiple steps of the same reasoning type. In contrast, non-expert human annotators can solve the compositional questions and the individual steps in AgentCoMa with similarly high accuracy. Furthermore, we conduct a series of interpretability studies to better understand the performance gap, examining neuron patterns, attention maps and membership inference. Our work underscores a substantial degree of model brittleness in the context of mixed-type compositional reasoning and offers a test bed for future improvement.
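The ~30% drop reported above can be framed as a simple compositional-gap metric: the difference between a model's accuracy on the weaker of the two individual steps and its accuracy on the composed task. A minimal sketch, with purely illustrative accuracy values (not actual AgentCoMa results, and `compositional_gap` is a hypothetical helper, not from the paper):

```python
def compositional_gap(acc_commonsense: float, acc_math: float, acc_composed: float) -> float:
    """Gap between the weaker individual reasoning step and the composed task.

    A large positive gap means the model solves each step in isolation
    but fails when the steps must be chained.
    """
    return min(acc_commonsense, acc_math) - acc_composed


# Illustrative numbers only: high step-level accuracy, degraded composed accuracy.
gap = compositional_gap(acc_commonsense=0.92, acc_math=0.88, acc_composed=0.58)
print(f"compositional gap: {gap:.2f}")
```

Comparing this gap against the gap on same-type compositional benchmarks is how the paper isolates mixed-type brittleness from ordinary multi-step difficulty.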
Problem

Research questions and friction points this paper is trying to address.

Combining commonsense and mathematical reasoning in real-world scenarios
Testing LLMs on mixed-type compositional reasoning tasks
Addressing performance gap in multi-step reasoning accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed commonsense and math reasoning benchmark
Interpretability studies on neuron patterns
Performance gap analysis in compositional tasks