🤖 AI Summary
Existing compositional reasoning benchmarks predominantly target either commonsense or mathematical reasoning in isolation, failing to assess large language models' (LLMs) ability to jointly leverage both reasoning modalities, a capability critical for real-world tasks. Method: We introduce AgentCoMa, the first hybrid compositional benchmark explicitly designed to evaluate synergistic commonsense and mathematical reasoning through novel tasks requiring concurrent activation of both knowledge types. Contribution/Results: Evaluating 61 LLMs spanning diverse scales, architectures, and training strategies, we observe an average ~30% accuracy drop on mixed-reasoning tasks, substantially exceeding the performance gaps observed on single-modality compositional baselines and revealing a systematic cross-modal reasoning fragility. In contrast, non-expert human annotators maintain high accuracy, confirming that the tasks themselves are well-posed. Complementary interpretability analyses, including neuron activation patterns, attention maps, and membership inference, pinpoint the failure mechanisms. This work establishes a new evaluation paradigm and an empirical foundation for improving multi-type reasoning integration.
📄 Abstract
Large Language Models (LLMs) have achieved high accuracy on complex commonsense and mathematical problems that involve the composition of multiple reasoning steps. However, current compositional benchmarks testing these skills tend to focus on either commonsense or math reasoning, whereas LLM agents solving real-world tasks would require a combination of both. In this work, we introduce an Agentic Commonsense and Math benchmark (AgentCoMa), where each compositional task requires a commonsense reasoning step and a math reasoning step. We test it on 61 LLMs of different sizes, model families, and training strategies. We find that LLMs can usually solve both steps in isolation, yet their accuracy drops by ~30% on average when the two are combined. This is a substantially greater performance gap than the one we observe in prior compositional benchmarks that combine multiple steps of the same reasoning type. In contrast, non-expert human annotators can solve the compositional questions and the individual steps in AgentCoMa with similarly high accuracy. Furthermore, we conduct a series of interpretability studies to better understand the performance gap, examining neuron patterns, attention maps and membership inference. Our work underscores a substantial degree of model brittleness in the context of mixed-type compositional reasoning and offers a test bed for future improvement.
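The ~30% drop described above is a compositionality gap: the difference between a model's accuracy on the isolated reasoning steps and its accuracy on the composed question. A minimal sketch of how such a gap could be computed is below; the item wording, answers, and the toy `model` stub are invented for illustration and are not drawn from the benchmark itself.

```python
# Hypothetical sketch of a compositionality-gap evaluation in the style of
# AgentCoMa: score a model on commonsense steps and math steps in isolation,
# then on questions composing one step of each type, and report the drop.
# All item content and the `model` stub below are illustrative assumptions.

def accuracy(model, items):
    """Fraction of (question, answer) pairs the model answers correctly."""
    return sum(model(q) == a for q, a in items) / len(items)

def compositionality_gap(model, commonsense_items, math_items, composed_items):
    """Accuracy drop from the isolated steps to the mixed-type composition."""
    isolated = (accuracy(model, commonsense_items)
                + accuracy(model, math_items)) / 2
    return isolated - accuracy(model, composed_items)

# Toy model: answers the isolated steps correctly but fails the composition,
# mimicking the brittleness pattern the benchmark is designed to expose.
known_answers = {
    "Which room in a house is most likely to contain an oven?": "kitchen",
    "What is 3 * 40?": "120",
}
model = lambda question: known_answers.get(question, "unknown")

commonsense_items = [
    ("Which room in a house is most likely to contain an oven?", "kitchen"),
]
math_items = [
    ("What is 3 * 40?", "120"),
]
composed_items = [
    ("The oven is in a room with 3 cabinets holding 40 screws each. "
     "How many screws are in that room?", "120"),
]

gap = compositionality_gap(model, commonsense_items, math_items, composed_items)
print(gap)  # 1.0 for this toy model: perfect on steps, zero on composition
```

In this toy case the gap is maximal (1.0); the paper's reported figure corresponds to an average gap of roughly 0.3 across 61 models.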