🤖 AI Summary
This work addresses the lack of semantic invariance in large language models (LLMs), which often produce inconsistent outputs under semantically equivalent input perturbations during multi-step reasoning. The study presents the first systematic definition and evaluation of semantic invariance for LLM agents, introducing a metamorphic testing framework that integrates eight types of meaning-preserving transformations—such as paraphrasing, fact reordering, and context shifting—to conduct cross-domain assessments across seven base models. Experimental results reveal no positive correlation between model scale and reasoning stability; notably, Qwen3-30B-A3B achieves the highest performance with a 79.6% invariant response rate and 0.91 semantic similarity, while some larger models exhibit greater fragility, highlighting a critical limitation in current LLMs' robustness for reliable reasoning.
📝 Abstract
Large Language Models (LLMs) increasingly serve as autonomous reasoning agents in decision support, scientific problem-solving, and multi-agent coordination systems. However, deploying LLM agents in consequential applications requires assurance that their reasoning remains stable under semantically equivalent input variations, a property we term semantic invariance. Standard benchmark evaluations, which assess accuracy on fixed, canonical problem formulations, fail to capture this critical reliability dimension. To address this shortcoming, we present a metamorphic testing framework for systematically assessing the robustness of LLM reasoning agents, applying eight semantic-preserving transformations (identity, paraphrase, fact reordering, expansion, contraction, academic context, business context, and contrastive formulation) across seven foundation models spanning four distinct architectural families: Hermes (70B, 405B), Qwen3 (30B-A3B, 235B-A22B), DeepSeek-R1, and gpt-oss (20B, 120B). Our evaluation encompasses 19 multi-step reasoning problems across eight scientific domains. The results reveal that model scale does not predict robustness: the smaller Qwen3-30B-A3B achieves the highest stability (79.6% invariant responses, semantic similarity 0.91), while larger models exhibit greater fragility.
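The metamorphic testing loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transformation functions, the stub `agent`, and the string-ratio `similarity` are all placeholder assumptions (the actual framework uses eight transformations, real LLM calls, and embedding-based semantic similarity).

```python
# Hypothetical sketch of metamorphic testing for semantic invariance.
# All names here are illustrative assumptions, not the paper's code.
from difflib import SequenceMatcher

# Toy stand-ins for a few of the eight meaning-preserving transformations
# (identity, paraphrase, expansion, ...).
TRANSFORMS = {
    "identity": lambda p: p,
    "paraphrase": lambda p: p.replace("Compute", "Work out"),
    "expansion": lambda p: p + " Show each step.",
}

def agent(prompt: str) -> str:
    """Stub reasoning agent; a real harness would query an LLM here."""
    return "The answer is 42."

def similarity(a: str, b: str) -> float:
    """Toy proxy for semantic similarity (the paper uses an
    embedding-based score, not character overlap)."""
    return SequenceMatcher(None, a, b).ratio()

def invariance_rate(problem: str, threshold: float = 0.9) -> float:
    """Fraction of transformed prompts whose answer stays within
    `threshold` similarity of the canonical (identity) answer."""
    baseline = agent(TRANSFORMS["identity"](problem))
    hits = sum(
        similarity(agent(t(problem)), baseline) >= threshold
        for name, t in TRANSFORMS.items()
        if name != "identity"
    )
    return hits / (len(TRANSFORMS) - 1)

rate = invariance_rate("Compute the escape velocity of Earth.")
```

With the deterministic stub agent every transformed prompt yields the same answer, so `rate` is 1.0; with a real model, the rate falling below 1.0 is exactly the fragility the study measures.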