Semantic Invariance in Agentic AI

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of semantic invariance in large language models (LLMs), which often produce inconsistent outputs under semantically equivalent input perturbations during multi-step reasoning. The study presents the first systematic definition and evaluation of semantic invariance for LLM agents, introducing a metamorphic testing framework that applies eight types of meaning-preserving transformations—such as paraphrasing, fact reordering, and context shifting—in cross-domain assessments of seven base models. Experimental results reveal no positive correlation between model scale and reasoning stability: notably, Qwen3-30B-A3B achieves the highest performance with a 79.6% invariant response rate and 0.91 semantic similarity, while some larger models exhibit greater fragility, highlighting a critical limitation in the robustness of current LLMs for reliable reasoning.

📝 Abstract
Large Language Models (LLMs) increasingly serve as autonomous reasoning agents in decision support, scientific problem-solving, and multi-agent coordination systems. However, deploying LLM agents in consequential applications requires assurance that their reasoning remains stable under semantically equivalent input variations, a property we term semantic invariance. Standard benchmark evaluations, which assess accuracy on fixed, canonical problem formulations, fail to capture this critical reliability dimension. To address this shortcoming, we present a metamorphic testing framework for systematically assessing the robustness of LLM reasoning agents, applying eight semantic-preserving transformations (identity, paraphrase, fact reordering, expansion, contraction, academic context, business context, and contrastive formulation) across seven foundation models spanning four distinct architectural families: Hermes (70B, 405B), Qwen3 (30B-A3B, 235B-A22B), DeepSeek-R1, and gpt-oss (20B, 120B). Our evaluation encompasses 19 multi-step reasoning problems across eight scientific domains. The results reveal that model scale does not predict robustness: the smaller Qwen3-30B-A3B achieves the highest stability (79.6% invariant responses, semantic similarity 0.91), while larger models exhibit greater fragility.
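The core idea in the abstract—apply semantic-preserving transformations to a prompt, re-query the model, and measure how often the answer stays the same—can be sketched as a small metamorphic test harness. Everything below is an illustrative assumption, not the authors' implementation: the two example transformations are trivial stand-ins for the paper's eight, and token-set Jaccard overlap substitutes for the embedding-based semantic similarity the paper reports.

```python
# Minimal metamorphic-testing sketch for semantic invariance of an LLM agent.
# Assumptions (not from the paper): the transformation set, the Jaccard
# similarity proxy, and the toy model stub are all illustrative.

def paraphrase(q: str) -> str:
    # Stand-in semantic-preserving transformation: trivial rewording.
    return "Please answer the following: " + q

def reorder_facts(q: str) -> str:
    # Stand-in for fact reordering: reverse sentence order.
    parts = [p.strip() for p in q.split(".") if p.strip()]
    return ". ".join(reversed(parts)) + "."

TRANSFORMATIONS = {
    "identity": lambda q: q,
    "paraphrase": paraphrase,
    "reorder": reorder_facts,
}

def jaccard(a: str, b: str) -> float:
    # Cheap proxy for semantic similarity; the paper uses a
    # proper semantic-similarity score (0.91 for Qwen3-30B-A3B).
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def invariance_rate(model, question: str, threshold: float = 0.8):
    """Fraction of transformed prompts whose answer stays similar
    to the answer for the canonical formulation."""
    canonical = model(question)
    scores = {name: jaccard(canonical, model(t(question)))
              for name, t in TRANSFORMATIONS.items()}
    invariant = sum(s >= threshold for s in scores.values())
    return invariant / len(scores), scores

# Toy "model": answer is independent of phrasing, so it is fully invariant.
stable_model = lambda q: "The answer is 42."
rate, scores = invariance_rate(
    stable_model, "Six workers build a wall in three days. How long for two workers."
)
```

A real harness would replace `stable_model` with an API call to each of the seven evaluated models and `jaccard` with an embedding-based similarity, then aggregate `rate` over the 19 multi-step problems to obtain figures comparable to the paper's 79.6% invariant-response rate.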
Problem

Research questions and friction points this paper is trying to address.

semantic invariance
LLM robustness
metamorphic testing
reasoning stability
input variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantic invariance
metamorphic testing
LLM robustness
reasoning agents
semantic-preserving transformations
I. de Zarzà
Human Centered AI, Data & Software, Luxembourg Institute of Science and Technology, L-4362 Esch-sur-Alzette, Luxembourg
J. de Curtò
Department of Computer Applications in Science & Engineering, Barcelona Supercomputing Center, 08034 Barcelona, Spain
Jordi Cabot
Head of the Software Engineering RDI Unit at Luxembourg Institute of Science and Technology (LIST)
software engineering, modeling, open source, low-code, AI
Pietro Manzoni
Universidad Politécnica de Valencia
Mobile Networks and Systems, Internet of Things, Publish/Subscribe systems
Carlos T. Calafate
Full Professor (UPV)
wireless networks, vehicular networks, UAVs, IoT, Smart Cities