The Algebra of Meaning: Why Machines Need Montague More Than Moore's Law

📅 2025-10-07
🤖 AI Summary
Large language models (LLMs) suffer from hallucination, regulatory fragility, and opaque compliance, problems rooted not in data scarcity or insufficient scale but in the absence of type-theoretic semantic foundations. Method: we formalize natural language as a typed, compositional algebraic system that integrates Montague semantics with legal ontologies, diagnose hallucination as a type error, and establish a semantic-parsing bridge from logical forms to legal ontologies. We propose a novel "parse once, map everywhere" paradigm for cross-jurisdictional compliance reasoning, implemented in a neurosymbolic architecture: neural modules extract syntactic-semantic structures, while symbolic modules perform type checking, deontic reasoning, and context-aware cross-jurisdictional mapping. Contribution/Results: experiments demonstrate fine-grained, interpretable compliance risk assessment across multiple jurisdictions (e.g., for product liability statements), providing a formal semantic foundation for trustworthy autonomous systems.
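The "parse once, map everywhere" paradigm from the summary can be pictured in a few lines: a single logical form is extracted once, then projected into several jurisdiction ontologies whose rules differ. This is a minimal sketch, not the paper's implementation; the rule tables, predicate names, and the trivial `parse` stub are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalForm:
    predicate: str   # e.g. "defect_claim"
    args: tuple      # e.g. ("product_x", "company_y")

# Hypothetical per-jurisdiction ontologies: predicate -> compliance outcome.
JURISDICTION_RULES = {
    "KR": {"defect_claim": "defamation_risk"},
    "US": {"defect_claim": "protected_opinion"},
    "EU": {"defect_claim": "gdpr_check_required"},
}

def parse(utterance: str) -> LogicalForm:
    # Stand-in for the neural stage: a real system would extract this
    # structure from free text; here we return a fixed example form.
    return LogicalForm("defect_claim", ("product_x", "company_y"))

def project(lf: LogicalForm, jurisdiction: str) -> str:
    # Symbolic stage: map the one parsed form onto a jurisdiction's ontology.
    return JURISDICTION_RULES[jurisdiction].get(lf.predicate, "no_rule")

lf = parse("Product X by Company Y is defective.")
outcomes = {j: project(lf, j) for j in JURISDICTION_RULES}
```

Note that the utterance is parsed exactly once; only the cheap symbolic projection is repeated per jurisdiction.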

📝 Abstract
Contemporary language models are fluent yet routinely mishandle the types of meaning their outputs entail. We argue that hallucination, brittle moderation, and opaque compliance outcomes are symptoms of missing type-theoretic semantics rather than data or scale limitations. Building on Montague's view of language as a typed, compositional algebra, we recast alignment as a parsing problem: natural-language inputs must be compiled into structures that make explicit their descriptive, normative, and legal dimensions under context. We present Savassan, a neuro-symbolic architecture that compiles utterances into Montague-style logical forms and maps them to typed ontologies extended with deontic operators and jurisdictional contexts. Neural components extract candidate structures from unstructured inputs; symbolic components perform type checking, constraint reasoning, and cross-jurisdiction mapping to produce compliance-aware guidance rather than binary censorship. In cross-border scenarios, the system "parses once" (e.g., defect_claim(product_x, company_y)) and projects the result into multiple legal ontologies (e.g., defamation risk in KR/JP, protected opinion in US, GDPR checks in EU), composing outcomes into a single, explainable decision. This paper contributes: (i) a diagnosis of hallucination as a type error; (ii) a formal Montague-ontology bridge for business/legal reasoning; and (iii) a production-oriented design that embeds typed interfaces across the pipeline. We outline an evaluation plan using legal reasoning benchmarks and synthetic multi-jurisdiction suites. Our position is that trustworthy autonomy requires compositional typing of meaning, enabling systems to reason about what is described, what is prescribed, and what incurs liability within a unified algebra of meaning.
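The abstract's diagnosis of hallucination as a type error can be illustrated with Montague's simple type system: e (entities), t (truth values), and function types ⟨a, b⟩. A well-typed application composes; an ill-typed one is rejected at the symbolic stage instead of being silently emitted. This is a toy checker under those standard definitions, not the paper's system.

```python
# Montague's atomic types: entities and truth values.
E, T = "e", "t"

def fn(dom, cod):
    """Build a function type <dom, cod> as a pair."""
    return (dom, cod)

def apply_type(func_ty, arg_ty):
    """Result type of applying func_ty to arg_ty, or None on a type error."""
    if isinstance(func_ty, tuple) and func_ty[0] == arg_ty:
        return func_ty[1]
    return None  # ill-formed composition: caught, not hallucinated

# "sleeps" : <e, t> applied to an entity yields a truth value...
assert apply_type(fn(E, T), E) == T
# ...but applied to a proposition, the composition is a type error.
assert apply_type(fn(E, T), T) is None
```

The point is architectural: a generator constrained by such a checker cannot emit a structure whose pieces do not compose.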
Problem

Research questions and friction points this paper is trying to address.

Addressing language model hallucinations through type-theoretic semantics
Compiling natural language into logical forms with legal dimensions
Enabling cross-jurisdiction compliance reasoning via typed ontologies
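The normative dimension mentioned above is handled with deontic operators: O (obligatory), P (permitted), F (forbidden). A minimal sketch of the kind of deontic reasoning the symbolic modules perform is detecting when two norms over the same proposition clash; the operator encoding and `conflicts` rule here are illustrative assumptions, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    operator: str     # "O" (obligatory), "P" (permitted), or "F" (forbidden)
    proposition: str  # e.g. "publish_report"

def conflicts(a: Norm, b: Norm) -> bool:
    """Two norms clash when one forbids what the other obliges or permits."""
    if a.proposition != b.proposition:
        return False
    ops = {a.operator, b.operator}
    return ops in ({"O", "F"}, {"P", "F"})

# An obligation and a prohibition on the same act conflict:
assert conflicts(Norm("O", "publish_report"), Norm("F", "publish_report"))
# Two permissions do not:
assert not conflicts(Norm("P", "publish_report"), Norm("P", "publish_report"))
```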
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic architecture compiling utterances into logical forms
Type checking and constraint reasoning for compliance guidance
Parsing once and projecting across multiple legal ontologies
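The last step of the pipeline sketched in these bullets, composing per-jurisdiction outcomes into a single explainable decision, could look like taking the strictest individual outcome and recording which jurisdiction drove it. The severity ordering and labels below are illustrative assumptions.

```python
# Hypothetical severity ordering over compliance outcomes (higher = stricter).
SEVERITY = {
    "protected_opinion": 0,
    "gdpr_check_required": 1,
    "defamation_risk": 2,
}

def compose(outcomes: dict) -> dict:
    """Fold jurisdiction -> outcome into one decision with an explanation."""
    worst = max(outcomes, key=lambda j: SEVERITY[outcomes[j]])
    return {
        "decision": outcomes[worst],      # strictest outcome wins
        "driven_by": worst,               # which jurisdiction forced it
        "per_jurisdiction": outcomes,     # full trace kept for explainability
    }

result = compose({"US": "protected_opinion",
                  "KR": "defamation_risk",
                  "EU": "gdpr_check_required"})
```

Keeping the per-jurisdiction trace alongside the final label is what turns the output into compliance-aware guidance rather than a binary verdict.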