Integrating Neural and Symbolic Components in a Model of Pragmatic Question-Answering

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing pragmatic cognitive models rely on manually curated sentence–meaning mappings, which severely limits their generalizability. Method: We propose a neuro-symbolic framework that systematically incorporates large language models (LLMs) into probabilistic pragmatic modeling for the first time, assigning them multiple roles, including generating alternative questions and constructing goal-to-utility mappings, in place of hand-crafted rules. The approach couples LLM-based proposal generation, probabilistic program modeling in WebPPL, formal semantic parsing, and utility-theoretic reasoning, so that neural suggestion and symbolic inference work in concert. Contribution/Results: Experiments show the hybrid model matches or surpasses purely probabilistic baselines in predicting human answer patterns. Crucially, the results delineate the LLM's boundary of applicability: it excels at semantic transformation and hypothesis generation but cannot replace the symbolic components responsible for truth-conditional semantic evaluation.

📝 Abstract
Computational models of pragmatic language use have traditionally relied on hand-specified sets of utterances and meanings, limiting their applicability to real-world language use. We propose a neuro-symbolic framework that enhances probabilistic cognitive models by integrating LLM-based modules to propose and evaluate key components in natural language, eliminating the need for manual specification. Through a classic case study of pragmatic question-answering, we systematically examine various approaches to incorporating neural modules into the cognitive model -- from evaluating utilities and literal semantics to generating alternative utterances and goals. We find that hybrid models can match or exceed the performance of traditional probabilistic models in predicting human answer patterns. However, the success of the neuro-symbolic model depends critically on how LLMs are integrated: while they are particularly effective for proposing alternatives and transforming abstract goals into utilities, they face challenges with truth-conditional semantic evaluation. This work charts a path toward more flexible and scalable models of pragmatic language use while illuminating crucial design considerations for balancing neural and symbolic components.
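The division of labor described in the abstract, where an LLM proposes alternative answers and maps abstract goals to utilities while a symbolic probabilistic model selects among them, can be illustrated with a minimal soft-max (rational-speech-act style) answer-selection sketch. This is not the paper's implementation (which uses WebPPL and real LLM calls); `propose_alternatives` and `utility` below are hypothetical stubs standing in for the neural modules, with fixed outputs so the example is self-contained.

```python
import math

def propose_alternatives(question):
    """Stand-in for the LLM's proposal role: suggest candidate answers.
    In the paper this would be an actual LLM call; here it is a fixed list."""
    return ["The shop opens at 9am.",
            "The shop opens at 9am and closes at 6pm.",
            "The shop is on Main Street."]

def utility(answer, goal):
    """Stand-in for the LLM's goal-to-utility role: score how useful an
    answer is for a questioner with a given goal. Values are illustrative."""
    scores = {
        ("The shop opens at 9am.", "visit in the morning"): 1.0,
        ("The shop opens at 9am and closes at 6pm.", "visit in the morning"): 1.2,
        ("The shop is on Main Street.", "visit in the morning"): 0.2,
    }
    return scores.get((answer, goal), 0.0)

def answer_distribution(question, goal, alpha=3.0):
    """Symbolic side: a soft-max rational respondent that weights each
    proposed answer by exp(alpha * utility), then normalizes."""
    answers = propose_alternatives(question)
    weights = [math.exp(alpha * utility(a, goal)) for a in answers]
    total = sum(weights)
    return {a: w / total for a, w in zip(answers, weights)}

dist = answer_distribution("When does the shop open?", "visit in the morning")
best = max(dist, key=dist.get)
```

Under these toy utilities, the over-informative answer (opening and closing times) gets the highest probability, mirroring the kind of goal-sensitive answer patterns the hybrid model is evaluated against.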
Problem

Research questions and friction points this paper is trying to address.

Hand-specified sets of utterances and meanings limit pragmatic cognitive models to narrow, pre-built domains
Eliminating manual specification in cognitive models by delegating component proposal and evaluation to LLM-based modules
Determining how to divide labor between neural and symbolic components in pragmatic question-answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLM-based modules that propose and evaluate model components directly in natural language
Systematically compares integration points, from evaluating utilities and literal semantics to generating alternative utterances and goals
Charts a path toward more flexible and scalable models of pragmatic language use