DAInfer+: Neurosymbolic Inference of API Specifications from Documentation via Embedding Models

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of missing semantic specifications for APIs in scenarios where library source code is unavailable, proposing an approach that integrates natural language processing with neurosymbolic optimization. It leverages sentence embedding models to parse informal semantic descriptions in library documentation and translates them into constrained optimization problems, enabling automatic inference of memory operation abstractions and the generation of precise data-flow and aliasing specifications. To the best of the authors' knowledge, this is the first effort to combine sentence embeddings with neurosymbolic reasoning for API specification inference. Evaluated in a zero-shot setting, the method substantially outperforms few-shot prompting with large language models. Experiments on mainstream Java libraries show 82% recall and 85% precision for data-flow inference, and 88% recall with 79% precision for alias relations, with each inference completing in seconds.
📝 Abstract
Modern software systems rely heavily on libraries, so static analysis must understand their API semantics. However, summarizing API semantics remains challenging due to complex implementations or unavailable library code. This paper presents DAInfer+, a novel approach for inferring API specifications from library documentation. We employ Natural Language Processing (NLP) to interpret the informal semantic information provided by documentation, which lets us reduce specification inference to an optimization problem. Specifically, we investigate the effectiveness of sentence embedding models and Large Language Models (LLMs) in deriving memory operation abstractions from API descriptions. These abstractions are used to retrieve data-flow and aliasing relations and generate comprehensive API specifications. To solve the optimization problem efficiently, we propose neurosymbolic optimization, yielding precise data-flow and aliasing specifications. Our evaluation on popular Java libraries shows that zero-shot sentence embedding models outperform few-shot prompted LLMs in robustness, capturing fine-grained semantic nuances more effectively. While our initial two-stage LLM prompting attempts yielded promising results, the embedding-based approach proved superior. Specifically, these models achieve over 82% recall and 85% precision for data-flow inference and 88% recall and 79% precision for alias relations, all within seconds. These results demonstrate the practical value of DAInfer+ in library-aware static analysis.
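The core idea described above, mapping an API's natural-language description to a memory operation abstraction via sentence embeddings in a zero-shot setting, can be sketched as a nearest-label classification by cosine similarity. This is an illustrative sketch only, not the paper's implementation: the `embed` function below is a hashed bag-of-words stand-in for a real sentence embedding model, and the label set (`store`, `load`, `alias`) and its prompts are hypothetical examples of memory operation abstractions.

```python
import hashlib
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: tokens hashed into a fixed-size count vector.
    # A real system would call a sentence embedding model here instead.
    dim = 64
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical memory operation abstractions, each described by a
# label prompt that gets embedded alongside the API description.
LABELS = {
    "store": "writes the given value into the receiver object",
    "load": "reads and returns a value stored in the receiver object",
    "alias": "returns a reference to the same underlying object",
}

def classify(api_description: str) -> str:
    # Zero-shot: pick the abstraction whose prompt embedding is
    # most similar to the API description embedding.
    d = embed(api_description)
    return max(LABELS, key=lambda lbl: cosine(d, embed(LABELS[lbl])))
```

In the full approach, such per-method abstractions would then feed the symbolic side, which resolves them into data-flow and aliasing specifications by solving the resulting optimization problem; the classification step here only illustrates the neural half.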
Problem

Research questions and friction points this paper is trying to address.

API specification inference
library documentation
data-flow analysis
aliasing relations
static analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

neurosymbolic optimization
sentence embedding models
API specification inference
static analysis
zero-shot learning