Retrieval-Augmented Generation for Service Discovery: Chunking Strategies and Benchmarking

📅 2025-05-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from input-length limitations and struggle to parse lengthy OpenAPI specifications, leading to low accuracy in service discovery for dynamic service integration. Method: We propose an endpoint-level intelligent chunking strategy and a demand-driven Discovery Agent architecture. Our approach semantically partitions OpenAPI specifications at the endpoint granularity and integrates retrieval-augmented generation (RAG) to enable lightweight, precise specification retrieval. Contribution/Results: To enable systematic evaluation, we introduce SOCBench-D—the first cross-domain service discovery benchmark. Experiments demonstrate that endpoint-level chunking significantly outperforms conventional text-based chunking methods. The Discovery Agent achieves substantial improvements in endpoint retrieval accuracy while maintaining high token efficiency, empirically validating RAG’s effectiveness and scalability for API-driven service discovery.
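The endpoint-level chunking idea described above can be sketched in a few lines: split an OpenAPI document's `paths` object into one chunk per endpoint/method pair, keeping the shared service metadata with each chunk. This is a minimal illustrative sketch, not the paper's implementation; the function and field names are assumptions.

```python
# Hypothetical sketch of endpoint-level chunking: partition an OpenAPI
# spec at endpoint granularity (one chunk per path + HTTP method),
# carrying along top-level metadata such as the service title.
def chunk_openapi_by_endpoint(spec: dict) -> list[dict]:
    chunks = []
    info = spec.get("info", {})
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            chunks.append({
                "service": info.get("title", ""),
                "endpoint": f"{method.upper()} {path}",
                "summary": operation.get("summary", ""),
                "operation": operation,  # full details, retrievable on demand
            })
    return chunks

# Tiny example spec (illustrative only).
spec = {
    "info": {"title": "Pet Store"},
    "paths": {
        "/pets": {
            "get": {"summary": "List all pets"},
            "post": {"summary": "Create a pet"},
        },
        "/pets/{id}": {"get": {"summary": "Get a pet by id"}},
    },
}

for c in chunk_openapi_by_endpoint(spec):
    print(c["endpoint"], "-", c["summary"])
```

Each chunk is small enough to embed and retrieve individually, which is what makes lightweight, precise specification retrieval possible.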

📝 Abstract
Integrating multiple (sub-)systems is essential to create advanced information systems. Difficulties mainly arise when integrating dynamic environments, e.g., the design-time integration of services that do not yet exist. This has traditionally been addressed using a registry that provides the API documentation of the endpoints. Large Language Models have been shown to be capable of automatically creating system integrations (e.g., as service compositions) based on this documentation, but they require concise input due to input token limitations, especially regarding comprehensive API descriptions. Currently, it is unknown how best to preprocess these API descriptions. In the present work, we (i) analyze the use of Retrieval-Augmented Generation (RAG) for endpoint discovery and the chunking, i.e., preprocessing, of state-of-practice OpenAPI specifications to reduce the input token length while preserving the most relevant information. To further reduce the input token length of the composition prompt and improve endpoint retrieval, we propose (ii) a Discovery Agent that receives only a summary of the most relevant endpoints and retrieves specification details on demand. We evaluate RAG for endpoint discovery using (iii) SOCBench-D, a proposed novel service discovery benchmark representing a general setting across numerous domains, and the real-world RestBench benchmark: first, for the different chunking possibilities and parameters, measuring endpoint retrieval accuracy; then, we assess the Discovery Agent on the same test data. The prototype shows how to successfully employ RAG for endpoint discovery to reduce the token count. Our experiments show that endpoint-based approaches outperform naive chunking methods for preprocessing. Relying on an agent significantly improves precision but tends to decrease recall, disclosing the need for further reasoning capabilities.
Problem

Research questions and friction points this paper is trying to address.

Optimizing API description preprocessing for LLM input constraints
Enhancing service discovery accuracy via Retrieval-Augmented Generation
Reducing token length while preserving critical endpoint information
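The token-reduction goal behind these points can be made concrete with a toy comparison: the cost of handing an LLM the full specification versus only the top retrieved endpoint chunk. Whitespace splitting stands in for a real LLM tokenizer here; the numbers and chunk texts are purely illustrative.

```python
# Toy illustration of token reduction via retrieval: feeding the model
# only the relevant endpoint chunk instead of the whole specification.
chunks = [
    "GET /pets List all pets with optional limit and offset parameters",
    "POST /pets Create a pet given a name and a species",
    "GET /stores/{id}/inventory Return the current inventory of a store",
]

def token_count(text: str) -> int:
    # Crude whitespace tokenization; a real setup would use the
    # target model's tokenizer.
    return len(text.split())

full_spec_tokens = token_count(" ".join(chunks))
retrieved_tokens = token_count(chunks[0])  # only the one relevant chunk

print(full_spec_tokens, retrieved_tokens)
```

The retrieved prompt is a fraction of the full-specification prompt, and the gap grows with the number of endpoints in the service.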
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval Augmented Generation for endpoint discovery
Discovery Agent for relevant endpoint summaries
Novel benchmark SOCBench-D for evaluation
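The demand-driven Discovery Agent idea listed above can be sketched as a two-stage lookup: rank lightweight endpoint summaries against the query, then fetch full specification details only for the top candidates. This is a hedged sketch under simplifying assumptions; the keyword-overlap score is a stand-in for the paper's embedding-based RAG retriever, and all names are illustrative.

```python
# Hypothetical sketch of demand-driven discovery: the agent first sees
# only endpoint summaries and retrieves full specs on demand.
def score(query: str, summary: str) -> int:
    # Toy relevance score: number of shared lowercase words.
    return len(set(query.lower().split()) & set(summary.lower().split()))

def discovery_agent(query, summaries, full_specs, top_k=2):
    # Rank the lightweight summaries, then fetch details for the top-k.
    ranked = sorted(summaries, key=lambda s: score(query, s["summary"]),
                    reverse=True)
    return [full_specs[s["endpoint"]] for s in ranked[:top_k]]

summaries = [
    {"endpoint": "GET /pets", "summary": "List all pets"},
    {"endpoint": "POST /orders", "summary": "Create a new order"},
]
full_specs = {
    "GET /pets": {"endpoint": "GET /pets", "parameters": ["limit"]},
    "POST /orders": {"endpoint": "POST /orders", "parameters": ["petId"]},
}

print(discovery_agent("list pets", summaries, full_specs, top_k=1))
```

Because only summaries enter the ranking prompt, the agent keeps token usage low; the trade-off noted in the abstract is that an overly selective agent can miss relevant endpoints, lowering recall.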