AI Summary
To address the challenge of efficiently querying and interpreting heterogeneous multimodal data (databases, text, images) through a unified natural-language interface, this paper proposes the first interpretable multimodal natural-language exploration framework. Built on an LLM-based agent architecture, it integrates task decomposition, cross-modal orchestration, and inference provenance tracking to jointly invoke text-to-SQL generation, CLIP/ViT-based visual understanding, and a hybrid execution engine, achieving high-confidence answers at low latency and cost. Evaluated on a benchmark combining relational data and images, the framework outperforms state-of-the-art systems with 12.3% higher query accuracy, 41% lower latency, and 38% lower API invocation cost, while generating high-quality, structured explanations. The core contribution is an interpretability-driven paradigm for coordinated multimodal reasoning, enabling transparent, traceable, and efficient cross-modal inference.
Abstract
International enterprises, organizations, and hospitals collect large amounts of multi-modal data stored in databases, text documents, images, and videos. While there has been recent progress in the separate fields of multi-modal data exploration and in database systems that automatically translate natural language questions into database query languages, the research challenge of querying database systems combined with other unstructured modalities such as images in natural language remains largely unexplored. In this paper, we propose XMODE, a system that enables explainable, multi-modal data exploration in natural language. Our approach is based on the following research contributions: (1) Our system is inspired by a real-world use case that enables users to explore multi-modal information systems. (2) XMODE leverages an LLM-based agentic AI framework to decompose a natural language question into subtasks such as text-to-SQL generation and image analysis. (3) Experimental results on multi-modal datasets over relational data and images demonstrate that our system outperforms state-of-the-art multi-modal exploration systems, excelling not only in accuracy but also in performance metrics such as query latency, API costs, planning efficiency, and explanation quality, thanks to more effective utilization of the reasoning capabilities of LLMs.
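The decomposition step described in contribution (2) can be illustrated with a minimal sketch. This is not XMODE's actual planner: the keyword-based router below is a toy stand-in for the LLM that produces the plan, and the tool names `text_to_sql` and `image_analysis` plus the `SubTask` structure are hypothetical illustrations of how a natural-language question might be split into dependent subtasks.

```python
from dataclasses import dataclass, field

@dataclass
class SubTask:
    tool: str                  # e.g. "text_to_sql" or "image_analysis"
    request: str               # natural-language instruction passed to the tool
    depends_on: list = field(default_factory=list)  # tools whose output this step consumes

def decompose(question: str) -> list[SubTask]:
    """Toy stand-in for an LLM planner: route the question to a
    text-to-SQL tool and/or an image-analysis tool by keyword.
    A real agentic system would prompt an LLM to emit this plan."""
    plan: list[SubTask] = []
    q = question.lower()
    # Aggregation-style phrasing suggests a relational (SQL) subtask.
    if any(w in q for w in ("how many", "average", "count", "per")):
        plan.append(SubTask("text_to_sql", question))
    # Mentions of visual artifacts suggest an image-analysis subtask,
    # which may depend on rows selected by the SQL step.
    if any(w in q for w in ("image", "x-ray", "photo", "scan")):
        plan.append(SubTask("image_analysis", question,
                            depends_on=[t.tool for t in plan]))
    return plan

plan = decompose("How many patients per ward have an X-ray image showing pneumonia?")
for step in plan:
    print(step.tool, step.depends_on)
```

Running the sketch on the example question yields a two-step plan in which the image-analysis subtask depends on the text-to-SQL subtask, mirroring the cross-modal orchestration the paper describes.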