Explainable Multi-Modal Data Exploration in Natural Language via LLM Agent

📅 2024-12-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of querying and interpreting heterogeneous multi-modal data (databases, text, images) through a unified natural-language interface, this paper proposes the first interpretable multi-modal natural-language exploration framework. Built on an LLM-based agent architecture, it combines task decomposition, cross-modal orchestration, and inference provenance tracking to jointly invoke text-to-SQL generation, CLIP/ViT-based visual understanding, and a hybrid execution engine, achieving high-confidence answers at low latency and cost. Evaluated on a multi-modal benchmark over relational data and images, the framework outperforms state-of-the-art systems by +12.3% in query accuracy, -41% in latency, and -38% in API invocation cost, while generating high-quality, structured explanations. The core contribution is an interpretability-driven paradigm for coordinated multi-modal reasoning, enabling transparent, traceable, and efficient cross-modal inference.

๐Ÿ“ Abstract
International enterprises, organizations, or hospitals collect large amounts of multi-modal data stored in databases, text documents, images, and videos. While there has been recent progress in the separate fields of multi-modal data exploration as well as in database systems that automatically translate natural language questions to database query languages, the research challenge of querying database systems combined with other unstructured modalities such as images in natural language is largely unexplored. In this paper, we propose XMODE - a system that enables explainable, multi-modal data exploration in natural language. Our approach is based on the following research contributions: (1) Our system is inspired by a real-world use case that enables users to explore multi-modal information systems. (2) XMODE leverages an LLM-based agentic AI framework to decompose a natural language question into subtasks such as text-to-SQL generation and image analysis. (3) Experimental results on multi-modal datasets over relational data and images demonstrate that our system outperforms state-of-the-art multi-modal exploration systems, excelling not only in accuracy but also in various performance metrics such as query latency, API costs, planning efficiency, and explanation quality, thanks to the more effective utilization of the reasoning capabilities of LLMs.
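The paper's source code is not shown here, but the decomposition idea in contribution (2) can be illustrated with a minimal sketch: an LLM planner splits a natural-language question into typed subtasks (text-to-SQL vs. image analysis) whose plan doubles as an explanation trace. All names, the keyword-based routing, and the example question are hypothetical stand-ins for the actual LLM-driven planner:

```python
# Hypothetical sketch of XMODE-style task decomposition. A real system would
# ask an LLM to produce the plan; here simple keyword rules stand in for it.
from dataclasses import dataclass


@dataclass
class SubTask:
    kind: str      # "text_to_sql" or "image_analysis"
    payload: str   # generated SQL string or prompt for the vision model


def plan(question: str) -> list[SubTask]:
    """Stand-in for the LLM planner: route the question to subtasks."""
    q = question.lower()
    tasks = []
    if any(w in q for w in ("count", "average", "how many")):
        # Structured part of the question -> text-to-SQL subtask.
        tasks.append(SubTask("text_to_sql", "SELECT COUNT(*) FROM studies"))
    if any(w in q for w in ("x-ray", "image", "scan")):
        # Visual part of the question -> image-analysis subtask.
        tasks.append(SubTask("image_analysis", "Does the scan show an anomaly?"))
    return tasks


def execute(tasks: list[SubTask]) -> list[dict]:
    # Each subtask would call its backend (SQL engine, vision model);
    # recording the plan alongside results yields the explanation trace.
    return [{"task": t.kind, "input": t.payload} for t in tasks]


kinds = [t.kind for t in plan("How many chest x-ray studies show an anomaly?")]
print(kinds)  # → ['text_to_sql', 'image_analysis']
```

The point of the sketch is that the plan itself is inspectable: because every subtask is an explicit, typed object, the system can report which modality answered which part of the question, which is what makes the exploration explainable.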
Problem

Research questions and friction points this paper is trying to address.

Simple Language Query
Multimodal Information Retrieval
Data Exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

XMODE
Multimodal Data Processing
Smart Framework
F. Nooralahzadeh
Zurich University of Applied Sciences, Switzerland
Yi Zhang
Zurich University of Applied Sciences, Switzerland
Jonathan Furst
Zurich University of Applied Sciences, Switzerland
Kurt Stockinger
Professor of Computer Science, Zurich University of Applied Sciences
Data Science · Big Data · Database Systems · Natural Language Interfaces · Quantum Machine Learning