AI Summary
Existing spatial services rely on single-point queries, making them inadequate for complex tasks that require retrieving multiple related locations at once. To address this, we propose the Spatial Exemplar Query (SEQ), a novel paradigm in which users specify multiple representative location examples to drive a joint spatial search. We design SEQ-GPT, an LLM-based system integrating dialogue synthesis, multi-model cooperation, and a customized adaptation pipeline, to achieve precise alignment from natural language to structured spatial queries. The system further supports interactive clarification and feedback-driven dynamic optimization. End-to-end experiments on real-world datasets and application scenarios demonstrate that SEQ-GPT significantly improves accuracy, interpretability, and user experience for multi-exemplar joint retrieval, while generalizing well across diverse spatial tasks.
Abstract
Contemporary spatial services such as online maps predominantly rely on user queries for location searches. However, the user experience is limited when performing complex tasks, such as searching for a group of locations simultaneously. In this study, we examine the extended scenario known as Spatial Exemplar Query (SEQ), where multiple relevant locations are jointly searched based on user-specified examples. We introduce SEQ-GPT, a spatial query system powered by Large Language Models (LLMs) that enables more versatile SEQ search through natural language. The language capabilities of LLMs enable unique interactive operations in the SEQ process, including asking users to clarify query details and dynamically adjusting the search based on user feedback. We also propose a tailored LLM adaptation pipeline that aligns natural language with structured spatial data and queries through dialogue synthesis and multi-model cooperation. SEQ-GPT offers an end-to-end demonstration for broadening spatial search with realistic data and application scenarios.
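To make the SEQ setting concrete, the sketch below illustrates one plausible interpretation of a multi-exemplar joint search: each exemplar contributes a category, and candidate groups (one POI per category) are scored by a simple spatial cohesion measure. This is a minimal illustration under our own assumptions, not the retrieval algorithm used by SEQ-GPT; the POI data, category-matching rule, and max-pairwise-distance score are all hypothetical choices.

```python
from itertools import product
from math import hypot

# Hypothetical toy POI dataset: (name, category, x, y).
POIS = [
    ("Cafe A", "cafe", 0.0, 0.0),
    ("Cafe B", "cafe", 5.0, 5.0),
    ("Gym A", "gym", 0.5, 0.5),
    ("Gym B", "gym", 9.0, 1.0),
    ("Park A", "park", 0.2, 1.0),
]

def seq_search(exemplar_categories):
    """Jointly retrieve one POI per exemplar category, picking the
    group with the smallest maximum pairwise distance (cohesion)."""
    pools = [[p for p in POIS if p[1] == c] for c in exemplar_categories]
    best_group, best_score = None, float("inf")
    # Enumerate every combination of one candidate per category.
    for group in product(*pools):
        score = max(
            (hypot(a[2] - b[2], a[3] - b[3])
             for i, a in enumerate(group) for b in group[i + 1:]),
            default=0.0,
        )
        if score < best_score:
            best_group, best_score = group, score
    return best_group

# Example: exemplars implying a nearby cafe, gym, and park.
group = seq_search(["cafe", "gym", "park"])
print([p[0] for p in group])  # the most spatially cohesive group
```

In the described system an LLM would first translate the user's natural-language request (and any clarifying dialogue) into a structured query like the `exemplar_categories` list here; the brute-force enumeration merely stands in for whatever joint retrieval the backend performs.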