🤖 AI Summary
This work addresses the challenges of translating natural-language queries into Jira Query Language (JQL): field ambiguity, missing categorical values, and difficulty generating complex Boolean logic, compounded by the absence of an execution-validated public benchmark. To this end, the authors introduce Jackal, the first large-scale execution-level text-to-JQL benchmark, and propose Agentic Jackal, an agent framework that pairs large language models with real-time JQL execution and a semantic retrieval tool, JiraAnchor, to dynamically resolve categorical values and iteratively verify generated queries. Experiments show that single-pass baseline models achieve only 43.4% average execution accuracy on Jackal; with Agentic Jackal, seven of nine models improve, with a 9.0% relative gain on the most linguistically challenging variant. Notably, an ablation isolating JiraAnchor boosts component-field accuracy from 16.9% to 66.2%.
📝 Abstract
Translating natural language into Jira Query Language (JQL) requires resolving ambiguous field references, instance-specific categorical values, and complex Boolean predicates. Single-pass LLMs cannot discover which categorical values (e.g., component names or fix versions) actually exist in a given Jira instance, nor can they verify generated queries against a live data source, limiting accuracy on paraphrased or ambiguous requests. No open, execution-based benchmark exists for mapping natural language to JQL. We introduce Jackal, the first large-scale, execution-based text-to-JQL benchmark comprising 100,000 validated NL-JQL pairs on a live Jira instance with over 200,000 issues. To establish baselines on Jackal, we propose Agentic Jackal, a tool-augmented agent that equips LLMs with live query execution via the Jira MCP server and JiraAnchor, a semantic retrieval tool that resolves natural-language mentions of categorical values through embedding-based similarity search. Among the 9 frontier LLMs evaluated, single-pass models average only 43.4% execution accuracy on short natural-language queries, highlighting that text-to-JQL remains an open challenge. The agentic approach improves 7 of the 9 models, with a 9.0% relative gain on the most linguistically challenging variant; in a controlled ablation isolating JiraAnchor, categorical-value accuracy rises from 48.7% to 71.7%, with component-field accuracy jumping from 16.9% to 66.2%. Our analysis identifies inherent semantic ambiguities, such as issue-type disambiguation and text-field selection, as the dominant failure modes rather than value-resolution errors, pointing to concrete directions for future work. We publicly release the benchmark, all agent transcripts, and evaluation code to support reproducibility.
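To make the value-resolution idea concrete, here is a minimal, self-contained sketch of how an embedding-based retrieval step like JiraAnchor could map a free-text mention to an existing categorical value before it is interpolated into JQL. This is an illustration, not the paper's implementation: the character-trigram "embedding", the `resolve_value` helper, and the component list are all stand-ins (a real system would use a learned sentence-embedding model and the actual values fetched from the Jira instance).

```python
import math
from collections import Counter

def embed(text: str, n: int = 3) -> Counter:
    """Toy character-trigram vector; a stand-in for a learned text embedding."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b[k] for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def resolve_value(mention: str, candidates: list[str]) -> str:
    """Return the existing categorical value most similar to the NL mention."""
    m = embed(mention)
    return max(candidates, key=lambda c: cosine(m, embed(c)))

# Hypothetical component values that exist in the Jira instance.
components = ["Authentication Service", "Billing Engine", "Mobile App (iOS)"]

resolved = resolve_value("the auth service", components)
jql = f'project = PROJ AND component = "{resolved}"'
print(jql)  # component resolved to "Authentication Service"
```

The point of the retrieval step is that the generated JQL only ever contains values that actually exist in the instance, so failures shift from hallucinated values to genuine semantic ambiguity, which matches the failure-mode analysis reported above.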