🤖 AI Summary
This work addresses the automatic completion task in large language model (LLM)-based chatbot interactions. We formally define “chat interaction auto-completion” as a novel task and introduce ChaI-TeA—the first open-source, reproducible evaluation framework specifically designed for it. ChaI-TeA comprises a rigorous task formulation, a diverse benchmark dataset reflecting real-user linguistic variability, and a multi-dimensional evaluation protocol assessing accuracy, ranking quality, intent consistency, and natural language diversity. Our method integrates interactive context modeling, LLM-based sequence generation, and a learned re-ranking module, evaluated via a hybrid automated + human annotation protocol. Benchmarking across nine state-of-the-art LLMs reveals that while completion accuracy is reasonably high, ranking performance—i.e., prioritizing the most appropriate completions—is markedly deficient. All components of ChaI-TeA, including code, data, and evaluation scripts, are publicly released to support standardized research and practical deployment in conversational AI.
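The summary distinguishes completion accuracy (is a correct suggestion produced at all?) from ranking quality (is the best suggestion surfaced first?). A minimal sketch of how these two dimensions might be scored is below; the function names and exact-match criterion are illustrative assumptions, not the ChaI-TeA implementation.

```python
# Hypothetical scoring sketch for autocomplete suggestions.
# Each prompt yields a ranked list of suggestions plus one reference completion.
# Exact string matching is a simplifying assumption; a real protocol would
# likely use softer similarity or human judgments.

def acceptance_accuracy(ranked_suggestion_lists, references, k=1):
    """Fraction of prompts whose reference appears among the top-k suggestions."""
    hits = sum(ref in suggestions[:k]
               for suggestions, ref in zip(ranked_suggestion_lists, references))
    return hits / len(references)

def mean_reciprocal_rank(ranked_suggestion_lists, references):
    """Average of 1/rank of the first matching suggestion (0 if no match)."""
    total = 0.0
    for suggestions, ref in zip(ranked_suggestion_lists, references):
        for rank, s in enumerate(suggestions, start=1):
            if s == ref:
                total += 1.0 / rank
                break
    return total / len(references)
```

A model can score well on accuracy (the reference appears somewhere in the list) while scoring poorly on MRR (it is rarely ranked first), which is exactly the gap the benchmark reports.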
📝 Abstract
The rise of LLMs has shifted a growing portion of human-computer interactions towards LLM-based chatbots. The remarkable abilities of these models allow users to interact using long, diverse natural language text covering a wide range of topics and styles. Phrasing these messages is a time- and effort-consuming task, calling for an autocomplete solution to assist users. We introduce the task of chatbot interaction autocomplete. We present ChaI-TeA: CHat InTEraction Autocomplete, an autocomplete evaluation framework for LLM-based chatbot interactions. The framework includes a formal definition of the task, coupled with suitable datasets and metrics. We use the framework to test 9 models on the defined autocompletion task, finding that while current off-the-shelf models perform fairly well, there is still much room for improvement, mainly in the ranking of the generated suggestions. We provide insights for practitioners working on this task and open new research directions for researchers in the field. We release our framework to serve as a foundation for future research.