🤖 AI Summary
This work proposes a localized intelligent agent platform embedded within a LaTeX editor to address critical challenges in academic writing with large language models (LLMs), including hallucination, integrity risks, and privacy leakage. The platform pioneers the integration of LLM tools directly into local writing environments, coupling dynamic domain-aware retrieval routing, context-aware query generation, and paragraph-level semantic validation to enable efficient coordination with trusted academic repositories. By doing so, it accurately retrieves verifiable, traceable references without transmitting user data externally, thereby significantly enhancing the credibility and efficiency of scholarly writing while preserving data privacy.
📝 Abstract
Large language models (LLMs) have created new opportunities to enhance the efficiency of scholarly activities; however, challenges persist in the ethical deployment of AI assistance, including (1) the trustworthiness of AI-generated content, (2) preservation of academic integrity and intellectual property, and (3) protection of information privacy. In this work, we present CiteLLM, a specialized agentic platform designed to enable trustworthy reference discovery for grounding author-drafted claims and statements. The system introduces a novel interaction paradigm by embedding LLM utilities directly within the LaTeX editor environment, ensuring a seamless user experience with no data transmitted outside the local system. To guarantee hallucination-free references, we employ dynamic discipline-aware routing to retrieve candidates exclusively from trusted web-based academic repositories, while leveraging LLMs solely for generating context-aware search queries, ranking candidates by relevance, and validating and explaining support through paragraph-level semantic matching and an integrated chatbot. Evaluation results demonstrate the superior performance of the proposed system in returning valid and highly usable references.
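The pipeline described above (discipline-aware routing → query generation → candidate ranking and validation) can be sketched as follows. This is a minimal illustrative outline only: the repository names, routing table, keyword-overlap scoring, and function names are hypothetical placeholders standing in for the paper's LLM-driven components, not CiteLLM's actual implementation.

```python
# Hypothetical sketch of a CiteLLM-style reference-discovery pipeline.
# Repository names, routing rules, and the overlap-based scoring below are
# illustrative assumptions; the real system uses LLMs for query generation,
# ranking, and paragraph-level semantic validation.

REPOSITORIES = {
    "computer_science": "dblp",
    "biomedicine": "pubmed",
    "general": "crossref",
}

def route(discipline: str) -> str:
    """Discipline-aware routing: pick a trusted repository for the claim's field."""
    return REPOSITORIES.get(discipline, REPOSITORIES["general"])

def generate_query(claim: str) -> str:
    """Stand-in for the LLM step turning an author-drafted claim into a search query."""
    stopwords = {"the", "a", "an", "of", "in", "we", "that", "is", "are", "with"}
    terms = [w.strip(".,").lower() for w in claim.split()]
    return " ".join(w for w in terms if w not in stopwords)

def rank_and_validate(claim: str, candidates: list[dict]) -> list[dict]:
    """Stand-in for relevance ranking plus paragraph-level validation:
    keep only candidates whose abstract overlaps the claim, best first."""
    claim_terms = set(generate_query(claim).split())
    scored = []
    for cand in candidates:
        overlap = len(claim_terms & set(cand["abstract"].lower().split()))
        if overlap > 0:  # validation step: discard unsupported candidates
            scored.append((overlap, cand))
    # Sort on overlap only, so candidate dicts are never compared directly.
    return [cand for _, cand in sorted(scored, key=lambda t: -t[0])]
```

In the actual system, each placeholder step would be backed by an LLM call (query generation, ranking, semantic matching) while retrieval itself stays restricted to trusted repositories, which is what keeps the returned references verifiable and traceable.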