🤖 AI Summary
AI-mediated communication suffers from a cycle of "generation inflation" and "compression reconstruction," in which neither party engages with authentic content, resulting in information distortion and eroded trust. Method: This paper proposes the LAAC framework, which reconfigures large language models (LLMs) as trustworthy intelligent communication intermediaries. It addresses three core challenges: accurate intent extraction, structural knowledge consistency across multi-turn interactions, and response reliability for recipients, specifically mitigating hallucination, source confusion, and fabrication. LAAC introduces a bidirectional interaction paradigm centered on intent understanding, departs from conventional unidirectional generation, and pioneers a three-dimensional evaluation metric for LLM communication trustworthiness, capturing information fidelity, reproducibility, and query-response completeness. It employs a multi-agent architecture integrating controlled experimentation, intent recognition, structured knowledge representation, and counterfactual comparative analysis. Results: Empirical findings reveal significant trust deficits in current LLMs under high-stakes scenarios, establishing both theoretical foundations and actionable optimization pathways for trustworthy AI communication systems.
📄 Abstract
The proliferation of AI-generated content has created an absurd communication theater in which senders use LLMs to inflate simple ideas into verbose content, recipients use LLMs to compress them back into summaries, and, as a consequence, neither party engages with authentic content. LAAC (LLM as a Communicator) proposes a paradigm shift: positioning LLMs as intelligent communication intermediaries that capture the sender's intent through structured dialogue and facilitate genuine knowledge exchange with recipients. Rather than perpetuating cycles of AI-generated inflation and compression, LAAC enables authentic communication across diverse contexts, including academic papers, proposals, professional emails, and cross-platform content generation. However, deploying LLMs as trusted communication intermediaries raises critical questions about information fidelity, consistency, and reliability. This position paper systematically evaluates the trustworthiness requirements for LAAC's deployment across multiple communication domains. We investigate three fundamental dimensions: (1) Information Capture Fidelity - accuracy of intent extraction during sender interviews across different communication types; (2) Reproducibility - consistency of structured knowledge across multiple interaction instances; and (3) Query Response Integrity - reliability of recipient-facing responses without hallucination, source conflation, or fabrication. Through controlled experiments spanning multiple LAAC use cases, we assess these trust dimensions using LAAC's multi-agent architecture. Preliminary findings reveal measurable trust gaps that must be addressed before LAAC can be reliably deployed in high-stakes communication scenarios.