🤖 AI Summary
This vision paper examines the significant impact of user conversational behavior on the energy consumption and carbon footprint of LLM-based chatbots, identifying four key problems: (1) extended dialogues increase token-generation costs; (2) real-time response requirements impede energy-efficient scheduling; (3) habitual daily interactions amplify cumulative computational load; and (4) context expansion escalates memory overhead. Methodologically, we propose an interdisciplinary framework integrating human-computer interaction (HCI) analysis, empirical LLM inference profiling, and fine-grained energy modeling. Our contribution is the novel concept of “dialogue norms as sustainability levers,” positioning human-AI interaction design at the core of AI carbon governance. We systematically identify and quantify four intervention-ready dialogue dimensions—length, latency tolerance, interaction frequency, and context retention—thereby establishing theoretical foundations and practical pathways for low-power interface design, latency-tolerant protocols, and context-aware compression mechanisms.
📝 Abstract
LLM-based chatbots have become central interfaces in technical, educational, and analytical domains, supporting tasks such as code reasoning, problem solving, and information exploration. As these systems scale, sustainability concerns have intensified, with most assessments focusing on model architecture, hardware efficiency, and deployment infrastructure. However, existing mitigation efforts largely overlook how user interaction practices themselves shape the energy profile of LLM-based systems. In this vision paper, we argue that interaction-level behavior appears to be an underexamined factor shaping the environmental impact of LLM-based systems, and we present this issue across four dimensions. First, extended conversational patterns increase token production and raise the computational cost of inference. Second, expectations of instant responses limit opportunities for energy-aware scheduling and workload consolidation. Third, everyday user habits contribute to cumulative operational demand in ways that are rarely quantified. Fourth, the accumulation of context affects memory requirements and reduces the efficiency of long-running dialogues. Addressing these challenges requires rethinking how chatbot interactions are designed and conceptualized, and adopting perspectives that recognize sustainability as partly dependent on the conversational norms through which users engage with LLM-based systems.
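The interplay of the four dimensions can be made concrete with a toy back-of-the-envelope cost model. The sketch below is illustrative only: the `SessionProfile` fields, the per-token energy constant, and the batching discount are all hypothetical assumptions introduced here, not measurements or parameters from the paper. It shows why retained context can make per-session cost grow roughly quadratically in the number of turns (each turn re-processes the accumulated history), and how latency tolerance and interaction frequency scale the total.

```python
# Hypothetical toy model (not from the paper) of how the four interaction
# dimensions -- length, latency tolerance, frequency, context retention --
# could shape daily inference energy. All constants are assumptions.

from dataclasses import dataclass


@dataclass
class SessionProfile:
    turns: int                # dialogue length (user turns per session)
    avg_tokens_per_turn: int  # tokens processed per turn, assumed constant
    latency_tolerant: bool    # may responses be batched/deferred?
    sessions_per_day: int     # interaction frequency
    context_retained: bool    # is the full history re-fed each turn?


ENERGY_PER_TOKEN_J = 0.002   # assumed joules per processed token
BATCHING_DISCOUNT = 0.7      # assumed saving from energy-aware scheduling


def daily_energy_joules(p: SessionProfile) -> float:
    """Rough per-day energy estimate under the stated assumptions."""
    if p.context_retained:
        # Re-processing the growing history: turn k costs ~k units, so
        # total tokens grow quadratically in the number of turns.
        tokens = p.avg_tokens_per_turn * p.turns * (p.turns + 1) / 2
    else:
        tokens = p.avg_tokens_per_turn * p.turns
    energy = tokens * ENERGY_PER_TOKEN_J
    if p.latency_tolerant:
        energy *= BATCHING_DISCOUNT
    return energy * p.sessions_per_day


chatty = SessionProfile(turns=20, avg_tokens_per_turn=300,
                        latency_tolerant=False, sessions_per_day=5,
                        context_retained=True)
concise = SessionProfile(turns=5, avg_tokens_per_turn=300,
                         latency_tolerant=True, sessions_per_day=5,
                         context_retained=False)
print(daily_energy_joules(chatty), daily_energy_joules(concise))
```

Even with these made-up constants, the model makes the structural point: the long, latency-sensitive, context-retaining profile costs orders of magnitude more than the concise, schedulable one, which is precisely why dialogue norms are plausible sustainability levers.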