Temporal Blindness in Multi-Turn LLM Agents: Misaligned Tool Use vs. Human Time Perception

📅 2025-10-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language model (LLM) agents suffer from “temporal blindness” in multi-turn dialogues: they fail to perceive real-world time progression, which leads to suboptimal tool-invocation decisions, either omitting necessary calls by over-relying on historical context or redundantly repeating them. This paper introduces TicToc-v1, the first benchmark to explicitly augment dialogue contexts with timestamps and to establish a human-preference-based evaluation framework for time-sensitive tool-calling decisions. Through timestamp augmentation, multi-turn trajectory sampling, and comparative human annotation, the authors systematically assess the temporal alignment of tool invocation across LLMs of varying scales. Without time information, most models perform only slightly better than random chance, with a top alignment rate just over 60%; even with timestamp augmentation, the best alignment rate peaks at around 65%. These findings expose a critical deficit in current LLMs’ temporal awareness and underscore the need for dedicated post-training alignment methods targeting time consciousness.
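
The core intervention is easy to picture. Below is a minimal sketch of what timestamp augmentation could look like, assuming an OpenAI-style message list; the field names (`role`, `content`, `timestamp`) and the timestamp format are illustrative assumptions, not the paper's actual schema.

```python
from datetime import datetime

def augment_with_timestamps(messages):
    """Prefix each message's content with its wall-clock timestamp so the
    model can see how much real time elapsed between turns."""
    augmented = []
    for msg in messages:
        stamp = msg["timestamp"].strftime("%Y-%m-%d %H:%M:%S")
        augmented.append({"role": msg["role"],
                          "content": f"[{stamp}] {msg['content']}"})
    return augmented

# Example: two user turns separated by six hours -- without the prefixes,
# the model has no way to notice the gap.
history = [
    {"role": "user", "content": "What's the weather in Paris?",
     "timestamp": datetime(2025, 10, 27, 8, 0)},
    {"role": "user", "content": "Do I still need an umbrella?",
     "timestamp": datetime(2025, 10, 27, 14, 0)},
]
print(augment_with_timestamps(history)[1]["content"])
# [2025-10-27 14:00:00] Do I still need an umbrella?
```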

📝 Abstract
Large language model agents are increasingly used in multi-turn conversational settings to interact with and execute tasks in dynamic environments. However, a key limitation is their temporal blindness: by default, they operate with a stationary context, failing to account for the real-world time elapsed between messages. This becomes a critical liability when an agent must decide whether to invoke a tool based on how much time has passed since the last observation. Without temporal awareness, agents often either over-rely on previous context (skipping necessary tool calls) or under-rely on it (unnecessarily repeating tool calls). To study this challenge, we introduce TicToc-v1, a test set of multi-turn user-agent trajectories across 34 scenarios with varying time sensitivity. Each trajectory ends with a user question, where the need for a tool call depends on the amount of time elapsed since the last message. To give LLMs temporal context, we augment dialogue messages with explicit timestamps, bridging the gap between static dialogue and evolving environments. We then collected human preferences for these samples, creating two subsets: one where humans preferred relying on the previous observation (prefer-noTool), and another where they preferred a new tool call (prefer-Tool). We evaluated how well LLM tool-calling decisions align with human preferences under varying time intervals on TicToc-v1. Our analysis shows that without time information, most models perform only slightly better than random, with the top alignment rate being just over 60%. While adding timestamps leads to a slight improvement, particularly for larger models, the gain is modest, peaking at around 65%. We also show that naive, prompt-based alignment has limited effectiveness. Our findings highlight the need for dedicated post-training to align multi-turn LLM tool use with human temporal perception.
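
To make the evaluation metric concrete, here is a minimal sketch of how an alignment rate could be computed against the two human-preference subsets; the label strings and data layout are assumptions for illustration, not the paper's released format.

```python
def alignment_rate(samples):
    """Fraction of samples where the model's binary decision (did it call
    the tool?) matches the human-preferred action for that sample.

    Each sample is a (model_called_tool: bool, human_label: str) pair,
    with human_label in {"prefer-Tool", "prefer-noTool"}.
    """
    matches = sum(
        called == (label == "prefer-Tool")
        for called, label in samples
    )
    return matches / len(samples)

# Toy check: 3 of 4 decisions match the human preference -> 0.75.
samples = [
    (True,  "prefer-Tool"),
    (False, "prefer-noTool"),
    (True,  "prefer-noTool"),   # redundant call: misaligned
    (False, "prefer-noTool"),
]
print(alignment_rate(samples))  # 0.75
```

Random guessing lands at 50% on a balanced split, which is why the reported baseline rates of just over 60% indicate only marginal temporal awareness.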
Problem

Research questions and friction points this paper is trying to address.

LLM agents lack temporal awareness in multi-turn conversations
Agents misuse tools because they ignore real-world time intervals between messages
Current models align tool calls poorly with human time perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augmenting dialogue messages with explicit timestamps
Creating a test set of 34 multi-turn scenarios with varying time sensitivity (see the sketch after this list)
Evaluating tool-calling alignment with human temporal preferences
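
For intuition about the decision the benchmark probes, a deliberately naive baseline might look like the following; the scenario names and staleness thresholds are invented for illustration and are not taken from TicToc-v1's 34 scenarios.

```python
from datetime import datetime, timedelta

# Hypothetical per-scenario staleness thresholds; TicToc-v1's actual
# scenarios and their time sensitivities are not reproduced here.
STALENESS = {
    "stock_price": timedelta(minutes=5),
    "weather": timedelta(hours=3),
    "restaurant_menu": timedelta(days=30),
}

def should_reinvoke(scenario, last_observed, now):
    """Call the tool again only if the previous observation is older than
    the scenario's staleness threshold; otherwise reuse the old result."""
    return (now - last_observed) > STALENESS[scenario]

# A weather reading from this morning is stale by mid-afternoon...
print(should_reinvoke("weather",
                      datetime(2025, 10, 27, 8, 0),
                      datetime(2025, 10, 27, 14, 0)))   # True
# ...but a month-old menu lookup is probably still usable.
print(should_reinvoke("restaurant_menu",
                      datetime(2025, 10, 1),
                      datetime(2025, 10, 27)))          # False
```

Human temporal preference is richer than any fixed threshold table, which is consistent with the paper's finding that prompt-based fixes help only modestly.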