🤖 AI Summary
This work addresses the lack of effective copyright protection mechanisms for trajectory data generated by large language model (LLM) agents, which are vulnerable to theft and difficult to trace. To this end, we propose ActHook, a black-box watermarking method based on a hooking mechanism that, for the first time, adapts the concept of software engineering hooks to protect LLM agent trajectories. By leveraging a secret key to trigger covert hook actions at decision points, ActHook embeds detectable copyright identifiers without altering task logic or degrading performance. Experimental results demonstrate that ActHook achieves an average watermark detection AUC of 94.3% on Qwen-2.5-Coder-7B across diverse tasks—including mathematical reasoning, web search, and software engineering—validating its effectiveness and practicality.
📝 Abstract
LLM agents rely heavily on high-quality trajectory data to guide their problem-solving behaviors, yet producing such data requires substantial task design, high-capacity model generation, and manual filtering. Despite the high cost of creating these datasets, existing literature has overlooked copyright protection for LLM agent trajectories. This gap leaves creators vulnerable to data theft and makes it difficult to trace misuse or enforce ownership rights. This paper introduces ActHook, the first watermarking method tailored for agent trajectory datasets. Inspired by hook mechanisms in software engineering, ActHook embeds hook actions that are activated by a secret input key and do not alter the original task outcome. Because LLM agents, like software programs, execute sequentially, hook actions can be inserted at decision points without disrupting the task flow. When the activation key is present, an LLM agent trained on watermarked trajectories produces these hook actions at a significantly higher rate, enabling reliable black-box detection. Experiments on mathematical reasoning, web search, and software engineering agents show that ActHook achieves an average detection AUC of 94.3% on Qwen-2.5-Coder-7B while incurring negligible performance degradation.
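The detection idea described above can be sketched in a few lines: an agent trained on watermarked trajectories emits the hook action at a much higher rate when queried with the key, so comparing per-trajectory hook rates between keyed and unkeyed runs yields a high AUC. The hook action name, emission probabilities, and trajectory lengths below are illustrative assumptions for a toy simulation, not values from the paper.

```python
import random

random.seed(0)

HOOK_ACTION = "log_status()"  # hypothetical hook action token, not from the paper


def hook_rate(trajectory):
    """Fraction of steps in a trajectory that are the hook action."""
    return sum(step == HOOK_ACTION for step in trajectory) / len(trajectory)


def simulate(n_steps, p_hook):
    """Toy trajectory: each step is the hook action with probability p_hook."""
    return [HOOK_ACTION if random.random() < p_hook else "task_step"
            for _ in range(n_steps)]


# Assumed rates: a watermarked agent queried WITH the secret key emits hooks
# often (0.30 here); without the key, rarely (0.05). Both values are illustrative.
keyed = [hook_rate(simulate(50, 0.30)) for _ in range(200)]
unkeyed = [hook_rate(simulate(50, 0.05)) for _ in range(200)]


def auc(pos, neg):
    """AUC = probability a keyed run scores above an unkeyed run (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


print(f"detection AUC: {auc(keyed, unkeyed):.3f}")
```

Because detection only needs the agent's output actions and the key, this check works in a black-box setting, matching the paper's threat model.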