🤖 AI Summary
This work addresses the privacy risk that large language models (LLMs) can inadvertently infer author identities from texts such as news articles. To mitigate this, the authors propose SALA, an interpretable agent that integrates stylometric features with LLM-based reasoning to perform robust authorship attribution. SALA leverages database-augmented reasoning to generate traceable decision pathways, which in turn inform a guided rewriting mechanism that substantially reduces text identifiability while preserving semantic integrity. Experimental results on a large-scale news dataset demonstrate that SALA achieves high accuracy in author identification and effectively safeguards author privacy, offering a systematic defense against the de-anonymization risks inherent in LLM applications.
📝 Abstract
The rapid advancement of large language models (LLMs) has enabled powerful authorship inference capabilities, raising growing concerns about unintended de-anonymization risks in textual data such as news articles. In this work, we introduce an LLM agent designed to evaluate and mitigate such risks through a structured, interpretable pipeline. Central to our framework is the proposed $\textit{SALA}$ (Stylometry-Assisted LLM Analysis) method, which integrates quantitative stylometric features with LLM reasoning for robust and transparent authorship attribution. Experiments on large-scale news datasets demonstrate that $\textit{SALA}$, particularly when augmented with a database module, achieves high inference accuracy across various scenarios. Finally, we propose a guided recomposition strategy that leverages the agent's reasoning trace to generate rewriting prompts, effectively reducing authorship identifiability while preserving textual meaning. Our findings highlight both the de-anonymization potential of LLM agents and the importance of interpretable, proactive defenses for safeguarding author privacy.
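To make the "quantitative stylometric features" concrete, the sketch below computes a small, interpretable feature vector of the kind such a pipeline could feed into LLM reasoning. The specific features (average sentence length, type-token ratio, function-word rate, comma density) and the `FUNCTION_WORDS` set are illustrative assumptions, not the actual feature set used by SALA.

```python
import re
from collections import Counter

# Illustrative function-word list (an assumption, not SALA's actual lexicon);
# function-word frequencies are a classic authorship-attribution signal.
FUNCTION_WORDS = {"the", "of", "and", "to", "in", "a", "that", "is"}

def stylometric_features(text: str) -> dict:
    """Compute a small, interpretable stylometric profile of a text."""
    # Naive sentence split on terminal punctuation; keep non-empty pieces.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    n_words = len(words)
    counts = Counter(words)
    return {
        # Average sentence length in words.
        "avg_sentence_len": n_words / max(len(sentences), 1),
        # Lexical richness: distinct words / total words.
        "type_token_ratio": len(counts) / max(n_words, 1),
        # Share of tokens that are common function words.
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / max(n_words, 1),
        # Comma density per word, a simple punctuation-habit feature.
        "comma_rate": text.count(",") / max(n_words, 1),
    }
```

Because every feature is a named, human-readable quantity, a reasoning trace over such a vector stays traceable, which is what would allow the guided recomposition step to target the features that most identify the author.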