🤖 AI Summary
This work identifies and defines a novel bias in large language models (LLMs) termed *expression leakage*: the systematic generation of affective, semantically irrelevant expressions, particularly under negative contextual influence, despite neutral input semantics. To investigate this phenomenon systematically, we introduce the first dedicated benchmark dataset (automatically constructed from Common Crawl free text), an automated evaluation pipeline, and a human-annotated validation framework. Experimental results show that expression leakage attenuates with increasing parameter count within a model family, that prompt engineering offers only limited mitigation, and that effective suppression requires architectural intervention at the modeling stage (e.g., explicit affective decoupling). Our study advances understanding of LLMs' affective robustness and contextual sensitivity, providing both conceptual insight and a reproducible, multi-layered evaluation paradigm for future research on emotion-contaminated generation.
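To make the probe concrete, here is a minimal sketch of how such a leakage test might look: prepend sentiment-charged but semantically unrelated context to a neutral prompt, then score the sentiment of only the newly generated continuation. The model choice (`gpt2`), the off-the-shelf SST-2 sentiment classifier, and the example prompts are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical expression-leakage probe (illustrative, not the authors' setup).
# Idea: inject affective but semantically unrelated context before a neutral
# prompt and check whether the continuation picks up that sentiment.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")  # default SST-2 classifier

NEUTRAL_PROMPT = "The train from the airport to the city centre departs"
CONTEXTS = {
    "none": "",
    "positive": "What a wonderful, joyful morning it has been. ",
    "negative": "Everything today has been miserable and hopeless. ",
}

for label, context in CONTEXTS.items():
    full_prompt = context + NEUTRAL_PROMPT
    completion = generator(full_prompt, max_new_tokens=40, do_sample=False)
    # Score only the generated continuation, not the injected context.
    continuation = completion[0]["generated_text"][len(full_prompt):]
    score = sentiment(continuation)[0]
    print(f"{label:>8}: {score['label']} ({score['score']:.2f}) -> {continuation!r}")
```

A leakage signal here would be the continuation's sentiment tracking the injected context (especially the negative one) even though the prompt itself is neutral.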
📝 Abstract
Large language models (LLMs) have advanced natural language processing (NLP) through mechanisms such as next-token prediction and self-attention, but their ability to integrate broad context also makes them prone to incorporating irrelevant information. Prior work has focused on semantic leakage, the bias introduced by semantically irrelevant context. In this paper, we introduce expression leakage, a novel phenomenon where LLMs systematically generate sentiment-charged expressions that are semantically unrelated to the input context. To analyse expression leakage, we collect a benchmark dataset together with a scheme for automatically generating such datasets from free-form Common Crawl text. In addition, we propose an automatic evaluation pipeline that correlates well with human judgment, which accelerates benchmarking by removing the need for human annotation of each analysed model. Our experiments show that, within the same LLM family, expression leakage decreases as the parameter count grows. On the other hand, we demonstrate that expression leakage cannot be mitigated by prompting and instead requires specific care during the model building process. Finally, our experiments indicate that negative sentiment injected into the prompt disrupts the generation process more than positive sentiment, causing a higher expression leakage rate.
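The abstract mentions a scheme for automatically generating the benchmark from free-form Common Crawl text without spelling out its details, so the following is only an illustrative sketch under assumed heuristics: keep well-formed "carrier" sentences that a binary sentiment classifier scores with low confidence, treating that as a crude neutrality proxy. The length bounds, the confidence threshold, and the `harvest` helper are hypothetical.

```python
# Illustrative sketch of harvesting sentiment-neutral sentences from free-form
# text (e.g., a Common Crawl dump). Heuristics and thresholds are assumptions,
# not the paper's actual construction scheme.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default binary SST-2 classifier

def is_neutral(sentence: str, max_confidence: float = 0.75) -> bool:
    """Treat low-confidence polarity as a proxy for sentiment neutrality."""
    result = sentiment(sentence)[0]
    return result["score"] < max_confidence

def harvest(lines):
    """Yield well-formed, near-neutral sentences from raw text lines."""
    for line in lines:
        line = line.strip()
        # Basic well-formedness filters: length bounds, sentence-final period.
        if 40 <= len(line) <= 200 and line.endswith(".") and is_neutral(line):
            yield line

sample = [
    "The library opens at nine and closes at five on weekdays.",
    "I absolutely loved every second of that breathtaking film!",
    "Water boils at 100 degrees Celsius at sea level.",
]
print(list(harvest(sample)))
```

Sentences that survive such a filter could then be paired with injected positive or negative contexts, as in the probe sketched above, to measure the leakage rate per model.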