Am I Blue or Is My Hobby Counting Teardrops? Expression Leakage in Large Language Models as a Symptom of Irrelevancy Disruption

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies and defines a novel bias in large language models (LLMs) termed *expression leakage*: the systematic generation of affective, semantically irrelevant expressions—particularly under negative contextual influence—despite neutral input semantics. To systematically investigate this phenomenon, we introduce the first dedicated benchmark dataset (automatically constructed from CommonCrawl free text), an automated evaluation pipeline, and a human-annotated validation framework. Experimental results reveal that prompt engineering offers limited mitigation, that expression leakage attenuates with increasing parameter count within a model family, and that effective suppression requires architectural interventions at the modeling stage—e.g., explicit affective decoupling. Our study advances understanding of LLMs’ affective robustness and contextual sensitivity, providing both conceptual insight and a reproducible, multi-layered evaluation paradigm for future research on emotion-contaminated generation.

📝 Abstract
Large language models (LLMs) have advanced natural language processing (NLP) through mechanisms such as next-token prediction and self-attention, but their ability to integrate broad context also makes them prone to incorporating irrelevant information. Prior work has focused on semantic leakage, a bias introduced by semantically irrelevant context. In this paper, we introduce expression leakage, a novel phenomenon where LLMs systematically generate sentimentally charged expressions that are semantically unrelated to the input context. To analyse expression leakage, we collect a benchmark dataset along with a scheme to automatically generate such a dataset from CommonCrawl free-form text. In addition, we propose an automatic evaluation pipeline that correlates well with human judgment, accelerating benchmarking by removing the need for human annotation of each analysed model. Our experiments show that, as model size increases within the same LLM family, expression leakage decreases. On the other hand, we demonstrate that mitigating expression leakage requires specific care during the model-building process and cannot be achieved by prompting alone. In addition, our experiments indicate that negative sentiment injected into the prompt disrupts the generation process more than positive sentiment, causing a higher expression-leakage rate.
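The evaluation idea described in the abstract can be illustrated with a minimal sketch: flag model outputs that contain sentiment-charged expressions despite coming from semantically neutral prompts, and report the fraction flagged as the leakage rate. The lexicon, the scoring rule, and the example outputs below are hypothetical illustrations, not the paper's actual pipeline.

```python
# Hypothetical sketch of a leakage-rate metric: an output generated from a
# neutral prompt counts as "leaked" if it contains any affect-charged word
# from a small (made-up) lexicon. The real paper uses a learned pipeline
# validated against human annotation; this is only a toy analogue.

NEGATIVE = {"blue", "teardrops", "gloomy", "hopeless", "miserable"}
POSITIVE = {"joyful", "delighted", "wonderful", "cheerful"}

def is_leaked(output: str) -> bool:
    """Flag an output as leaked if it contains any charged expression."""
    tokens = {t.strip(".,!?").lower() for t in output.split()}
    return bool(tokens & (NEGATIVE | POSITIVE))

def leakage_rate(outputs: list[str]) -> float:
    """Fraction of outputs containing semantically irrelevant affect."""
    if not outputs:
        return 0.0
    return sum(is_leaked(o) for o in outputs) / len(outputs)

# Continuations of a neutral prompt such as "My hobby is ..."
outputs = [
    "My hobby is counting teardrops on rainy days.",    # leaked (negative)
    "My hobby is collecting stamps from old letters.",  # clean
    "My hobby is hiking in the nearby hills.",          # clean
    "My hobby is feeling blue about the weather.",      # leaked (negative)
]
print(leakage_rate(outputs))  # 0.5
```

In the paper's setup, a metric of this shape would then be compared across model sizes and across positive versus negative sentiment injection; the lexicon lookup here stands in for the automatic evaluator that the authors validate against human judgment.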
Problem

Research questions and friction points this paper is trying to address.

LLMs generate sentimentally charged unrelated expressions
Expression leakage increases with negative sentiment injection
Mitigating leakage requires model-building care, not just prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark dataset from common-crawl free-form text
Automatic evaluation pipeline matching human judgment
Parameter scaling reduces expression leakage
Berkay Köprü
audEERING GmbH, Gilching, Germany
Mehrzad Mashal
Agile Robots SE, Munich, Germany
Yigit Gurses
Agile Robots SE, Munich, Germany
Akos Kadar
Agile Robots SE, Munich, Germany
Maximilian Schmitt
audEERING GmbH, Gilching, Germany
Ditty Mathew
audEERING GmbH, Gilching, Germany
Felix Burkhardt
audEERING
Speech and language processing
Florian Eyben
audEERING GmbH
Emotion Recognition, Affective Computing, Signal Processing, Speech Recognition, Machine Learning
Björn W. Schuller
Chair of Health Informatics, Technical University of Munich, Germany; Group on Language, Audio & Music, Imperial College London, U.K.