In Agents We Trust, but Who Do Agents Trust? Latent Source Preferences Steer LLM Generations

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language model (LLM) agents exhibit systematic preferences toward specific information sources during content curation, which would compromise the objectivity of their outputs. Through controlled experiments spanning synthetic and real-world task scenarios, the authors systematically evaluate how twelve prominent LLMs from six providers respond when information is attributed to varying sources. The findings reveal that multiple models display strong and predictable source preferences that are context-dependent, resistant to mitigation via prompting, and capable of overriding the intrinsic quality of the content itself. These results provide empirical grounding for phenomena such as political bias in news recommendation systems, highlighting a critical yet previously underexamined dimension of LLM behavior.

📝 Abstract
Agents based on Large Language Models (LLMs) are increasingly being deployed as interfaces to information on online platforms. These agents filter, prioritize, and synthesize information retrieved from the platforms' back-end databases or via web search. In these scenarios, LLM agents govern the information users receive by drawing users' attention to particular instances of retrieved information at the expense of others. While much prior work has focused on biases in the information LLMs themselves generate, less attention has been paid to the factors that influence what information LLMs select and present to users. We hypothesize that when information is attributed to specific sources (e.g., particular publishers, journals, or platforms), current LLMs exhibit systematic latent source preferences: that is, they prioritize information from some sources over others. Through controlled experiments on twelve LLMs from six model providers, spanning both synthetic and real-world tasks, we find that several models consistently exhibit strong and predictable source preferences. These preferences are sensitive to contextual framing, can outweigh the influence of content itself, and persist despite explicit prompting to avoid them. They also help explain phenomena such as the observed left-leaning skew in news recommendations in prior work. Our findings advocate for deeper investigation into the origins of these preferences, as well as for mechanisms that provide users with transparency and control over the biases guiding LLM-powered agents.
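
The paradigm the abstract describes lends itself to a simple probe: attribute identical content to different sources and tally which attribution the model selects. Below is a minimal sketch of such a probe, assuming a hypothetical `query_model` chat wrapper; the source labels, item text, and prompt template are illustrative, not the authors' actual harness.

```python
import itertools
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API; returns the reply text."""
    raise NotImplementedError("plug in an LLM client here")

# Illustrative source labels and a single shared content item.
SOURCES = ("Outlet A", "Outlet B")
ITEM = "A new study reports a 12% drop in urban air pollution."

PROMPT = (
    "Two articles report the same story. Pick ONE to show a user.\n"
    "1. [{src1}] {item}\n"
    "2. [{src2}] {item}\n"
    "Answer with 1 or 2 only."
)

def probe(trials: int = 20) -> Counter:
    votes: Counter = Counter()
    # Alternate label order across trials to control for position bias.
    orders = itertools.cycle([SOURCES, SOURCES[::-1]])
    for _, (s1, s2) in zip(range(trials), orders):
        reply = query_model(PROMPT.format(src1=s1, src2=s2, item=ITEM)).strip()
        votes[s1 if reply.startswith("1") else s2] += 1
    return votes
```

If one label wins well above 50% of the votes even though the content is identical and label positions alternate, that is evidence of a latent source preference of the kind the paper measures.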
Problem

Research questions and friction points this paper addresses.

LLM agents · source preferences · information bias · content selection · latent preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent source preferences · LLM agents · information bias · source attribution · algorithmic transparency