TrustTrade: Human-Inspired Selective Consensus Reduces Decision Uncertainty in LLM Trading Agents

πŸ“… 2026-03-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of existing large language model (LLM)-based trading agents to noise and misinformation, stemming from a β€œuniform trust” bias that leads to unstable decisions and factual hallucinations. Inspired by human cognition, the authors propose a multi-agent selective consensus framework that dynamically weights information sources based on semantic and numerical consistency. The approach integrates deterministic temporal anchors, a reflective memory mechanism, and a test-time adaptive risk-preference strategy, enhancing decision robustness and risk awareness without requiring additional training. Backtesting in high-noise market environments (Q1 2024 and Q1 2026) demonstrates that the method effectively calibrates LLM trading behavior from extreme risk-return regimes toward a more human-like moderate risk-reward profile.

πŸ“ Abstract
Large language models (LLMs) are increasingly deployed as autonomous agents in financial trading. However, they often exhibit a hazardous behavioral bias that we term uniform trust, whereby retrieved information is implicitly assumed to be factual and heterogeneous sources are treated as equally informative. This assumption stands in sharp contrast to human decision-making, which relies on selective filtering, cross-validation, and experience-driven weighting of information sources. As a result, LLM-based trading systems are particularly vulnerable to multi-source noise and misinformation, amplifying factual hallucinations and leading to unstable risk-return performance. To bridge this behavioral gap, we introduce TrustTrade (Trust-Rectified Unified Selective Trader), a multi-agent selective consensus framework inspired by human epistemic heuristics. TrustTrade replaces uniform trust with cross-agent consistency by aggregating information from multiple independent LLM agents and dynamically weighting signals based on their semantic and numerical agreement. Consistent signals are prioritized, while divergent, weakly grounded, or temporally inconsistent inputs are selectively discounted. To further stabilize decision-making, TrustTrade incorporates deterministic temporal signals as reproducible anchors and a reflective memory mechanism that adapts risk preferences at test time without additional training. Together, these components suppress noise amplification and hallucination-driven volatility, yielding more stable and risk-aware trading behavior. Across controlled backtesting in high-noise market environments (2024 Q1 and 2026 Q1), the proposed TrustTrade calibrates LLM trading behavior from extreme risk-return regimes toward a human-aligned, mid-risk and mid-return profile.
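The abstract's core mechanism, replacing uniform trust with cross-agent consistency weighting, can be illustrated with a minimal sketch. The weighting rule below (discounting each agent's signal by its divergence from the peer median, with a hypothetical `tolerance` scale) is an illustrative assumption, not the paper's actual formula, which is not specified on this page.

```python
from statistics import median

def selective_consensus(signals: dict[str, float], tolerance: float = 0.05):
    """Aggregate numeric signals (e.g., predicted returns) from multiple
    LLM agents, down-weighting outliers instead of trusting all sources
    equally. Hypothetical sketch of the selective-consensus idea."""
    weights = {}
    for agent, value in signals.items():
        # Compare each agent against the median of its peers:
        # consistent signals keep high weight, divergent ones are discounted.
        peers = [v for a, v in signals.items() if a != agent]
        divergence = abs(value - median(peers))
        weights[agent] = 1.0 / (1.0 + divergence / tolerance)
    total = sum(weights.values())
    weights = {a: w / total for a, w in weights.items()}
    consensus = sum(weights[a] * signals[a] for a in signals)
    return consensus, weights
```

Under this rule, a single noisy or hallucinated signal pulls the consensus far less than it would under a uniform-trust average, which is the behavioral gap the paper targets.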
Problem

Research questions and friction points this paper is trying to address:

uniform trust · decision uncertainty · LLM trading agents · multi-source noise · factual hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out:

selective consensus · uniform trust · cross-agent consistency · reflective memory · deterministic temporal signals
Minghan Li
Harvard AI and Robotics Lab, Harvard University
Rachel Gonsalves
Harvard AI and Robotics Lab, Harvard University; Harvard Business School, Harvard University
Weiyue Li
Harvard AI and Robotics Lab, Harvard University
Sunghoon Yoon
Daegu Gyeongbuk Institute of Science and Technology
Mengyu Wang
Assistant Professor, Harvard Medical School
Artificial Intelligence · Machine Learning · Ophthalmology · Glaucoma · Computational Mechanics