AI Summary
This work addresses the vulnerability of existing large language model (LLM)-based trading agents to noise and misinformation, stemming from a "uniform trust" bias that leads to unstable decisions and factual hallucinations. Inspired by human cognition, the authors propose a multi-agent selective consensus framework that dynamically weights information sources based on semantic and numerical consistency. The approach integrates deterministic temporal anchors, a reflective memory mechanism, and a test-time adaptive risk-preference strategy, enhancing decision robustness and risk awareness without requiring additional training. Backtesting in high-noise market environments (Q1 2024 and Q1 2026) demonstrates that the method effectively calibrates LLM trading behavior from extreme risk-return regimes toward a more human-like moderate risk-reward profile.
Abstract
Large language models (LLMs) are increasingly deployed as autonomous agents in financial trading. However, they often exhibit a hazardous behavioral bias that we term uniform trust, whereby retrieved information is implicitly assumed to be factual and heterogeneous sources are treated as equally informative. This assumption stands in sharp contrast to human decision-making, which relies on selective filtering, cross-validation, and experience-driven weighting of information sources. As a result, LLM-based trading systems are particularly vulnerable to multi-source noise and misinformation, amplifying factual hallucinations and leading to unstable risk-return performance. To bridge this behavioral gap, we introduce TrustTrade (Trust-Rectified Unified Selective Trader), a multi-agent selective consensus framework inspired by human epistemic heuristics. TrustTrade replaces uniform trust with cross-agent consistency by aggregating information from multiple independent LLM agents and dynamically weighting signals based on their semantic and numerical agreement. Consistent signals are prioritized, while divergent, weakly grounded, or temporally inconsistent inputs are selectively discounted. To further stabilize decision-making, TrustTrade incorporates deterministic temporal signals as reproducible anchors and a reflective memory mechanism that adapts risk preferences at test time without additional training. Together, these components suppress noise amplification and hallucination-driven volatility, yielding more stable and risk-aware trading behavior. Across controlled backtesting in high-noise market environments (2024 Q1 and 2026 Q1), the proposed TrustTrade calibrates LLM trading behavior from extreme risk-return regimes toward a human-aligned, mid-risk and mid-return profile.
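The abstract's core mechanism, replacing uniform trust with cross-agent consistency weighting, can be illustrated with a minimal sketch. The signal scale, the exponential agreement kernel, and the `temperature` parameter below are illustrative assumptions, not the authors' implementation: each agent emits a numeric trading signal, and signals that diverge from the rest of the committee are selectively discounted rather than trusted equally.

```python
import math

# Hypothetical sketch of selective consensus: signal scale and agreement
# kernel are assumptions for illustration, not TrustTrade's actual method.
def selective_consensus(signals, temperature=0.5):
    """Weight each agent's numeric signal (-1 = strong sell ... +1 = strong
    buy) by its average agreement with the other agents, then return the
    weighted mean. Divergent signals get exponentially smaller weights."""
    weights = []
    for i, s in enumerate(signals):
        others = [t for j, t in enumerate(signals) if j != i]
        # Mean absolute disagreement with the rest of the committee.
        disagreement = sum(abs(s - t) for t in others) / len(others)
        weights.append(math.exp(-disagreement / temperature))
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, signals)) / total

# Three agents roughly agree on a mild buy; one outlier signals strong sell.
consensus = selective_consensus([0.6, 0.5, 0.7, -0.9])
```

Under uniform trust (a plain average) the outlier drags the decision toward neutral; here its low agreement weight suppresses it, so the consensus stays near the agreeing cluster.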