Chronologically Consistent Large Language Models

📅 2025-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In social science research, applying large language models (LLMs) risks lookahead bias and training data leakage, undermining the validity of historical backtests. To address this, the authors introduce ChronoBERT and ChronoGPT, a suite of chronologically consistent LLMs: each model vintage is pretrained exclusively on text that was available *before* its cutoff date, so model parameters never reflect future information. On standard NLP benchmarks, ChronoBERT matches or exceeds BERT's performance and remains competitive with larger open-weight models. In a stock return forecasting application on financial news, its real-time predictions achieve a Sharpe ratio comparable to state-of-the-art models, indicating that lookahead bias is modest and demonstrating the practical feasibility of time-consistent LLMs in social science applications.

📝 Abstract
Large language models are increasingly used in social sciences, but their training data can introduce lookahead bias and training leakage. A good chronologically consistent language model requires efficient use of training data to maintain accuracy despite time-restricted data. Here, we overcome this challenge by training a suite of chronologically consistent large language models, ChronoBERT and ChronoGPT, which incorporate only the text data that would have been available at each point in time. Despite this strict temporal constraint, our models achieve strong performance on natural language processing benchmarks, outperforming or matching widely used models (e.g., BERT), and remain competitive with larger open-weight models. Lookahead bias is model and application-specific because even if a chronologically consistent language model has poorer language comprehension, a regression or prediction model applied on top of the language model can compensate. In an asset pricing application predicting next-day stock returns from financial news, we find that ChronoBERT's real-time outputs achieve a Sharpe ratio comparable to state-of-the-art models, indicating that lookahead bias is modest. Our results demonstrate a scalable, practical framework to mitigate training leakage, ensuring more credible backtests and predictions across finance and other social science domains.
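The core idea in the abstract is that each model vintage may train only on text that existed before that vintage's cutoff date. A minimal sketch of that filtering step, with hypothetical documents and dates (the actual corpus and cutoffs are specific to the paper):

```python
from datetime import date

# Hypothetical illustration: a chronologically consistent training pipeline
# admits a document into a model vintage's corpus only if the document was
# published strictly before that vintage's knowledge cutoff.

corpus = [
    {"text": "Fed raises rates.", "published": date(2007, 6, 1)},
    {"text": "Markets rally on earnings.", "published": date(2015, 3, 9)},
    {"text": "New AI chip announced.", "published": date(2023, 11, 2)},
]

def training_slice(documents, cutoff):
    """Return the texts of all documents available before the cutoff date."""
    return [d["text"] for d in documents if d["published"] < cutoff]

# A 2016-vintage model may train on the first two documents only.
print(training_slice(corpus, date(2016, 1, 1)))
```

The strict inequality matters: a document timestamped at the cutoff itself is excluded, which is what keeps the backtest free of information the model could not have had in real time.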
Problem

Research questions and friction points this paper is trying to address.

Mitigating lookahead bias in large language models for social sciences
Ensuring chronological consistency in training data for accurate temporal predictions
Maintaining model performance despite strict temporal data constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

ChronoBERT and ChronoGPT models trained with time-restricted data
Outperform or match BERT despite strict temporal constraints
Mitigate lookahead bias in financial predictions
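The abstract notes that a prediction model fitted on top of the language model can compensate for weaker language comprehension. A minimal sketch of such a downstream step, assuming a scalar news score per article (e.g. a pooled embedding projected to one dimension) regressed on the next day's return; all names and numbers here are illustrative, not from the paper:

```python
# Hypothetical data: (news_score, next_day_return) pairs. In the paper the
# inputs are ChronoBERT representations of financial news; here we stand in
# made-up scalar scores and returns to show the downstream regression step.
pairs = [
    (0.8, 0.012), (-0.5, -0.007), (0.3, 0.004),
    (-0.9, -0.011), (0.6, 0.008), (-0.2, -0.001),
]

def ols_slope_intercept(data):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data)
    var = sum((x - mx) ** 2 for x, _ in data)
    b = cov / var
    a = my - b * mx
    return a, b

a, b = ols_slope_intercept(pairs)
print(f"intercept={a:.4f}, slope={b:.4f}")  # positive slope: good news -> higher return
```

Because this fitted layer adapts to whatever representation it receives, lookahead bias is model- and application-specific: even a chronologically restricted encoder can support competitive return forecasts.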