LLM-Inspired Pretrain-Then-Finetune for Small-Data, Large-Scale Optimization

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of sparse and noisy observational data in few-shot, large-scale decision-making problems by introducing the pretraining–fine-tuning paradigm to this setting for the first time. The authors propose a problem-specific Transformer architecture that leverages domain knowledge to generate synthetic data for pretraining, followed by fine-tuning on a small amount of real-world data. Theoretically, they establish the first non-asymptotic generalization error bound, elucidating the synergistic mechanism between pretraining and fine-tuning and revealing a scaling law for fine-tuning. Empirically, high-capacity models effectively learn structural priors from synthetic data and adapt efficiently to real environments, with decision performance improving significantly as the instance scale grows.

📝 Abstract
We consider small-data, large-scale decision problems in which a firm must make many operational decisions simultaneously (e.g., across a large product portfolio) while observing only a few, potentially noisy, data points per instance. Inspired by the success of large language models (LLMs), we propose a pretrain-then-finetune approach built on a designed Transformer model to address this challenge. The model is first pretrained on large-scale, domain-informed synthetic data that encode managerial knowledge and structural features of the decision environment, and is then fine-tuned on real observations. This new pipeline offers two complementary advantages: pretraining injects domain knowledge into the learning process and enables the training of high-capacity models on abundant synthetic data, while fine-tuning adapts the pretrained model to the operational environment and improves alignment with the true data-generating regime. While we leverage the Transformer's state-of-the-art representational capacity, particularly its attention mechanism, to extract cross-task structure efficiently, our approach is not an off-the-shelf application. Instead, it relies on problem-specific architectural design and a tailored training procedure to match the decision setting. Theoretically, we develop the first comprehensive error analysis of Transformer learning in relevant contexts, establishing non-asymptotic guarantees that validate the method's effectiveness. Critically, our analysis reveals how pretraining and fine-tuning jointly determine performance, with the dominant contribution governed by whichever is more favorable. In particular, fine-tuning exhibits an economies-of-scale effect, whereby transfer learning becomes increasingly effective as the number of instances grows.
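The paper does not include code; as a conceptual illustration only, the pretrain-then-finetune pipeline from the abstract can be sketched with a simple linear model standing in for the authors' Transformer. All names, values, and the toy data-generating process below are hypothetical: the model is pretrained on abundant synthetic data encoding a domain prior, then fine-tuned on a handful of noisy real observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(x, y, w0=0.0, lr=0.01, steps=2000):
    """Gradient descent on squared error for the model y ≈ w * x."""
    w = w0
    for _ in range(steps):
        grad = np.mean((w * x - y) * x)  # d/dw of 0.5 * mean((w*x - y)^2)
        w -= lr * grad
    return w

# Pretraining stage: abundant synthetic data encoding the (imperfect)
# domain prior, here a hypothetical assumed slope of 1.5.
x_syn = rng.normal(size=10_000)
y_syn = 1.5 * x_syn
w_pre = fit_slope(x_syn, y_syn)

# Fine-tuning stage: only a few noisy observations from the real
# environment, whose true slope is 2.0.
x_real = rng.normal(size=8)
y_real = 2.0 * x_real + 0.1 * rng.normal(size=8)
w_ft = fit_slope(x_real, y_real, w0=w_pre, lr=0.05, steps=200)
```

Starting fine-tuning from the pretrained weight `w_pre` (rather than from scratch) mirrors the pipeline's two advantages: the synthetic stage injects the structural prior, and the small real sample only needs to correct the residual mismatch with the true data-generating regime.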
Problem

Research questions and friction points this paper is trying to address.

small-data
large-scale optimization
operational decision-making
noisy observations
cross-instance learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

pretrain-then-finetune
Transformer
small-data large-scale optimization
synthetic data pretraining
transfer learning
Zishi Zhang
Peking University
Simulation Optimization, AI
Jinhui Han
Guanghua School of Management, Peking University, Beijing 100871, China
Ming Hu
Rotman School of Management, University of Toronto, Toronto, Ontario, Canada M5S 3E6
Yijie Peng
Peking University
Simulation, Bayesian Learning, Artificial Intelligence, Healthcare, Financial Engineering