🤖 AI Summary
This work addresses the degradation in inference performance caused by distributional shift in posterior parameter estimation for agent-based models (ABMs). To mitigate this issue, we introduce test-time training (TTT) into the Bayesian inference framework for ABMs, integrating it with normalizing flows to construct an inference model that can adapt to distributional changes. We propose several practical TTT strategies tailored to normalizing flows, enabling dynamic adjustment of the inference model at test time. Experimental results demonstrate that the proposed approach significantly improves both the accuracy and the adaptability of ABM parameter inference under distributional shift.
📝 Abstract
Agent-Based Models (ABMs) are gaining popularity in economics and social science because of their flexibility in describing realistic, heterogeneous decision and interaction rules among individual agents. In this work, we investigate, for the first time, the practicality of test-time training (TTT) of deep models such as normalizing flows for posterior estimation of ABM parameters. We propose several practical TTT strategies for fine-tuning normalizing flows under distribution shift. Our numerical study demonstrates that TTT schemes are remarkably effective, enabling real-time adjustment of flow-based inference for ABM parameters.
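The core idea, unsupervised fine-tuning of a flow's parameters on shifted data at test time, can be illustrated with a toy one-dimensional affine flow. This is a minimal sketch under strong simplifying assumptions (a single affine layer with closed-form gradients, Gaussian toy data); the class and training setup are illustrative, not the paper's implementation:

```python
import math
import random

class AffineFlow1D:
    """Toy 1-D normalizing flow: x = scale * z + shift with z ~ N(0, 1).
    A hypothetical stand-in for the much more expressive flows in the paper."""
    def __init__(self):
        self.shift = 0.0
        self.log_scale = 0.0

    def log_prob(self, x):
        # Change of variables: log N(z; 0, 1) - log scale, z = (x - shift) / scale
        z = (x - self.shift) * math.exp(-self.log_scale)
        return -0.5 * (z * z + math.log(2 * math.pi)) - self.log_scale

    def fit(self, data, steps=200, lr=0.05):
        """Gradient ascent on mean log-likelihood; for an affine flow the
        gradients are available in closed form (no autograd needed)."""
        n = len(data)
        for _ in range(steps):
            inv_s = math.exp(-self.log_scale)
            zs = [(x - self.shift) * inv_s for x in data]
            g_shift = sum(z * inv_s for z in zs) / n      # d log p / d shift
            g_logs = sum(z * z - 1.0 for z in zs) / n     # d log p / d log_scale
            self.shift += lr * g_shift
            self.log_scale += lr * g_logs

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(500)]  # "training" distribution
test = [random.gauss(3.0, 2.0) for _ in range(500)]   # shifted "test" distribution

flow = AffineFlow1D()
flow.fit(train)
before = sum(flow.log_prob(x) for x in test) / len(test)

# Test-time training: a few unsupervised gradient steps on the test batch itself
flow.fit(test, steps=50)
after = sum(flow.log_prob(x) for x in test) / len(test)
print(after > before)
```

Because TTT here only maximizes the flow's own likelihood on incoming data, it needs no labels or ground-truth parameters, which is what makes it applicable at deployment time when the simulated training distribution no longer matches observations.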