📝 Abstract
Opinion dynamics models such as bounded confidence models (BCMs) describe how a population can reach consensus, fragmentation, or polarization, depending on a few parameters. Connecting such models to real-world data could help in understanding these phenomena and in testing model assumptions. To this end, estimating model parameters is a key step, and maximum likelihood estimation provides a principled way to tackle it. Here, our goal is to characterize the statistical estimators of the two key BCM parameters: the confidence bound and the convergence rate. We find that their maximum likelihood estimators have different properties: the estimator of the confidence bound exhibits a small-sample bias but is consistent, while the estimator of the convergence rate shows a persistent bias. Moreover, joint parameter estimation suffers from identifiability issues in specific regions of the parameter space, where the likelihood function has several local maxima. Our results show that analyzing the likelihood function is a fruitful approach to better understanding the pitfalls and possibilities of estimating the parameters of opinion dynamics models and, more generally, agent-based models, and to offering formal guarantees for their calibration.
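As an illustration of the setting, the following is a minimal sketch of one common BCM variant (the pairwise Deffuant-Weisbuch dynamics) together with a naive maximum likelihood estimator of the confidence bound. The specific update rule, parameter values, and the noise-free-observation assumption are illustrative choices, not necessarily those used in the paper; under these assumptions the MLE of the bound is the largest observed pair distance that still produced an interaction, which is biased low in finite samples but consistent, mirroring the behavior described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_deffuant(n_agents=50, n_steps=5000, eps=0.3, mu=0.4):
    """Pairwise Deffuant-Weisbuch bounded-confidence dynamics.

    At each step a random pair (i, j) is drawn; they interact iff their
    opinion distance is below the confidence bound eps, in which case
    both move toward each other by a fraction mu (the convergence rate).
    Returns the observed pair distances and interaction indicators.
    """
    x = rng.uniform(0.0, 1.0, n_agents)
    distances, interacted = [], []
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        d = abs(x[i] - x[j])
        hit = d < eps
        distances.append(d)
        interacted.append(hit)
        if hit:
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return np.array(distances), np.array(interacted)

def mle_confidence_bound(distances, interacted):
    """Naive MLE of eps from noise-free interaction observations: any
    eps below the largest interacting distance assigns zero likelihood
    to the data, so the MLE is that largest distance. It sits strictly
    below the true eps in finite samples (small-sample bias) but
    converges to it as the number of observations grows (consistency).
    """
    return distances[interacted].max()

d, hit = simulate_deffuant()
eps_hat = mle_confidence_bound(d, hit)
print(eps_hat)  # typically just below the true eps = 0.3
```

The same likelihood-based viewpoint extends to jointly estimating the convergence rate, which is where the multiple local maxima and the identifiability issues discussed in the paper arise.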