🤖 AI Summary
This paper addresses the long-standing lack of a unified theoretical foundation for Fisher's fiducial inference by establishing a rigorous, general, and well-motivated mathematical definition. Building on Doob's martingale representation of Bayesian posterior distributions, it characterizes the fiducial distribution as an inverse-probability mapping from observed data to the true parameter value and introduces, for the first time, a formal definition centered on martingale structure, thereby systematizing and extending Hannig's fiducial framework. By integrating martingale theory, Bayesian posterior characterization, and inverse-probability modeling, the proposed definition preserves the conceptual core of classical fiducial reasoning while unifying it with modern probability theory. It fills a fundamental gap in the measure-theoretic foundations of fiducial inference and provides a statistical paradigm that balances interpretability with mathematical rigor.
📝 Abstract
Since the idea of fiducial inference was put forward by Fisher, researchers have been attempting to place it within a rigorous and well-motivated framework. It is fair to say that a general definition has remained elusive. In this paper we start with a representation of Bayesian posterior distributions provided by Doob that relies on martingales. This representation is explicit about how a true parameter value should depend on a random sample, and hence offers an approach to "inverse probability" (Fisher, 1930). Taking this as our cue, we introduce a definition of fiducial inference that extends existing ones due to Hannig.
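For readers unfamiliar with the Doob representation the abstract invokes, a standard statement (in our own notation, not the paper's) is the following: conditionally on a parameter $\theta \sim \pi$, let $X_1, X_2, \dots$ be i.i.d. observations and $\mathcal{F}_n = \sigma(X_1,\dots,X_n)$ the data filtration. Then the sequence of posterior means

```latex
M_n \;=\; \mathbb{E}\!\left[\theta \mid \mathcal{F}_n\right]
```

is a martingale, since by the tower property

```latex
\mathbb{E}\!\left[M_{n+1} \mid \mathcal{F}_n\right]
  \;=\; \mathbb{E}\!\left[\,\mathbb{E}\!\left[\theta \mid \mathcal{F}_{n+1}\right] \mid \mathcal{F}_n\right]
  \;=\; \mathbb{E}\!\left[\theta \mid \mathcal{F}_n\right]
  \;=\; M_n ,
```

and by the martingale convergence theorem $M_n \to \mathbb{E}[\theta \mid \mathcal{F}_\infty]$ almost surely; under identifiability (Doob's consistency theorem), $\theta$ is $\mathcal{F}_\infty$-measurable for $\pi$-almost every $\theta$, so the limit is $\theta$ itself. This is the sense in which the posterior sequence "recovers" the true parameter from the sample, which the paper takes as the starting point for its fiducial definition.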