🤖 AI Summary
Addressing the challenge of evaluating task competency for autonomous systems under uncertainty, this paper proposes the Factorized Machine Self-Confidence (FaMSeC) framework, which formalizes self-confidence as a computable assessment spanning five factors: outcome assessment, solver quality, model quality, alignment quality, and past experience. Methodologically, FaMSeC combines meta-utility functions, behavior simulations, and surrogate prediction models to infer the competency factors, drawing on problem-solving statistics embedded in Markov decision process (MDP) solvers and on probabilistic exceedance margins evaluated against evaluator-specified competency standards. Numerical evaluations demonstrate that the outcome assessment and solver quality indicators behave as desired across a range of tasking contexts. FaMSeC thus provides a transparent, embeddable mechanism for real-time competency self-assessment, advancing trustworthy autonomous decision-making.
📝 Abstract
How can intelligent machines assess their competency to complete a task? This question has come into focus for autonomous systems that algorithmically make decisions under uncertainty. We argue that machine self-confidence -- a form of meta-reasoning based on self-assessments of system knowledge about the state of the world, the system itself, and its ability to reason about and execute tasks -- leads to many computable and useful competency indicators for such agents. This paper presents our body of work, so far, on this concept in the form of the Factorized Machine Self-confidence (FaMSeC) framework, which holistically considers several major factors driving competency in algorithmic decision-making: outcome assessment, solver quality, model quality, alignment quality, and past experience. In FaMSeC, self-confidence indicators are derived via 'problem-solving statistics' embedded in Markov decision process solvers and related approaches. These statistics come from evaluating probabilistic exceedance margins in relation to certain outcomes and associated competency standards specified by an evaluator. Once designed and evaluated, the statistics can be easily incorporated into autonomous agents and serve as indicators of competency. We include detailed descriptions and examples for Markov decision process agents, and show how outcome assessment and solver quality factors can be found for a range of tasking contexts through novel use of meta-utility functions, behavior simulations, and surrogate prediction models. Numerical evaluations are performed to demonstrate that FaMSeC indicators perform as desired (references to human subject studies beyond the scope of this paper are provided).
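To make the abstract's core mechanism concrete, here is a minimal sketch of how a probabilistic exceedance margin might be computed and turned into a confidence indicator. This is an illustrative toy, not the paper's implementation: the rollout model, the standard `STANDARD`, and the mapping to a signed indicator are all assumptions for the example.

```python
import random

random.seed(0)

def rollout_reward():
    # Toy stand-in for one simulated policy rollout in an MDP:
    # cumulative reward from a noisy 10-step process (hypothetical model).
    return sum(random.gauss(1.0, 0.5) for _ in range(10))

STANDARD = 8.0  # evaluator-specified minimum acceptable cumulative reward
N = 5000
rewards = [rollout_reward() for _ in range(N)]

# Probabilistic exceedance margin: empirical P(reward >= standard).
exceedance = sum(r >= STANDARD for r in rewards) / N

# Map to a signed confidence indicator in [-1, 1]:
# +1 -> certain to meet the standard, -1 -> certain to fall short.
indicator = 2.0 * exceedance - 1.0
print(f"P(exceed) = {exceedance:.3f}, indicator = {indicator:+.3f}")
```

In the FaMSeC setting, statistics like `exceedance` would come from the solver's own internal quantities rather than an external toy simulator, but the design pattern is the same: compare a predicted outcome distribution against an evaluator's competency standard and report the margin as a self-confidence indicator.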