Beyond Quantification: Navigating Uncertainty in Professional AI Systems

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional probabilistic uncertainty quantification in AI systems (e.g., confidence scores) fails to capture the inherently non-quantifiable, judgmental uncertainty that arises in high-stakes professional domains, such as domestic violence risk assessment, cultural sensitivity evaluation, or recognition of conceptual understanding, where uncertainty is epistemically and contextually grounded rather than statistically estimable. Method: the paper proposes a non-quantitative, meaning-oriented paradigm that reframes uncertainty articulation as a collaborative meaning-making process within professional communities. Drawing on human-computer interaction, practice theory, and participatory design, it develops a participatory refinement mechanism in which domain experts iteratively shape and refine how different forms of uncertainty are expressed. Contribution: a theoretical framework for non-quantitative uncertainty tailored to professional practice, aimed at improving the quality of expert–AI collaborative decisions.

📝 Abstract
The growing integration of large language models across professional domains transforms how experts make critical decisions in healthcare, education, and law. While significant research effort focuses on getting these systems to communicate their outputs with probabilistic measures of reliability, many consequential forms of uncertainty in professional contexts resist such quantification. A physician pondering the appropriateness of documenting possible domestic abuse, a teacher assessing cultural sensitivity, or a mathematician distinguishing procedural from conceptual understanding face forms of uncertainty that cannot be reduced to percentages. This paper argues for moving beyond simple quantification toward richer expressions of uncertainty essential for beneficial AI integration. We propose participatory refinement processes through which professional communities collectively shape how different forms of uncertainty are communicated. Our approach acknowledges that uncertainty expression is a form of professional sense-making that requires collective development rather than algorithmic optimization.
Problem

Research questions and friction points this paper is trying to address.

Addressing non-quantifiable uncertainties in professional AI systems
Developing richer uncertainty expressions beyond probabilistic measures
Creating participatory processes for professional uncertainty communication
Innovation

Methods, ideas, or system contributions that make the work stand out.

Participatory refinement processes for uncertainty communication
Collective professional development over algorithmic optimization
Richer expressions beyond probabilistic quantification methods
Sylvie Delacroix
Dickson Poon School of Law, King’s College London, The Strand, WC2R 2LS, London, UK; Centre for Language AI Research, Tohoku University, 6-3-09 Aramaki-Aza-Aoba, 980-8579, Sendai, Japan
Diana Robinson
Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, CB3 0FD, Cambridge, UK
Umang Bhatt
University of Cambridge
Machine Learning, Artificial Intelligence, Human-AI Collaboration
Jacopo Domenicucci
Dickson Poon School of Law, King’s College London, The Strand, WC2R 2LS, London, UK; Centre for Language AI Research, Tohoku University, 6-3-09 Aramaki-Aza-Aoba, 980-8579, Sendai, Japan
Jessica Montgomery
Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, CB3 0FD, Cambridge, UK
Gael Varoquaux
Laboratoire de Neurosciences Cognitives, École Normale Supérieure, 29 rue d’Ulm, 75005, Paris, France
Carl Henrik Ek
University of Cambridge
Machine Learning
Vincent Fortuin
Principal Investigator, Helmholtz AI & TU Munich
Bayesian deep learning, Deep generative AI, PAC-Bayes
Yulan He
Professor, King's College London; Turing AI Fellow
Natural Language Processing, Large Language Models, AI for education and health
Tom Diethe
AstraZeneca; University of Bristol
Machine Learning, Computational Biology, Drug Development, Privacy Enhancing Technologies
Neill Campbell
Department of Computer Science, University of Bath, Claverton Down, BA2 7PB, UK
Mennatallah El-Assady
ETH Zürich
Visualization, Intelligence Augmentation, XAI, Interactive Machine Learning, Natural Language
Soren Hauberg
School of Computer Science and Statistics, Trinity College Dublin, College Green, Dublin 2, Ireland
Ivana Dusparic
Professor in Computer Science, Trinity College Dublin
reinforcement learning, self-adaptive systems, multi-agent systems, intelligent mobility
Neil Lawrence
Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, CB3 0FD, Cambridge, UK