Unpacking Interpretability: Human-Centered Criteria for Optimal Combinatorial Solutions

📅 2026-03-09
🤖 AI Summary
This study addresses the challenge of human–algorithm collaboration when optimization algorithms yield multiple equally optimal solutions, a scenario complicated by the lack of a clear definition of what makes one solution more human-interpretable than another. In a behavioral experiment, participants chose the more understandable of two equally optimal bin-packing solutions while response times and webcam-based eye tracking were recorded. Preferences reliably tracked three quantifiable structural properties: alignment with a greedy heuristic, simplicity of within-bin composition, and orderliness of the visual representation. Ordered representation and heuristic alignment showed the strongest associations, with compositional simplicity also showing a consistent effect; response-time evidence was mixed, and aggregate gaze data showed no reliable effects of complexity. These results offer actionable, feature-based guidance for designing optimization systems that jointly achieve optimality and human interpretability in real-world applications.

📝 Abstract
Algorithmic support systems often return optimal solutions that are hard to understand. Effective human-algorithm collaboration, however, requires interpretability. When machine solutions are equally optimal, humans must select one, but a precise account of what makes one solution more interpretable than another remains missing. To identify structural properties of interpretable machine solutions, we present an experimental paradigm in which participants chose which of two equally optimal solutions for packing items into bins was easier to understand. We show that preferences reliably track three quantifiable properties of solution structure: alignment with a greedy heuristic, simple within-bin composition, and ordered visual representation. The strongest associations were observed for ordered representations and heuristic alignment, with compositional simplicity also showing a consistent association. Reaction-time evidence was mixed, with faster responses observed primarily when heuristic differences were larger, and aggregate webcam-based gaze did not show reliable effects of complexity. These results provide a concrete, feature-based account of interpretability in optimal packing solutions, linking solution structure to human preference. By identifying actionable properties (simple compositions, ordered representation, and heuristic alignment), our findings enable interpretability-aware optimization and presentation of machine solutions, and outline a path to quantify trade-offs between optimality and interpretability in real-world allocation and design tasks.
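The "alignment with a greedy heuristic" property from the abstract can be illustrated with first-fit decreasing, a classic greedy bin-packing heuristic. This is a sketch for intuition only, not the paper's exact procedure; the function name, bin capacity, and item sizes are illustrative assumptions.

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin packing: sort items in decreasing size, then place
    each item into the first bin with enough remaining capacity,
    opening a new bin when none fits."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no existing bin fits this item
            bins.append([item])
    return bins

# Hypothetical instance: six items, bins of capacity 10.
packing = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(packing)  # → [[8, 2], [4, 4, 1, 1]]
```

Solutions produced this way tend to have a predictable "large items first" structure, which is the kind of heuristic alignment the study measures against human preferences.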
Problem

Research questions and friction points this paper is trying to address.

interpretability
combinatorial optimization
human-algorithm collaboration
solution preference
packing problem
Innovation

Methods, ideas, or system contributions that make the work stand out.

interpretability
combinatorial optimization
human-centered AI
heuristic alignment
solution structure
👥 Authors
Dominik Pegler
Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna
Frank Jäkel
TU Darmstadt
David Steyrl
Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna
Frank Scharnowski
University of Vienna
Filip Melinscak
Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna