Supporting Co-Adaptive Machine Teaching through Human Concept Learning and Cognitive Theories

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
To address human–machine concept misalignment under subjectively ambiguous scenarios, this paper proposes a bidirectional co-adaptation mechanism: users dynamically refine concept definitions during annotation, while the model concurrently optimizes its decision boundary. Methodologically, we introduce the first dual-cognitive-driven framework integrating variation theory and structural alignment theory, enabling a neuro-symbolic counterfactual generation pipeline for interpretable, batch-wise alignment annotation; we further design an interactive visualization interface to enhance cognitive feedback. An 18-participant lab study demonstrates that our approach significantly improves annotation consistency (+37%), deepens users’ conceptual reflection, and accelerates model convergence by 2.1×. The core contribution lies in deeply embedding cognitive science principles into interactive machine learning architectures, thereby achieving bidirectional human–machine concept alignment and co-evolution.

Technology Category

Application Category

📝 Abstract
An important challenge in interactive machine learning, particularly in subjective or ambiguous domains, is fostering bi-directional alignment between humans and models. Users teach models their concept definition through data labeling, while refining their own understandings throughout the process. To facilitate this, we introduce MOCHA, an interactive machine learning tool informed by two theories of human concept learning and cognition. First, it utilizes a neuro-symbolic pipeline to support Variation Theory-based counterfactual data generation. By asking users to annotate counterexamples that are syntactically and semantically similar to already-annotated data but predicted to have different labels, the system can learn more effectively while helping users understand the model and reflect on their own label definitions. Second, MOCHA uses Structural Alignment Theory to present groups of counterexamples, helping users comprehend alignable differences between data items and annotate them in batch. We validated MOCHA's effectiveness and usability through a lab study with 18 participants.
Problem

Research questions and friction points this paper is trying to address.

Fostering bi-directional alignment between humans and models in interactive machine learning
Supporting human concept learning through counterfactual data generation
Enhancing user comprehension and batch annotation via Structural Alignment Theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic pipeline for counterfactual data generation
Variation Theory-based counterexample annotation
Structural Alignment Theory for batch annotation
🔎 Similar Papers
No similar papers found.