🤖 AI Summary
This study addresses AI alignment in early developmental contexts by moving beyond individual optimisation toward collaborative governance among multiple stakeholders, including families and speech-language pathologists. To this end, it proposes a "layered community alignment" framework that integrates expert-aligned knowledge structures, professional guardrails, and adaptive family-level feedback, reconceptualizing AI alignment as a sociotechnical co-governance process. Built on multimodal large language models (MLLMs), the research conducts a three-part study with five families and three speech-language pathologists, establishing a closed-loop pipeline that carries MLLM outputs from expert-facing analysis to parent-facing feedback. The findings demonstrate the feasibility and necessity of the proposed framework, offering a novel paradigm for community-governed AI alignment.
📝 Abstract
In early developmental contexts, particularly in parent-child interaction analysis, alignment involves families and professionals such as speech-language pathologists (SLPs), each of whom interprets children's everyday interactions from a different role. When multimodal large language models (MLLMs) are introduced to support this process, alignment becomes a question of how authority, responsibility, and emotional risk are distributed across stakeholders. Through a three-part study with five families and three SLPs, we trace how MLLM-generated outputs move from expert-facing analysis to parent-facing feedback. We propose layered community alignment: grounding representations in expert-aligned structures, mediating translation through professional guardrails, and enabling family-level adaptation within those boundaries. We argue that alignment in developmental settings should be treated as a community-governed process rather than an individual optimisation problem.