🤖 AI Summary
This study addresses a critical gap in human-computer interaction (HCI) research: the lack of systematic understanding regarding how “frameworks” are practically used and conceptually constructed. Through a systematic review of 615 CHI papers from 2015 to 2024 that center on frameworks, this work proposes six distinct types of framework engagement and develops a functional taxonomy analyzing framework practices along four dimensions—role, structure, validation, and reuse. The analysis reveals that proposals of novel frameworks significantly outnumber iterative refinements of existing ones, and that many frameworks suffer from ambiguous functional scope and insufficient validation. These findings highlight a non-cumulative tendency in HCI’s approach to framework development and call for more rigorous, reflective, and sustainable paradigms in both the construction and application of frameworks within the field.
📝 Abstract
In HCI, frameworks function as a type of theoretical contribution, often supporting ideation, design, and evaluation. Yet, little is known about how they are actually used, what functions they serve, and which scholarly practices shape them. To address this gap, we conducted a systematic review of 615 papers from a decade of CHI proceedings (2015-2024) that prominently featured the term "framework". We classified these papers into six engagement types. We then examined the role, form, and essential components of newly proposed frameworks through a functional typology, analyzing how they are constructed, validated, and articulated for reuse. Our results show that enthusiasm for proposing new frameworks exceeds the willingness to iterate on existing ones. They also highlight ambiguity in the function of frameworks and a scarcity of systematic validation. Based on these insights, we call for more rigorous, reflective, and cumulative practices in the development and use of frameworks in HCI.