🤖 AI Summary
Existing XAI research, particularly in training data attribution (TDA), has been criticised for prioritising mathematical formalism over real-world user needs, often lapsing into “solutionism” by adapting methods from other subfields without checking that they serve users. Method: Inspired by design thinking, this work takes a top-down, user-focused approach to TDA, conducting a needfinding study with a diverse group of AI practitioners: 10 in-depth interviews followed by a systematic survey of 31 respondents. Contribution/Results: The study surfaces user needs and several TDA tasks that prior literature has largely overlooked, and it invites the TDA and XAI communities to take up these tasks, shifting research from algorithm-centric to user-centric design and narrowing the gap between attribution methods and actionable, human-grounded interpretability.
📝 Abstract
While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too much on formalism and solutionism, focusing more on mathematical soundness than user needs. Inspired by design thinking, we propose an alternative to this bottom-up approach: the XAI research community should adopt a top-down, user-focused perspective to ensure user relevance. We illustrate this with a relatively young subfield of XAI, Training Data Attribution (TDA). With the surge in TDA research and growing competition, the field risks repeating the same patterns of solutionism. We conducted a needfinding study with a diverse group of AI practitioners to identify potential user needs related to TDA. Through interviews (N=10) and a systematic survey (N=31), we uncovered new TDA tasks that are currently largely overlooked. We invite the TDA and XAI communities to consider these novel tasks and improve the user relevance of their research outcomes.