🤖 AI Summary
Current automated detection tools fall short of regulatory practice: they lack the transparency and interpretability that investigators require, and they cannot map findings to specific legal provisions, leaving a disconnect between academic research and enforcement applications. Through in-depth interviews with nine regulatory practitioners, combined with an analysis that integrates regulatory workflows with technical feasibility, this study systematically uncovers, from a regulatory perspective, the practical barriers to deploying automated tools for identifying deceptive designs. It proposes a human-in-the-loop compliance review framework that is driven by user needs, supports the entire investigative workflow, and aligns research and regulatory objectives, offering concrete guidance for building automated detection systems that meet real-world enforcement requirements.
📝 Abstract
Although deceptive design patterns are subject to growing regulatory oversight, enforcement struggles to keep up with the scale of the problem. One promising solution is automated detection tools, many of which are developed within academia. We interviewed nine experienced practitioners working within or alongside regulatory bodies to understand their work against deceptive design patterns, including the use of supporting tools and the prospect of automation. Computing technologies have a place in regulatory practice, but not as envisioned in research. For example, investigations require utmost transparency and accountability across all the activities we identify as accompanying dark pattern detection, which many existing tools cannot provide. Moreover, to be of use, tools need to map interface findings to legal violations. We therefore recommend conducting user requirement research to maximize research impact, supporting ancillary activities beyond detection, and establishing practical technology adoption pathways that account for the needs of both scientific and regulatory activities.