🤖 AI Summary
This paper identifies concrete compliance barriers that anti-fraud laws pose to computer science research, including penetration testing, web scraping, AI system auditing, social engineering, and research involving legal identification. Through legal text analysis, interdisciplinary case studies, and methodological mapping, it constructs, for the first time, a comprehensive taxonomy of anti-fraud legal risks spanning more than ten high-risk research scenarios, explicitly distinguishing their regulatory logic from that of statutes such as the CFAA or DMCA. The study's core contributions are threefold: (1) it fills a critical gap in the interdisciplinary literature at the intersection of law and computing research; (2) it introduces the first practical compliance framework tailored to AI security auditing and identity-related research; and (3) it derives actionable, policy-relevant recommendations, offering researchers a theoretically rigorous yet operationally feasible pathway for navigating legal risk.
📝 Abstract
Computer science research sometimes brushes up against the law, from red-team exercises that probe the boundaries of authentication mechanisms, to AI research processing copyrighted material, to platform research measuring the behavior of algorithms and users. U.S.-based computer security research is no stranger to the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA), in a relationship that is still evolving through case law, research practices, changing policies, and legislation. Amid the landscape that computer scientists, lawyers, and policymakers have learned to navigate, anti-fraud laws remain a surprisingly under-examined challenge for computer science research. Fraud raises separate issues that are not addressed by the methods for navigating the CFAA, the DMCA, and Terms of Service that are more familiar in the computer security literature. Although anti-fraud laws have been discussed to a limited extent in older research on phishing attacks, modern computer science researchers are left with little guidance when it comes to navigating issues of deception outside the context of pure laboratory research. In this paper, we analyze and taxonomize the anti-fraud and deception issues that arise in several areas of computer science research. We find that, despite the lack of attention to these issues in the legal and computer science literature, issues of misrepresented identity or false information that could implicate anti-fraud laws are in fact relevant to many methodologies used in computer science research, including penetration testing, web scraping, user studies, sock puppets, social engineering, auditing AI or socio-technical systems, and attacks on artificial intelligence. We especially highlight the importance of anti-fraud laws in two research areas of great policy importance: attacking or auditing AI systems, and research involving legal identification.