🤖 AI Summary
Existing Chrome Web Store review mechanisms fail to prevent malicious extensions from being published, posing significant privacy and security risks to users. Method: This paper proposes a supervised machine learning (ML) based detection approach and is the first work to systematically identify and quantify concept drift in browser extensions, showing that such drift is a primary cause of failure for commercial detection tools. We construct the largest publicly available labeled dataset to date (7,000+ malicious and 60,000+ benign extensions) and train and evaluate three classifier families. Results: Our models achieve 98% accuracy in controlled laboratory evaluations. In real-world deployment, they identified 68 malicious extensions that had evaded official Chrome Web Store review and flagged over 1,000 high-risk suspicious samples. Beyond validating supervised learning for extension malware detection, this work motivates a shift toward adaptive, evolution-aware security assessment of browser extensions.
📝 Abstract
Google Chrome is the most popular Web browser. Users can customize it with extensions that enhance their browsing experience. The best-known marketplace for such extensions is the Chrome Web Store (CWS). Developers can upload their extensions to the CWS, but extensions become available to users only after a vetting process carried out by Google itself. Unfortunately, some malicious extensions bypass these checks, putting the security and privacy of browser extension users at risk.
Here, we scrutinize the extent to which automated mechanisms reliant on supervised machine learning (ML) can detect malicious extensions on the CWS. To this end, we first collect 7,140 malicious extensions published in 2017--2023. We combine this dataset with 63,598 benign extensions published or updated on the CWS before 2023, and we develop three supervised-ML-based classifiers. We show that, in a "lab setting", our classifiers work well (e.g., 98% accuracy). Then, we collect a more recent set of 35,462 extensions from the CWS, published or last updated in 2023, with unknown ground truth. We eventually identified 68 malicious extensions that bypassed the CWS vetting process. However, our classifiers also flagged over 1,000 extensions as likely malicious. Based on this finding (further supported by empirical evidence), we elucidate, for the first time, a strong concept drift effect on browser extensions. We also show that commercial detectors (e.g., VirusTotal) perform poorly at detecting known malicious extensions. Altogether, our results highlight that detecting malicious browser extensions is a fundamentally hard problem, one that requires additional work both by the research community and by Google itself, potentially by revising their approaches. In the meantime, we have informed Google of our discoveries, and we release our artifacts.
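The core phenomenon the abstract describes (high in-distribution accuracy, degraded detection on newer samples) can be illustrated with a minimal, entirely synthetic sketch. This is not the paper's actual pipeline or feature set: the features (sensitive-permission count, obfuscation score), the distributions, and the nearest-centroid classifier are all invented for illustration of how concept drift erodes a supervised detector.

```python
# Hypothetical illustration of concept drift in extension malware detection.
# All features and distributions are synthetic; this is NOT the paper's method.
import random

random.seed(0)

def sample(n, perm_mu, obf_mu, label):
    # Each "extension" = (sensitive-permission count, obfuscation score, label)
    return [(random.gauss(perm_mu, 1.0), random.gauss(obf_mu, 1.0), label)
            for _ in range(n)]

benign   = sample(500, perm_mu=1.0, obf_mu=1.0, label=0)
mal_2017 = sample(500, perm_mu=6.0, obf_mu=6.0, label=1)  # old malware: loud, permission-hungry
mal_2023 = sample(500, perm_mu=2.0, obf_mu=1.5, label=1)  # drifted malware: mimics benign

def centroid(rows):
    return (sum(r[0] for r in rows) / len(rows),
            sum(r[1] for r in rows) / len(rows))

# "Train" a nearest-centroid classifier on the 2017-era data only.
c_ben, c_mal = centroid(benign), centroid(mal_2017)

def predict(x, y):
    d_ben = (x - c_ben[0]) ** 2 + (y - c_ben[1]) ** 2
    d_mal = (x - c_mal[0]) ** 2 + (y - c_mal[1]) ** 2
    return 1 if d_mal < d_ben else 0

def accuracy(rows):
    return sum(predict(x, y) == lbl for x, y, lbl in rows) / len(rows)

print(f"in-distribution accuracy:      {accuracy(benign + mal_2017):.2f}")
print(f"recall on drifted 2023 malware: {accuracy(mal_2023):.2f}")
```

Under these synthetic distributions, the classifier is nearly perfect on data resembling its training era yet misses most of the drifted samples, mirroring the gap the paper reports between "lab setting" accuracy and real-world detection.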