LLMs and Finetuning: Benchmarking cross-domain performance for hate speech detection

📅 2023-10-29
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses the challenge of generalizing hate speech detection across platforms in online communication. We systematically evaluate the effectiveness and adaptability of open-source pre-trained language models, including BERT, RoBERTa, and DeBERTa, for cross-domain hate speech detection. Through supervised fine-tuning experiments and ordinary least squares regression, we quantitatively analyze the impact of fine-tuning strategies, training data scale, and annotation granularity. Key findings are: (1) the advantage of fine-grained labeling diminishes with increasing data volume; (2) LLMs without domain-specific pre-training still substantially outperform existing state-of-the-art methods; and (3) data scale proves more decisive for performance than annotation granularity. We establish a reproducible cross-domain benchmark and propose empirically grounded best practices, offering both theoretical insights and practical engineering guidance for LLM-based hate speech detection.
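The OLS analysis described above can be sketched as follows. The data here is synthetic and the regressor names (`log_size`, `fine_grained`, the interaction term) are illustrative assumptions, not the paper's actual design matrix; the sketch only shows how a negative interaction coefficient would capture finding (1), that the fine-grained-label advantage shrinks as training data grows.

```python
import numpy as np

# Synthetic benchmark results: each row stands in for one fine-tuning run.
rng = np.random.default_rng(0)
log_size = rng.uniform(3, 5, size=200)        # log10 of training-set size (1k..100k)
fine_grained = rng.integers(0, 2, size=200)   # 1 = fine-grained labels, 0 = binary

# Simulated F1: granularity helps at small scale, effect washes out with size.
f1 = (0.6 + 0.05 * (log_size - 3)
      + fine_grained * (0.08 - 0.02 * (log_size - 3))
      + rng.normal(0, 0.01, size=200))

# OLS design matrix: intercept, log size, granularity, and their interaction.
X = np.column_stack([np.ones_like(log_size), log_size, fine_grained,
                     fine_grained * log_size])
coef, *_ = np.linalg.lstsq(X, f1, rcond=None)
print(dict(zip(["intercept", "log_size", "granularity", "interaction"], coef)))
```

A negative `interaction` coefficient alongside a positive `granularity` coefficient is the regression signature of "fine-grained labels help, but less so at larger data scales".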
📝 Abstract
In the evolving landscape of online communication, hate speech detection remains a formidable challenge, further compounded by the diversity of digital platforms. This study investigates the effectiveness and adaptability of pre-trained and fine-tuned Large Language Models (LLMs) in identifying hate speech, addressing three central questions: (1) To what extent does model performance depend on fine-tuning and training parameters? (2) To what extent do models generalize to cross-domain hate speech detection? (3) What specific features of the datasets or models influence generalization potential? The experiments show that LLMs offer a substantial advantage over the state of the art even without pretraining. Ordinary least squares analyses suggest that the advantage of training with fine-grained hate speech labels washes out as dataset size increases. While our research demonstrates the potential of LLMs for hate speech detection, several limitations remain, particularly regarding the validity and reproducibility of the results. We conclude with an exhaustive discussion of the challenges we faced in our experimentation and offer recommended best practices for future scholars designing benchmarking experiments of this kind.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' effectiveness in cross-domain hate speech detection
Assessing impact of fine-tuning parameters on model performance
Identifying dataset features affecting hate speech detection generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes pre-trained and fine-tuned LLMs
Benchmarks cross-domain hate speech detection
Analyzes fine-grained label impact
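The cross-domain benchmarking contribution above amounts to a train-on-source, test-on-target evaluation grid. A minimal sketch follows; `cross_domain_grid`, `train_fn`, and `eval_fn` are hypothetical names, and macro-F1 is one plausible scoring choice rather than necessarily the paper's exact metric.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def cross_domain_grid(datasets, train_fn, eval_fn):
    """Fine-tune on each source dataset, score on every other (target) dataset.

    datasets: {name: data}; train_fn(data) -> model; eval_fn(model, data) -> score.
    Returns {(source, target): score} for all source != target pairs.
    """
    results = {}
    for source, train_data in datasets.items():
        model = train_fn(train_data)  # one fine-tuning run per source domain
        for target, test_data in datasets.items():
            if target != source:
                results[(source, target)] = eval_fn(model, test_data)
    return results
```

The diagonal (in-domain) cells are deliberately skipped; comparing off-diagonal scores against in-domain baselines is what quantifies the generalization gap the paper studies.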