Weak Links in LinkedIn: Enhancing Fake Profile Detection in the Age of LLMs

๐Ÿ“… 2025-07-21
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large language models (LLMs) have drastically lowered the barrier to generating realistic fake LinkedIn profiles, undermining the robustness of existing text-based detectors. This paper presents the first systematic evaluation of mainstream detectors against LLM-generated profiles, revealing a critical failure mode: false accept rates (FAR) of 42–52%, compared with 6–7% on manually created fakes. As a countermeasure, the authors propose a GPT-assisted adversarial training framework that uses GPT to synthesize high-quality adversarial examples and jointly models numerical profile features with textual embeddings (e.g., BERT and Sentence-BERT). This approach restores FAR to 1–7% while keeping false reject rates low (0.5–2%), significantly improving generalization and adversarial robustness. Ablation studies confirm that the combined numerical-plus-textual feature fusion is the most robust configuration, establishing a scalable, interpretable paradigm for fake identity detection in the LLM era.

๐Ÿ“ Abstract
Large Language Models (LLMs) have made it easier to create realistic fake profiles on platforms like LinkedIn. This poses a significant risk for text-based fake profile detectors. In this study, we evaluate the robustness of existing detectors against LLM-generated profiles. While highly effective in detecting manually created fake profiles (False Accept Rate: 6-7%), the existing detectors fail to identify GPT-generated profiles (False Accept Rate: 42-52%). We propose GPT-assisted adversarial training as a countermeasure, restoring the False Accept Rate to between 1-7% without impacting the False Reject Rates (0.5-2%). Ablation studies revealed that detectors trained on combined numerical and textual embeddings exhibit the highest robustness, followed by those using numerical-only embeddings, and lastly those using textual-only embeddings. Complementary analysis on the ability of prompt-based GPT-4Turbo and human evaluators affirms the need for robust automated detectors such as the one proposed in this study.
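To make the two metrics in the abstract concrete: the False Accept Rate (FAR) is the fraction of fake profiles the detector accepts as genuine, and the False Reject Rate (FRR) is the fraction of genuine profiles it rejects. A minimal sketch of how they are computed; the label convention (1 = fake, 0 = genuine) is an illustrative assumption, not something specified by the paper:

```python
def far_frr(y_true, y_pred):
    """Compute (FAR, FRR) for a binary fake-profile detector.

    Convention assumed here: label 1 = fake profile, label 0 = genuine.
    FAR = fakes accepted as genuine / total fakes
    FRR = genuine profiles rejected as fake / total genuine
    """
    fake = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    genuine = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    far = sum(1 for _, p in fake if p == 0) / len(fake)
    frr = sum(1 for _, p in genuine if p == 1) / len(genuine)
    return far, frr

# toy example: 4 fakes (one accepted), 2 genuine (one rejected)
far, frr = far_frr([1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 0])
```

Under this convention, a detector with FAR 42–52% is letting roughly half of the LLM-generated fakes through, which is what motivates the adversarial training countermeasure.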
Problem

Research questions and friction points this paper is trying to address.

Detecting LLM-generated fake LinkedIn profiles effectively
Improving robustness of text-based fake profile detectors
Reducing false accept rates for GPT-generated profiles
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-assisted adversarial training for detection
Combined numerical and textual embeddings
Robust automated fake profile detectors
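The "combined numerical and textual embeddings" contribution above can be pictured as simple feature fusion: per-profile numerical signals concatenated with a dense text embedding before classification. The sketch below is an illustrative assumption, not the paper's implementation; the min-max scaling and the toy 4-dimensional "embedding" stand in for whatever normalization and Sentence-BERT vectors the detector actually uses:

```python
def fuse_features(numeric_features, text_embedding):
    """Concatenate scaled numerical profile features (e.g., connection
    count, section lengths) with a dense text embedding into one vector
    suitable for a downstream classifier."""
    # min-max scale the numeric block so it is comparable in magnitude
    # to the embedding dimensions (placeholder normalization)
    lo, hi = min(numeric_features), max(numeric_features)
    span = (hi - lo) or 1.0
    scaled = [(x - lo) / span for x in numeric_features]
    return scaled + list(text_embedding)

# toy example: 3 numerical features fused with a 4-dim "embedding"
fused = fuse_features([10.0, 500.0, 3.0], [0.1, -0.2, 0.05, 0.3])
```

The ablation result reported in the abstract is consistent with this design: the fused vector preserves both the stylistic signal in the text embedding and the behavioral signal in the numerical features, so the classifier degrades less when one modality is adversarially manipulated.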
๐Ÿ”Ž Similar Papers
No similar papers found.
Apoorva Gulati (BITS Pilani, India)
Rajesh Kumar (Bucknell University, USA)
Vinti Agarwal (Birla Institute of Science and Technology, Pilani, India) — Machine Learning, Semi-supervised Learning, Graph Deep Learning, Social Recommender Systems, Data
Aditya Sharma (BITS Pilani, India)