SoMe: A Realistic Benchmark for LLM-based Social Media Agents

📅 2025-12-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current LLM-based agents lack comprehensive evaluation benchmarks tailored to realistic social media scenarios. To address this, we propose SoMe, the first benchmark for social media agents that jointly assesses multimodal content understanding, user behavior modeling, and complex decision-making across eight task categories, leveraging over 9.16 million real-world posts and nearly 18,000 human-annotated queries. SoMe combines real multi-platform data with tool-augmented task design, enabling evaluation of API invocation, content parsing, and user profiling capabilities. Experimental results expose systematic deficiencies in mainstream LLMs in understanding accuracy, temporal sensitivity, and reasoning consistency. By substantially raising evaluation fidelity and difficulty, SoMe fills a critical gap in assessing LLM capabilities within dynamic, noisy social environments.
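
To make the tool-augmented setup concrete, below is a minimal, self-contained sketch of how an agent could be scored on a tool-invocation task over a small local post dump. All names here (search_posts, get_user_profile, run_agent, the sample query) are hypothetical illustrations, not the paper's actual interface; SoMe's real tools, tasks, and metrics are defined in the authors' released code at https://github.com/LivXue/SoMe.

```python
# Hypothetical sketch of a tool-augmented evaluation loop for a social media agent.
# Tool names, data, and scoring are illustrative assumptions, not SoMe's actual API.
from typing import Callable, Dict, List


def search_posts(keyword: str) -> List[dict]:
    """Hypothetical tool: return posts matching a keyword from a local dump."""
    corpus = [
        {"id": 1, "text": "New phone launch trending today", "likes": 120},
        {"id": 2, "text": "Phone battery complaints on the rise", "likes": 45},
    ]
    return [p for p in corpus if keyword.lower() in p["text"].lower()]


def get_user_profile(user_id: str) -> dict:
    """Hypothetical tool: look up a user's profile record."""
    return {"user_id": user_id, "followers": 980, "topics": ["tech", "reviews"]}


# Registry of callable tools exposed to the agent.
TOOLS: Dict[str, Callable] = {
    "search_posts": search_posts,
    "get_user_profile": get_user_profile,
}


def run_agent(query: str, tools: Dict[str, Callable]) -> str:
    """Placeholder for an LLM agent; here it deterministically calls one tool."""
    posts = tools["search_posts"]("phone")
    top = max(posts, key=lambda p: p["likes"])
    return f"Most-liked relevant post: {top['text']}"


def evaluate(queries: List[dict]) -> float:
    """Score agent answers against gold references by simple substring match."""
    correct = 0
    for q in queries:
        answer = run_agent(q["query"], TOOLS)
        correct += int(q["gold"].lower() in answer.lower())
    return correct / len(queries)


if __name__ == "__main__":
    sample = [{"query": "Which phone-related post got the most likes?",
               "gold": "New phone launch"}]
    print(f"accuracy = {evaluate(sample):.2f}")
```

A real harness in this spirit would replace run_agent with an LLM that chooses which tool to call and with task-specific metrics, but the loop of query, tool calls, and scoring against annotated references follows the same shape.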

πŸ“ Abstract
Intelligent agents powered by large language models (LLMs) have recently demonstrated impressive capabilities and gained increasing popularity on social media platforms. While LLM agents are reshaping the ecology of social media, there remains a gap in comprehensively evaluating their ability to comprehend media content, understand user behaviors, and make intricate decisions. To address this challenge, we introduce SoMe, a pioneering benchmark designed to evaluate social media agents equipped with various agent tools for accessing and analyzing social media data. SoMe comprises a diverse collection of 8 social media agent tasks, 9,164,284 posts, 6,591 user profiles, and 25,686 reports drawn from various social media platforms and external websites, along with 17,869 meticulously annotated task queries. Compared with existing datasets and benchmarks for social media tasks, SoMe is the first to provide a versatile and realistic platform for LLM-based social media agents to handle diverse social media tasks. Through extensive quantitative and qualitative analysis, we provide the first overview of the performance of mainstream agentic LLMs in realistic social media environments and identify several limitations. Our evaluation reveals that neither current closed-source nor open-source LLMs can handle social media agent tasks satisfactorily. SoMe thus provides a challenging yet meaningful testbed for future social media agents. Our code and data are available at https://github.com/LivXue/SoMe.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM agents' social media content comprehension and decision-making
Assessing agent performance across diverse social media tasks and data
Identifying limitations of current LLMs in realistic social media environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces SoMe benchmark for evaluating social media agents
Includes diverse tasks and annotated data from multiple platforms
Provides realistic testbed to assess LLM agent limitations