Can LLMs Generate Behaviors for Embodied Virtual Agents Based on Personality Traits?

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) can generate multimodal agent behaviors, spanning verbal and nonverbal modalities, that both reflect specific personality traits and are perceived as doing so by observers. Method: Focusing on extraversion as the core dimension, we conduct empirical evaluations in two social scenarios, negotiation and ice-breaking, and introduce the first personality-aware prompting framework that embeds Big Five Inventory (BFI) trait scores into LLM inputs to jointly govern linguistic output and nonverbal actions (e.g., posture, gaze). Validation employs linguistic analysis, action-mapping modeling, and user perception experiments. Contribution/Results: Generated behaviors significantly differentiate introverted vs. extraverted tendencies (p < 0.01), and human participants reliably identify agent personality traits with high accuracy (mean 82.3%). This work establishes the first framework for personality-guided, controllable multimodal behavior generation with LLMs and empirically confirms its cross-modal consistency and social interpretability.

📝 Abstract
This study proposes a framework that employs personality prompting with Large Language Models to generate verbal and nonverbal behaviors for virtual agents based on personality traits. Focusing on extraversion, we evaluated the system in two scenarios, negotiation and ice-breaking, using both introverted and extroverted agents. In Experiment 1, we conducted agent-to-agent simulations and performed linguistic analysis and personality classification to assess whether the LLM-generated language reflected the intended traits and whether the corresponding nonverbal behaviors varied by personality. In Experiment 2, we carried out a user study to evaluate whether these personality-aligned behaviors were consistent with their intended traits and perceptible to human observers. Our results show that LLMs can generate verbal and nonverbal behaviors that align with personality traits, and that users are able to recognize these traits through the agents' behaviors. This work underscores the potential of LLMs in shaping personality-aligned virtual agents.
Problem

Research questions and friction points this paper is trying to address.

Generating personality-driven verbal behaviors for virtual agents
Producing nonverbal cues aligned with personality traits
Evaluating human recognition of LLM-generated personality expressions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personality prompting with LLMs
Joint generation of verbal and nonverbal behaviors
Trait-conditioned virtual agents
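The personality-prompting idea described above can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the function name, prompt wording, trait threshold, and nonverbal tag format are all hypothetical.

```python
# Hypothetical sketch: embedding Big Five Inventory (BFI) trait scores into an
# LLM system prompt that requests both an utterance and nonverbal action tags.
# The prompt text, 3.5 threshold, and tag syntax are illustrative assumptions.

def build_personality_prompt(bfi_scores: dict, scenario: str) -> str:
    """Compose a system prompt conditioning an agent on BFI trait scores (1-5)."""
    trait_lines = "\n".join(
        f"- {trait}: {score:.1f} / 5" for trait, score in bfi_scores.items()
    )
    # Pick a coarse behavioral style from the extraversion score (assumed cutoff).
    if bfi_scores.get("extraversion", 3.0) >= 3.5:
        style = "outgoing, talkative, open posture, direct gaze"
    else:
        style = "reserved, brief, closed posture, averted gaze"
    return (
        f"You are a virtual agent in a {scenario} scenario.\n"
        f"Your Big Five Inventory profile:\n{trait_lines}\n"
        f"Speak and act consistently with this profile ({style}).\n"
        "For each turn, output the utterance followed by nonverbal tags, "
        "e.g. [gaze: direct] [posture: open]."
    )

# Example: an extraverted negotiator vs. an introverted ice-breaker.
extravert = build_personality_prompt({"extraversion": 4.5}, "negotiation")
introvert = build_personality_prompt({"extraversion": 2.0}, "ice-breaking")
```

The resulting string would be passed as the system message to the LLM, so a single prompt jointly steers linguistic style and the nonverbal tags that drive the agent's animation.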
Bin Han
University of Southern California, Los Angeles, CA, USA
Deuksin Kwon
University of Southern California, Los Angeles, CA, USA
Spencer Lin
University of Southern California
Socially Interactive Agents · Extended Reality · Multimodal AI
Kaleen Shrestha
University of Southern California, Los Angeles, CA, USA
Jonathan Gratch
Professor of Computer Science and Psychology, University of Southern California
affective computing · Socially Interactive Agents · artificial intelligence · virtual humans · emotion