OmniSapiens: A Foundation Model for Social Behavior Processing via Heterogeneity-Aware Relative Policy Optimization

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to modeling human social behavior often treat heterogeneous behavioral dimensions in isolation, leading to high training costs and poor generalization. To address this, the work proposes Heterogeneity-Aware Relative Policy Optimization (HARPO), a reinforcement learning method that explicitly balances learning across heterogeneous behavioral data. HARPO modulates the advantage function to harmonize multi-task and multi-sample policy updates, enabling unified and stable social behavior modeling. Built on this framework, the authors develop OmniSapiens-7B 2.0, a 7-billion-parameter foundation model for social behavior that achieves gains of up to 16.85% and 9.37% in multi-task and held-out settings, respectively, outperforming existing behavioral foundation models consistently across diverse behavioral tasks.

📝 Abstract
To develop socially intelligent AI, existing approaches typically model human behavioral dimensions (e.g., affective, cognitive, or social attributes) in isolation. Although useful, task-specific modeling often increases training costs and limits generalization across behavioral settings. Recent reasoning RL methods facilitate training a single unified model across multiple behavioral tasks, but do not explicitly address learning across heterogeneous behavioral data. To address this gap, we introduce Heterogeneity-Aware Relative Policy Optimization (HARPO), an RL method that balances learning across heterogeneous tasks and samples. This is achieved by modulating advantages to ensure that no single task or sample carries disproportionate influence during policy optimization. Using HARPO, we develop and release OmniSapiens-7B 2.0, a foundation model for social behavior processing. Relative to existing behavioral foundation models, OmniSapiens-7B 2.0 achieves the strongest performance across behavioral tasks, with gains of up to +16.85% and +9.37% in multi-task and held-out settings, respectively, while producing more explicit and robust reasoning traces. We also validate HARPO against recent RL methods, where it achieves the most consistently strong performance across behavioral tasks.
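The abstract describes modulating advantages so that no single task or sample dominates the policy update. The sketch below illustrates one way such a mechanism could look: GRPO-style group-relative advantages rescaled by inverse-frequency task weights. The function name, data layout, and the specific weighting scheme are illustrative assumptions, not the paper's actual HARPO formulation.

```python
import math
from collections import defaultdict

def modulated_advantages(samples):
    """Group-relative advantages with per-task balancing (illustrative sketch).

    `samples` is a list of dicts with keys:
      "task"   - behavioral task identifier
      "group"  - prompt/group id (rollouts of the same prompt share a group)
      "reward" - scalar reward for the rollout
    Returns one modulated advantage per sample, in input order.
    """
    # 1. Group-relative advantage: standardize each reward within its
    #    rollout group (reward minus group mean, divided by group std).
    groups = defaultdict(list)
    for s in samples:
        groups[s["group"]].append(s["reward"])
    stats = {}
    for g, rs in groups.items():
        mean = sum(rs) / len(rs)
        var = sum((r - mean) ** 2 for r in rs) / len(rs)
        stats[g] = (mean, math.sqrt(var) + 1e-8)  # eps avoids divide-by-zero

    # 2. Task weights inversely proportional to task frequency, so
    #    over-represented tasks do not dominate the policy update.
    counts = defaultdict(int)
    for s in samples:
        counts[s["task"]] += 1
    n_tasks = len(counts)
    weights = {t: len(samples) / (n_tasks * c) for t, c in counts.items()}

    # 3. Modulated advantage = task weight * group-relative advantage.
    out = []
    for s in samples:
        mean, std = stats[s["group"]]
        out.append(weights[s["task"]] * (s["reward"] - mean) / std)
    return out
```

With perfectly balanced tasks the weights reduce to 1.0 and the result is a plain group-relative advantage; an over-sampled task is scaled down and a rare task scaled up.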
Problem

Research questions and friction points this paper is trying to address.

social behavior processing
heterogeneous behavioral data
foundation model
multi-task learning
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneity-Aware Relative Policy Optimization
Foundation Model
Social Behavior Processing
Reinforcement Learning
Multitask Generalization