Social Hippocampus Memory Learning

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of privacy leakage and high communication overhead in heterogeneous federated learning, where sharing models or intermediate representations poses significant risks. To overcome these issues, the authors propose SoHip, a memory-centric social machine learning framework that introduces, for the first time, a hippocampus-inspired memory mechanism. In SoHip, each agent consolidates its local short-term memory into long-term memory and collaboratively learns through lightweight memory sharing and fusion, while keeping both data and models strictly local. This approach eliminates the need for model alignment and substantially reduces communication costs, offering provable theoretical privacy guarantees. Experimental results on two benchmark datasets demonstrate that SoHip significantly outperforms seven baseline methods, achieving a maximum accuracy improvement of 8.78%.

📝 Abstract
Social learning highlights that learning agents improve not in isolation, but through interaction and structured knowledge exchange with others. When introduced into machine learning, this principle gives rise to social machine learning (SML), where multiple agents collaboratively learn by sharing abstracted knowledge. Federated learning (FL) provides a natural collaboration substrate for this paradigm, yet existing heterogeneous FL approaches often rely on sharing model parameters or intermediate representations, which may expose sensitive information and incur additional overhead. In this work, we propose SoHip (Social Hippocampus Memory Learning), a memory-centric social machine learning framework that enables collaboration among heterogeneous agents via memory sharing rather than model sharing. SoHip abstracts each agent's individual short-term memory from local representations, consolidates it into individual long-term memory through a hippocampus-inspired mechanism, and fuses it with collectively aggregated long-term memory to enhance local prediction. Throughout the process, raw data and local models remain on-device, while only lightweight memories are exchanged. We provide theoretical analysis of convergence and privacy-preservation properties. Experiments on two benchmark datasets with seven baselines demonstrate that SoHip consistently outperforms existing methods, achieving up to 8.78% accuracy improvement.
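The abstract's pipeline (abstract short-term memory from local representations, consolidate it into long-term memory, aggregate and fuse across agents) can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual algorithm: the class names, the per-class prototype form of memory, and the exponential-moving-average consolidation rule are all assumptions made for the sketch.

```python
import numpy as np

class Agent:
    """Illustrative SoHip-style agent (hypothetical simplification).

    Memory is sketched here as per-class prototype vectors; the real
    framework's memory structure may differ.
    """

    def __init__(self, dim, n_classes, decay=0.9):
        self.stm = np.zeros((n_classes, dim))  # short-term memory
        self.ltm = np.zeros((n_classes, dim))  # long-term memory
        self.decay = decay                     # consolidation rate (assumed EMA)

    def observe(self, reps, labels):
        # Abstract short-term memory from local representations:
        # here, the per-class mean of recent feature vectors.
        for c in range(self.stm.shape[0]):
            mask = labels == c
            if mask.any():
                self.stm[c] = reps[mask].mean(axis=0)

    def consolidate(self):
        # Hippocampus-inspired consolidation, sketched as an
        # exponential moving average of short-term into long-term memory.
        self.ltm = self.decay * self.ltm + (1 - self.decay) * self.stm

    def fuse(self, collective_ltm, alpha=0.5):
        # Blend individual long-term memory with the collectively
        # aggregated memory to enhance local prediction.
        return alpha * self.ltm + (1 - alpha) * collective_ltm


def aggregate(ltms):
    # Collective aggregation: only lightweight memory vectors are
    # exchanged; raw data and models never leave each agent.
    return np.mean(np.stack(ltms), axis=0)
```

Note that under this sketch the communication cost per round is one small matrix (`n_classes x dim`) per agent, which is how a memory-sharing scheme could avoid both model alignment and the overhead of exchanging parameters or intermediate representations.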
Problem

Research questions and friction points this paper is trying to address.

federated learning
heterogeneous agents
privacy preservation
model sharing
social machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Social Machine Learning
Federated Learning
Memory-Centric Learning
Hippocampus-Inspired Mechanism
Heterogeneous Agents