🤖 AI Summary
This work addresses a common oversight in existing commonsense reasoning models: the neglect of personality traits in cognitive and reasoning processes, which limits their applicability in personalized settings. To bridge this gap, the authors introduce PCoKG, a personality-aware commonsense knowledge graph comprising over 520,000 quadruples that explicitly integrates personality traits into commonsense reasoning. They propose an iterative refinement framework built on a tripartite debate among large language models role-playing a proponent, an opponent, and a judge, combined with LoRA-based fine-tuning and a filtering-and-expansion strategy over the ATOMIC dataset. Extensive evaluation across multiple personality-related tasks shows that the approach significantly improves the consistency between generated responses and reference answers in personality-conditioned dialogue generation, and that performance scales with the parameter count of the base model, underscoring the robustness and practical utility of PCoKG.
📝 Abstract
Most commonsense reasoning models overlook the influence of personality traits, limiting their effectiveness in personalized systems such as dialogue generation. To address this limitation, we introduce the Personality-aware Commonsense Knowledge Graph (PCoKG), a structured dataset comprising 521,316 quadruples. We begin by employing three evaluators to score and filter events from the ATOMIC dataset, selecting those likely to elicit diverse reasoning patterns across different personality types. For knowledge graph construction, we leverage the role-playing capabilities of large language models (LLMs) to perform reasoning tasks. To enhance the quality of the generated knowledge, we incorporate a debate mechanism consisting of a proponent, an opponent, and a judge, which iteratively refines the outputs through feedback loops. We evaluate the dataset from multiple perspectives and conduct fine-tuning and ablation experiments across several LLM backbones to assess PCoKG's robustness and the effectiveness of its construction pipeline. Our LoRA-based fine-tuning results indicate a positive correlation between model performance and the parameter scale of the base models. Finally, we apply PCoKG to persona-based dialogue generation, where it demonstrates improved consistency between generated responses and reference outputs. This work bridges the gap between commonsense reasoning and individual cognitive differences, enabling the development of more personalized and context-aware AI systems.
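The proponent–opponent–judge feedback loop described above can be sketched as a small control flow. This is a minimal illustration, not the paper's implementation: `call_proponent`, `call_opponent`, and `call_judge` are hypothetical stand-ins for role-played LLM calls, and the stopping rule (accept or retry up to a round limit) is an assumption.

```python
# Hedged sketch of the tripartite debate refinement loop.
# The three call_* functions below are placeholders for role-played LLM calls;
# the paper's actual prompts, roles, and acceptance criteria may differ.

def call_proponent(event, personality, feedback):
    # Placeholder: a real system would prompt an LLM role-playing the given
    # personality to infer a commonsense consequence of the event, optionally
    # revising its previous answer in light of the opponent's critique.
    base = f"{personality} reaction to '{event}'"
    return base if feedback is None else base + " (revised)"

def call_opponent(candidate):
    # Placeholder critic: returns a critique string, or None if no objection.
    return None if "(revised)" in candidate else "inference ignores the personality trait"

def call_judge(candidate, critique):
    # Placeholder arbiter: accepts when the opponent raises no objection.
    return critique is None

def debate_refine(event, personality, max_rounds=3):
    """Iteratively refine one candidate inference via proponent/opponent/judge."""
    feedback = None
    candidate = None
    for _ in range(max_rounds):
        candidate = call_proponent(event, personality, feedback)
        critique = call_opponent(candidate)
        if call_judge(candidate, critique):
            return candidate   # judge accepts: emit as the quadruple's tail
        feedback = critique    # otherwise feed the critique back to the proponent
    return candidate           # fall back to the last candidate after max_rounds
```

With these stubs, the loop rejects the first draft, feeds the critique back, and accepts the revised inference on the second round.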