Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether trustworthiness attributes—robustness, fairness, and privacy—co-improve during “weak-to-strong” generalization in large language models (LLMs). Focusing on the paradigm where strong models are trained via fine-tuning on weak-model outputs, we conduct the first systematic evaluation of cross-model transferability of trustworthiness. We propose two trust-aware regularization strategies: Weak TFT and Weak+WTS TFT. Methodologically, our approach integrates trustworthiness regularization, distillation from weak-model outputs, and multi-dimensional evaluation—including fairness, adversarial/out-of-distribution robustness, and privacy preservation. Experimental results demonstrate that fairness and robustness exhibit significant cross-model generalization, whereas privacy fails to transfer effectively. Moreover, the two-stage regularization enables the strong model to outperform the original weak model across most trustworthiness dimensions. Our work establishes “weak-to-strong trustworthy generalization” as a novel paradigm for developing reliable LLMs.

📝 Abstract
The rapid proliferation of generative AI, especially large language models, has led to their integration into a variety of applications. A key phenomenon known as weak-to-strong generalization - where a strong model trained on a weak model's outputs surpasses the weak model in task performance - has gained significant attention. Yet whether critical trustworthiness properties such as robustness, fairness, and privacy generalize similarly remains an open question. In this work, we study this question by examining whether a stronger model can inherit trustworthiness properties when fine-tuned on a weaker model's outputs, a process we term weak-to-strong trustworthiness generalization. To address this, we introduce two foundational training strategies: 1) Weak Trustworthiness Finetuning (Weak TFT), which applies trustworthiness regularization during fine-tuning of the weak model, and 2) Weak and Weak-to-Strong Trustworthiness Finetuning (Weak+WTS TFT), which extends regularization to both the weak and strong models. Our experimental evaluation on real-world datasets reveals that some trustworthiness properties, such as fairness and adversarial and OOD robustness, transfer significantly better when both models are regularized, while others, such as privacy, show no signs of weak-to-strong trustworthiness generalization. As the first study of trustworthiness transfer via weak-to-strong generalization, our work provides valuable insights into the potential and limitations of this paradigm.
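The two-stage scheme described in the abstract can be illustrated with a minimal toy sketch: a "weak" logistic-regression model (seeing few features) is fine-tuned with a trust regularizer (Weak TFT), and a "strong" model (seeing all features) is then fine-tuned on the weak model's outputs, with or without the same regularizer (Weak+WTS TFT). The demographic-parity penalty, the feature split, and all names here are illustrative assumptions, not the paper's actual models or regularizers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "weak" model sees only 2 features, the "strong" model all 5.
n, d = 400, 5
X = rng.normal(size=(n, d))
y = ((X @ rng.normal(size=d)) + 0.3 * rng.normal(size=n) > 0).astype(float)
g = (X[:, 0] > 0).astype(float)  # hypothetical sensitive-group attribute

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def dp_gap(p, g):
    # Demographic-parity gap: |mean prediction in group 1 - group 0|.
    return abs(p[g == 1].mean() - p[g == 0].mean())

def finetune(X, targets, g, lam=0.0, lr=0.5, steps=400):
    """Logistic regression fit to (possibly weak-generated) targets.
    lam > 0 adds a demographic-parity penalty as a stand-in trust regularizer."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - targets) / len(targets)  # cross-entropy gradient
        if lam > 0:  # gradient of lam * |gap|, with s = dp/dz
            s = p * (1.0 - p)
            diff = p[g == 1].mean() - p[g == 0].mean()
            gap_grad = (X[g == 1] * s[g == 1, None]).mean(axis=0) \
                     - (X[g == 0] * s[g == 0, None]).mean(axis=0)
            grad += lam * np.sign(diff) * gap_grad
        w -= lr * grad
    return w

# Stage 1 (Weak TFT): fine-tune the weak model with the trust regularizer.
w_weak = finetune(X[:, :2], y, g, lam=1.0)
weak_labels = (sigmoid(X[:, :2] @ w_weak) > 0.5).astype(float)

# Stage 2: the strong model is fine-tuned on the weak model's outputs.
w_plain = finetune(X, weak_labels, g, lam=0.0)  # unregularized baseline
w_wts = finetune(X, weak_labels, g, lam=2.0)    # Weak+WTS TFT: regularize strong too

gap_plain = dp_gap(sigmoid(X @ w_plain), g)
gap_wts = dp_gap(sigmoid(X @ w_wts), g)
print(f"DP gap, strong model unregularized: {gap_plain:.3f}")
print(f"DP gap, strong model Weak+WTS TFT:  {gap_wts:.3f}")
```

The sketch mirrors the paper's design choice only at a schematic level: regularizing both stages keeps the trust signal (here, a fairness gap) from being lost when the strong model imitates the weak model's labels.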
Problem

Research questions and friction points this paper is trying to address.

Generative AI
Trust Generalization
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weak TFT
Weak+WTS TFT
Trust Generalization