Enhancing Trustworthiness with Mixed Precision: Benchmarks, Opportunities, and Challenges

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM quantization methods primarily optimize for perplexity or classification accuracy, neglecting critical trustworthiness metrics—including adversarial robustness, fairness, machine ethics, and out-of-distribution robustness—thereby limiting deployment in high-stakes domains such as finance and healthcare. This work systematically evaluates how different quantization strategies (weight, activation, and KV-cache quantization) and compression ratios affect these four trustworthiness dimensions, revealing substantial performance instability across settings. To address this, we propose Precision Ensemble Voting: a novel method that fuses predictions from multiple mixed-precision variants of the same base model. Extensive experiments demonstrate that our approach improves trustworthiness metrics by up to 5.8% while preserving generation quality, and maintains consistent gains across diverse quantization configurations. This provides a scalable, robust pathway for trustworthy model compression in safety-critical applications.

📝 Abstract
Large language models (LLMs) have shown promising performance across various tasks. However, their autoregressive decoding process poses significant challenges for efficient deployment on existing AI hardware. Quantization alleviates memory and compute pressure by compressing weights, activations, and KV caches to low precision while preserving generation quality. However, existing quantization frameworks typically focus on perplexity or classification accuracy, often omitting critical trustworthiness metrics. This gap introduces risks when applying quantized LLMs to downstream high-stakes domains such as finance and healthcare. In this work, we systematically investigate the impact of quantization on four trustworthiness metrics (adversarial robustness, fairness, machine ethics, and out-of-distribution robustness) and identify instability across compression ratios and quantization methods. Building on these observations, we develop a novel precision-ensemble voting approach that leverages predictions from mixed-precision variants of the same model and consistently improves performance by up to 5.8% on trustworthiness metrics. Our results highlight the importance of considering trustworthiness when developing model compression techniques and point to research opportunities at the intersection of compression and trustworthiness for safety-critical applications.
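The core idea of precision-ensemble voting can be sketched as a majority vote over the discrete predictions of several quantized variants of one base model. The sketch below is a minimal illustration only, not the paper's implementation; the variant interface (callables mapping a prompt to a label) and the example variant names are assumptions.

```python
from collections import Counter

def precision_ensemble_vote(variant_predict_fns, prompt):
    """Fuse predictions from mixed-precision variants of the same base model
    by majority vote. Each entry in variant_predict_fns is a callable that
    maps a prompt to a discrete label (hypothetical interface)."""
    votes = [predict(prompt) for predict in variant_predict_fns]
    # Counter.most_common breaks ties by first-encountered order, so listing
    # the highest-precision variant first gives it tie-breaking priority.
    return Counter(votes).most_common(1)[0][0]

# Toy stand-ins for three precision variants of one model.
variants = [
    lambda p: "safe",    # e.g. FP16 baseline
    lambda p: "safe",    # e.g. W8A8 quantized
    lambda p: "unsafe",  # e.g. W4A4 quantized
]
print(precision_ensemble_vote(variants, "example input"))  # safe
```

Voting only requires the variants to agree on a label space, so it applies to the classification-style trustworthiness benchmarks (fairness, ethics, robustness) the paper evaluates without retraining any variant.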
Problem

Research questions and friction points this paper is trying to address.

Quantization impacts LLM trustworthiness metrics like adversarial robustness and fairness
Existing quantization methods overlook trustworthiness in high-stakes domains such as finance
A precision-ensemble voting method improves trustworthiness by leveraging mixed-precision variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-precision ensemble voting improves trustworthiness
Systematically evaluates quantization impact on four trustworthiness metrics
Addresses trustworthiness gaps in existing quantization frameworks
Guanxi Lu
Imperial College London
Hao Mark Chen
Department of Computing, Imperial College London
Zhiqiang Que
Department of Computing, Imperial College London
Wayne Luk
Professor of Computer Engineering, Imperial College London
Hardware and Architecture, Reconfigurable Computing, Design Automation
Hongxiang Fan
Department of Computing, Imperial College London