🤖 AI Summary
To address weak generalization in natural language inference (NLI) caused by annotation artifacts and biases in supervised data, this paper proposes the first unsupervised chain-of-thought (CoT) reinforcement learning framework for NLI based on Group Relative Policy Optimization (GRPO), eliminating the need for human-annotated reasoning chains. The method combines GRPO optimization with parameter-efficient fine-tuning via LoRA/QLoRA and AWQ quantization, running a 32B-parameter model within a 22 GB memory footprint. Experiments on ANLI and 11 adversarial NLI benchmarks demonstrate state-of-the-art performance: the approach surpasses prior methods on 7 benchmarks and matches them on the rest, significantly improving robustness and deployment feasibility. The core contribution is the first application of GRPO to unsupervised CoT training for NLI, together with empirical validation of its effectiveness under resource-constrained conditions.
📝 Abstract
Natural Language Inference (NLI) is a central task in natural language understanding with applications in fact-checking, question answering, and information retrieval. Despite its importance, current NLI systems heavily rely on supervised learning with datasets that often contain annotation artifacts and biases, limiting generalization and real-world applicability. In this work, we apply a reinforcement learning-based approach using Group Relative Policy Optimization (GRPO) for Chain-of-Thought (CoT) learning in NLI, eliminating the need for labeled rationales and enabling this type of training on more challenging datasets such as ANLI. We fine-tune 7B, 14B, and 32B language models using parameter-efficient techniques (LoRA and QLoRA), demonstrating strong performance across standard and adversarial NLI benchmarks. Our 32B AWQ-quantized model surpasses state-of-the-art results on 7 out of 11 adversarial sets (or on all of them considering our replication) within a 22 GB memory footprint, showing that robust reasoning can be retained under aggressive quantization. This work provides a scalable and practical framework for building robust NLI systems without sacrificing inference quality.
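The key mechanism the abstract describes is that GRPO needs no labeled rationales: only the final NLI label is rewarded, and each sampled CoT completion is scored relative to the other completions in its group. The sketch below illustrates that idea under assumptions not stated in the paper (a simple 0/1 label-match reward and plain group standardization); the actual reward design and update are the authors' own.

```python
# Minimal sketch of GRPO-style group-relative advantages for NLI.
# Assumption (not from the paper): the reward is 1.0 if the label the
# model emits after its chain of thought matches the gold NLI label,
# else 0.0 -- no annotated reasoning chains are needed.

def label_reward(predicted: str, gold: str) -> float:
    """Score a completion by its final label only."""
    return 1.0 if predicted == gold else 0.0

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Standardize each reward against its group's mean and std.

    GRPO's core idea: sample G completions per prompt and use the
    group statistics as the baseline, so no learned critic is needed.
    """
    g = len(rewards)
    mean = sum(rewards) / g
    std = (sum((r - mean) ** 2 for r in rewards) / g) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled CoT completions for one premise/hypothesis pair.
gold = "contradiction"
predictions = ["contradiction", "entailment", "contradiction", "neutral"]
rewards = [label_reward(p, gold) for p in predictions]
advs = grpo_advantages(rewards)
# Completions ending in the gold label get positive advantage, the rest
# negative; these advantages weight the policy-gradient update.
```

In a full training loop these advantages would multiply the per-token log-probability ratios of each completion, as in the GRPO objective; the sketch covers only the reward-to-advantage step that makes the method label-only rather than rationale-supervised.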