🤖 AI Summary
This study investigates the safety and robustness of generative language models in adversarial legal contract negotiation. Addressing the lack of competitive scenario modeling in existing evaluations, we formalize contract negotiation as a multi-model head-to-head adversarial game and introduce a dynamic security assessment framework grounded in red-teaming. Our methodology integrates legal compliance checking, quantitative bias measurement, and risk attribution analysis to systematically evaluate the vulnerabilities of leading open-source models in critical negotiation phases, such as clause concession and liability avoidance. Experiments uncover systematic biases and legally risky generation flaws across models. We propose actionable model selection guidelines and adversarial robustness optimization strategies, thereby bridging theoretical and practical gaps in safety verification for competitive AI deployments.
📝 Abstract
Generative language models are increasingly used for contract drafting and enhancement, creating scenarios in which competing parties deploy different language models against each other. This introduces not only a game-theoretic challenge but also significant AI safety and security concerns, since the language model employed by the opposing party may be unknown. These competitive interactions can be seen as adversarial testing grounds, where models are effectively red-teamed to expose vulnerabilities such as generating biased, harmful, or legally problematic text. Despite the importance of these challenges, the competitive robustness and safety of these models in adversarial settings remain poorly understood. In this small study, we approach this problem by evaluating the performance and vulnerabilities of major open-source language models in head-to-head competitions that simulate real-world contract negotiations. We further explore how these adversarial interactions can reveal potential risks, informing the development of more secure and reliable models. Our findings contribute to the growing body of research on AI safety, offering insights into model selection and optimisation in competitive legal contexts and providing actionable strategies for mitigating risks.