🤖 AI Summary
Swiss multilingual legal translation has long relied on scarce bilingual legal linguists, impeding effective access to justice. To address this, we introduce SwiLTra-Bench, the first large-scale, expert-validated benchmark for legal translation spanning the four official Swiss languages (German, French, Italian, and Romansh) plus English, comprising over 180K aligned translation pairs drawn from laws, headnotes, and press releases. We propose a standardized evaluation framework tailored to Switzerland's multilingual federal legal system and develop SwiLTra-Judge, an automated assessment system that achieves a 0.92 Spearman correlation with human expert judgments. Experimental results show that state-of-the-art closed-source LLMs (e.g., Claude-3.5-Sonnet) achieve superior zero-shot translation performance across all document types; open-source models improve substantially after fine-tuning but still underperform the best zero-shot baselines. This work establishes a rigorous empirical foundation for trustworthy legal translation in multilingual jurisdictions: a high-quality benchmark, a domain-specific evaluation protocol, and a validated automatic judge.
📝 Abstract
In Switzerland, legal translation is uniquely important due to the country's four official languages and requirements for multilingual legal documentation. However, this process traditionally relies on professionals who must be both legal experts and skilled translators, creating bottlenecks and impairing effective access to justice. To address this challenge, we introduce SwiLTra-Bench, a comprehensive multilingual benchmark of over 180K aligned Swiss legal translation pairs comprising laws, headnotes, and press releases across all Swiss languages along with English, designed to evaluate LLM-based translation systems. Our systematic evaluation reveals that frontier models achieve superior translation performance across all document types, while specialized translation systems excel specifically in laws but underperform in headnotes. Through rigorous testing and human expert validation, we demonstrate that while fine-tuning open SLMs significantly improves their translation quality, they still lag behind the best zero-shot prompted frontier models such as Claude-3.5-Sonnet. Additionally, we present SwiLTra-Judge, a specialized LLM evaluation system that aligns best with human expert assessments.