AI Summary
This study investigates the capacity of large language models (LLMs) to internalize multi-level legal abstractions (constitutional principles, statutory provisions, and case-law doctrines) within the German legal system, specifically for the criminal classification of hate speech. We propose a multi-tiered legal conditioning framework grounded in prompt engineering and in-context learning, sequentially injecting constitutional values, §130 of the German Criminal Code, and summaries of landmark rulings. Our empirical analysis reveals, for the first time, that conditioning at abstract levels induces logical inconsistencies and factual hallucinations, whereas concrete-level conditioning improves target-group identification accuracy but fails to bridge the gap in legality assessment, remaining substantially below expert human performance. The core contribution is the empirical establishment of a negative correlation between the abstraction level of injected legal knowledge and LLM reasoning reliability, providing both theoretical grounding and a methodological framework for hierarchical legal knowledge modeling in AI systems.
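As an illustration of what such multi-tiered conditioning could look like in practice, the sketch below composes prompts that inject selected levels of legal abstraction before the classification instruction. The tier texts, function names, and prompt wording are hypothetical placeholders, not the paper's actual prompts.

```python
# Hypothetical sketch of multi-tier legal conditioning via prompt construction.
# Tier contents are illustrative placeholders, not the prompts used in the paper.

CONSTITUTIONAL_TIER = (
    "Constitutional principles: human dignity is inviolable (Art. 1 GG); "
    "freedom of expression is protected but limited by general laws (Art. 5 GG)."
)

STATUTORY_TIER = (
    "Statutory provision: Section 130 of the German Criminal Code (Volksverhetzung) "
    "penalises incitement to hatred against segments of the population or against "
    "groups defined by national, racial, religious or ethnic origin."
)

CASE_LAW_TIER = (
    "Case-law summaries: <insert summaries of landmark rulings on Section 130 here>."
)

def build_prompt(post: str, tiers: list[str]) -> str:
    """Compose an in-context-learning prompt that injects the selected
    levels of legal abstraction before the classification instruction."""
    context = "\n\n".join(tiers)
    return (
        f"{context}\n\n"
        f"Social media post:\n\"{post}\"\n\n"
        "Question: Does this post fulfil the elements of incitement to hatred "
        "under Section 130 of the German Criminal Code? Give a short legal "
        "assessment, then answer 'punishable' or 'not punishable'."
    )

# Example: concrete-level conditioning (statute + case law) vs. abstract-level only.
concrete_prompt = build_prompt("<example post>", [STATUTORY_TIER, CASE_LAW_TIER])
abstract_prompt = build_prompt("<example post>", [CONSTITUTIONAL_TIER])
```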
Abstract
The assessment of legal problems requires the consideration of a specific legal system and its levels of abstraction, from constitutional law to statutory law to case law. The extent to which Large Language Models (LLMs) internalize such legal systems is unknown. In this paper, we propose and investigate different approaches to conditioning LLMs at multiple levels of abstraction in legal systems in order to detect potentially punishable hate speech. We focus on the task of classifying whether a specific social media post falls under the criminal offense of incitement to hatred as prescribed by the German Criminal Code. The results show that there is still a significant performance gap between models and legal experts in the legal assessment of hate speech, regardless of the level of abstraction with which the models were conditioned. Our analysis revealed that models conditioned on abstract legal knowledge lacked a deep understanding of the task, often contradicting themselves and hallucinating answers, while models conditioned on concrete legal knowledge performed reasonably well in identifying relevant target groups but struggled to classify target conducts.
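The reported gap to legal experts presupposes some comparison of model verdicts against expert annotations. A minimal sketch of such a comparison is given below, assuming a binary "punishable" / "not punishable" label per post; the label scheme and the numbers are illustrative assumptions, not the paper's actual annotation protocol or results.

```python
# Minimal sketch of comparing model verdicts with expert labels; the labels and
# counts below are made up for illustration only.
def agreement(predictions: list[str], expert_labels: list[str]) -> float:
    """Fraction of posts where the model's legality verdict matches the expert's."""
    assert len(predictions) == len(expert_labels)
    return sum(p == e for p, e in zip(predictions, expert_labels)) / len(expert_labels)

# Hypothetical example: the model agrees with the experts on 8 of 10 posts.
preds  = ["punishable"] * 6 + ["not punishable"] * 4
labels = ["punishable"] * 8 + ["not punishable"] * 2
print(f"model-expert agreement: {agreement(preds, labels):.2f}")  # 0.80
```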