🤖 AI Summary
This study empirically evaluates the deterrent effect of five binding precedents (SV 11, 14, 17, 26, 37) issued by Brazil’s Supreme Federal Court on repetitive litigation. It addresses the central question: *Do binding precedents reduce the filing of substantively similar cases?*
Method: For the first time, it systematically applies NLP techniques—including TF-IDF, regular expressions, LSTM, and BERT—to measure legal text similarity and compare judicial topic distributions pre- and post-precedent issuance.
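The core retrieval step can be illustrated with a minimal TF-IDF cosine-similarity sketch in pure Python. This is illustrative only: the paper's actual pipeline, corpus, and tokenization are not reproduced here, and the toy documents below are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split on word characters (accented Portuguese letters included)."""
    return re.findall(r"[a-záéíóúâêôãõç]+", text.lower())

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (dict term -> weight) per document."""
    tokenized = [tokenize(d) for d in docs]
    n = len(docs)
    df = Counter()  # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vec = {t: (tf[t] / len(toks)) * math.log(n / df[t]) for t in tf}
        vectors.append(vec)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Under this scheme, two decisions sharing distinctive terms (e.g. "algemas", the subject of SV 11) score higher than a pair on unrelated subjects, which is the basis for flagging candidate similar cases.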
Contribution/Results: None of the precedents significantly reduced related case filings; some even triggered new litigation streams. Deep learning models (LSTM/BERT) underperformed traditional methods (TF-IDF/regex) in this domain-specific retrieval task. Precedent ineffectiveness exhibited high heterogeneity and case-specific dependence, refuting unitary causal explanations. The study establishes a reproducible, quantitative framework and methodological benchmark for assessing the real-world efficacy of judicial precedent.
📝 Abstract
Binding precedents (Súmulas Vinculantes) are a juridical instrument unique to the Brazilian legal system, whose objectives include shielding the Federal Supreme Court from repetitive demands. Studies of the effectiveness of these instruments in decreasing the Court's exposure to similar cases, however, indicate that they tend to fail in that respect, with some binding precedents seemingly creating new demands. We empirically assess the legal impact of five binding precedents, 11, 14, 17, 26 and 37, at the highest court level through their effects on the legal subjects they address. This analysis requires comparing the Court's rulings on the precedents' themes before and after their creation, which means that such decisions must be detected through Similar Case Retrieval techniques. The contributions of this article are therefore twofold: on the mathematical side, we compare different Natural Language Processing methods -- TF-IDF, LSTM, BERT, and regex -- for Similar Case Retrieval, whereas on the legal side, we contrast the inefficiency of these binding precedents with a set of hypotheses that may justify their repeated usage. We observe that the deep learning models performed significantly worse on this specific Similar Case Retrieval task and that the reasons why binding precedents fail to curb repetitive demand are heterogeneous and case-dependent, making it impossible to single out a specific cause.
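The regex baseline mentioned above can be sketched as a simple citation detector. The pattern below is an assumption for illustration, not the paper's actual expression: it flags decision texts that explicitly cite a given binding precedent number.

```python
import re

def cites_precedent(text, number):
    """Return True if the text cites binding precedent `number`.

    Matches hypothetical citation forms such as "Súmula Vinculante 11",
    "súmula vinculante nº 11", or the abbreviation "SV 11".
    """
    pattern = rf"(?:s[úu]mula\s+vinculante|sv)\s*(?:n[ºo°.]?\s*)?{number}\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None
```

A detector like this has high precision on explicit citations but misses decisions that discuss the precedent's theme without naming it, which is why similarity-based methods such as TF-IDF are compared against it.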