This repository is gated: you must accept its access conditions before downloading any files. In addition, the PortuLex benchmark includes datasets with their own access requirements:
- The RRI dataset requires accepting these terms: https://bit.ly/rhetoricalrole.
- The FGV-STF corpus must be requested directly from the original authors: https://www.sciencedirect.com/science/article/abs/pii/S0306457321002727.
PortuLex_benchmark
"PortuLex" benchmark is a four-task benchmark designed to evaluate the quality and performance of language models in the Portuguese legal domain.
| Dataset | Task | Train | Dev | Test |
|---|---|---|---|---|
| RRI | CLS | 8.26k | 1.05k | 1.47k |
| LeNER-Br | NER | 7.83k | 1.18k | 1.39k |
| UlyssesNER-Br | NER | 3.28k | 489 | 524 |
| FGV-STF | NER | 415 | 60 | 119 |
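Because the repository is gated, you need to accept the access conditions on the Hub and authenticate before downloading any split. The snippet below is a minimal loading sketch using the `datasets` and `huggingface_hub` libraries; the repository ID and the subset name are placeholders for illustration, so replace them with the values shown on this dataset page (e.g. as returned by `get_dataset_config_names`).

```python
# Minimal loading sketch. The repository ID and subset name below are
# placeholders -- substitute the actual values from this dataset page.
from huggingface_hub import login
from datasets import load_dataset, get_dataset_config_names

login()  # paste a Hugging Face access token after accepting the gate conditions

REPO_ID = "<namespace>/PortuLex_benchmark"  # hypothetical ID, adjust as needed

# List the available subsets (one per benchmark task).
print(get_dataset_config_names(REPO_ID))

# Load one subset and inspect its train/dev/test splits.
dataset = load_dataset(REPO_ID, name="lener_br")  # subset name is an assumption
for split_name, split in dataset.items():
    print(split_name, len(split))
```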
Dataset Details
PortuLex comprises four corpora: LeNER-Br, Rhetorical Role Identification (RRI), FGV-STF, and UlyssesNER-Br.
- LeNER-Br: the first Named Entity Recognition (NER) corpus for the Brazilian Portuguese legal domain, built from documents of higher and state-level courts.
- RRI: rhetorical role annotations of judicial decisions from the Court of Justice of Mato Grosso do Sul (Brazil).
- FGV-STF: decisions from Brazil's Supreme Federal Court (STF), annotated for entity extraction.
- UlyssesNER-Br: NER corpus of bills and legislative queries from the Chamber of Deputies of Brazil.
Dataset Evaluation
Macro F1-Score (%) for multiple models evaluated on PortuLex benchmark test splits:
| Model | LeNER | UlyNER-PL (Coarse/Fine) | FGV-STF (Coarse) | RRIP | Average (%) |
|---|---|---|---|---|---|
| BERTimbau-base | 88.34 | 86.39/83.83 | 79.34 | 82.34 | 83.78 |
| BERTimbau-large | 88.64 | 87.77/84.74 | 79.71 | 83.79 | 84.60 |
| Albertina-PT-BR-base | 89.26 | 86.35/84.63 | 79.30 | 81.16 | 83.80 |
| Albertina-PT-BR-xlarge | 90.09 | 88.36/86.62 | 79.94 | 82.79 | 85.08 |
| BERTikal-base | 83.68 | 79.21/75.70 | 77.73 | 81.11 | 79.99 |
| JurisBERT-base | 81.74 | 81.67/77.97 | 76.04 | 80.85 | 79.61 |
| BERTimbauLAW-base | 84.90 | 87.11/84.42 | 79.78 | 82.35 | 83.20 |
| Legal-XLM-R-base | 87.48 | 83.49/83.16 | 79.79 | 82.35 | 83.24 |
| Legal-XLM-R-large | 88.39 | 84.65/84.55 | 79.36 | 81.66 | 83.50 |
| Legal-RoBERTa-PT-large | 87.96 | 88.32/84.83 | 79.57 | 81.98 | 84.02 |
| Ours | | | | | |
| RoBERTaTimbau-base (Reproduction of BERTimbau) | 89.68 | 87.53/85.74 | 78.82 | 82.03 | 84.29 |
| RoBERTaLegalPT-base (Trained on LegalPT) | 90.59 | 85.45/84.40 | 79.92 | 82.84 | 84.57 |
| RoBERTaCrawlPT-base (Trained on CrawlPT) | 89.24 | 88.22/86.58 | 79.88 | 82.80 | 84.83 |
| RoBERTaLexPT-base (Trained on CrawlPT + LegalPT) | 90.73 | 88.56/86.03 | 80.40 | 83.22 | 85.41 |
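The numbers above are macro-averaged F1 scores on the test split of each corpus. For reference, the sketch below shows one common way to compute macro F1 for the NER tasks (entity-level, with `seqeval`) and for the RRI classification task (with `scikit-learn`); it uses toy labels to illustrate the metric and is not the evaluation script that produced this table.

```python
# Illustrative macro F1 computation, not the benchmark's exact evaluation code.
from seqeval.metrics import f1_score as entity_f1   # entity-level F1 for the NER corpora
from sklearn.metrics import f1_score as label_f1    # label-level F1 for the RRI task

# Token-level BIO predictions for a NER corpus (toy example; tag names are illustrative).
y_true_ner = [["B-PESSOA", "I-PESSOA", "O", "B-ORGANIZACAO"]]
y_pred_ner = [["B-PESSOA", "I-PESSOA", "O", "O"]]
print("NER macro F1:", entity_f1(y_true_ner, y_pred_ner, average="macro"))

# Sentence-level rhetorical role labels for RRI (toy example; label names are illustrative).
y_true_cls = ["fato", "fundamento", "dispositivo"]
y_pred_cls = ["fato", "fato", "dispositivo"]
print("RRI macro F1:", label_f1(y_true_cls, y_pred_cls, average="macro"))
```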
Citation
@InProceedings{garcia2024_roberlexpt,
author="Garcia, Eduardo A. S.
and Silva, N{\'a}dia F. F.
and Siqueira, Felipe
and Gomes, Juliana R. S.
and Albuquerque, Hidelberg O.
and Souza, Ellen
and Lima, Eliomar
and De Carvalho, Andr{\'e}",
title="RoBERTaLexPT: A Legal RoBERTa Model pretrained with deduplication for Portuguese",
booktitle="Computational Processing of the Portuguese Language",
year="2024",
publisher="Association for Computational Linguistics"
}
Acknowledgment
This work has been supported by the AI Center of Excellence (Centro de Excelência em Inteligência Artificial – CEIA) of the Institute of Informatics at the Federal University of Goiás (INF-UFG).