Commit 2c7d0bc by Matej Klemen (2 parents: c3ccd8e, df3fd7b)

Merge branch 'main' of https://huggingface.co/datasets/cjvt/si_nli into main

Files changed (1): README.md +102 -1
README.md CHANGED
@@ -1,3 +1,104 @@
  ---
- license: cc-by-nc-sa-4.0
+ annotations_creators:
+ - expert-generated
+ language:
+ - sl
+ language_creators:
+ - found
+ - expert-generated
+ license:
+ - cc-by-nc-sa-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: Slovene natural language inference dataset
+ size_categories:
+ - 1K<n<10K
+ source_datasets: []
+ tags: []
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-class-classification
+ - natural-language-inference
  ---
+
+ # Dataset Card for SI-NLI
+
+ ### Dataset Summary
+
+ SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis), each manually labeled as "entailment", "contradiction", or "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked with modifying the hypothesis in a candidate pair so that the pair reflects one of the labels. The dataset is balanced, since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets containing 4,392, 547, and 998 pairs, respectively.
+
+ Only the premise and hypothesis are given in the test set (i.e., no annotations), since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting your test set predictions to SloBENCH to get the evaluation score and see how it compares to others.
+
+ If you have access to the private test set (with labels), you can load it instead of the public one by setting the environment variable `SI_NLI_TEST_PATH` to the file path.
+
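As a minimal sketch (not the official loading script), the environment variable can be set from Python before the dataset is loaded; the file path below is hypothetical and used only for illustration:

```python
import os

# Hypothetical path to a private labeled test file (illustration only).
os.environ["SI_NLI_TEST_PATH"] = "/path/to/si_nli_private_test.jsonl"

# The dataset would then be loaded as usual, e.g. with the `datasets` library:
# from datasets import load_dataset
# dataset = load_dataset("cjvt/si_nli")
```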
+ ### Supported Tasks and Leaderboards
+
+ Natural language inference.
+
+ ### Languages
+
+ Slovenian.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample instance from the dataset:
+ ```
+ {
+     'pair_id': 'P0',
+     'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
+     'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
+     'annotation1': 'entailment',
+     'annotator1_id': 'annotator_C',
+     'annotation2': 'entailment',
+     'annotator2_id': 'annotator_A',
+     'annotation3': '',
+     'annotator3_id': '',
+     'annotation_final': 'entailment',
+     'label': 'entailment'
+ }
+ ```
+
+ ### Data Fields
+
+ - `pair_id`: string identifier of the pair (`""` in the test set),
+ - `premise`: the premise sentence,
+ - `hypothesis`: the hypothesis sentence,
+ - `annotation1`: the first annotation (`""` if not available),
+ - `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
+ - `annotation2`: the second annotation (`""` if not available),
+ - `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
+ - `annotation3`: the third annotation (`""` if not available),
+ - `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
+ - `annotation_final`: the aggregated annotation where it could be determined unanimously (`""` if not available or if unanimous agreement could not be reached),
+ - `label`: the aggregated annotation: the same as `annotation_final` (in case of agreement), the same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that all examples with disagreement are placed in the training set.** This aggregation is only the simplest possibility; users may instead apply something more advanced based on the individual annotations (e.g., learning with disagreement).
+
+ Note: a small number of examples did not go through the annotation process because they were constructed by the authors while writing the guidelines. The quality of these examples was therefore checked by the authors themselves. Such examples do not have the individual annotations and annotator IDs.
+
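The simple aggregation described above can be sketched as follows (a minimal illustration, not the official preprocessing code): use the annotation when all available annotators agree, otherwise fall back to the first annotator's label.

```python
# Sketch of the simple label aggregation the card describes:
# unanimous annotation if one exists, otherwise the first annotation.

def aggregate_label(example: dict) -> str:
    """Reproduce the `label` field from the individual annotations."""
    annotations = [
        example[key] for key in ("annotation1", "annotation2", "annotation3")
        if example[key]  # skip empty ("") annotation slots
    ]
    if annotations and all(a == annotations[0] for a in annotations):
        return annotations[0]  # unanimous agreement
    # Disagreement (or no annotations at all): fall back to annotation1.
    return example.get("annotation1", "")

sample = {"annotation1": "entailment", "annotation2": "entailment", "annotation3": ""}
print(aggregate_label(sample))  # entailment
```

A more refined scheme could instead keep all individual annotations and train with the disagreement signal, as the card suggests.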
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
+
+ ### Licensing Information
+
+ CC BY-NC-SA 4.0.
+
+ ### Citation Information
+
+ ```
+ @misc{sinli,
+     title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
+     author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
+     url = {http://hdl.handle.net/11356/1707},
+     note = {Slovenian language resource repository {CLARIN}.{SI}},
+     year = {2022}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.