---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: dev
        path: data/dev-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: premise
      dtype: string
    - name: hypothesis
      dtype: string
    - name: label
      dtype: string
    - name: source
      dtype: string
    - name: split
      dtype: string
    - name: premise_ru
      dtype: string
    - name: hypothesis_ru
      dtype: string
    - name: reverse_entailment_score
      dtype: float64
    - name: len_ratio
      dtype: float64
    - name: idx
      dtype: int64
  splits:
    - name: train
      num_bytes: 1156491691
      num_examples: 1756548
    - name: dev
      num_bytes: 78632908
      num_examples: 106557
    - name: test
      num_bytes: 30464486
      num_examples: 34615
  download_size: 504709758
  dataset_size: 1265589085
---

# Dataset Card for "nli-rus-translated-v2021"

This dataset was introduced in the Habr post "Нейросети для Natural Language Inference (NLI): логические умозаключения на русском языке" ("Neural networks for Natural Language Inference (NLI): logical reasoning in Russian").

It is composed of various English NLI datasets automatically translated into Russian using two different methods.
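Each row carries the original English premise/hypothesis pair, its Russian translation, and two numeric columns (`reverse_entailment_score` and `len_ratio`) that, judging by their names, can serve as automatic translation-quality signals. Below is a minimal sketch of filtering rows by those columns; the threshold values and the `keep_row` helper are illustrative assumptions, not part of the dataset:

```python
# Illustrative sketch: filter translated NLI rows using the dataset's
# quality columns. The thresholds below are assumptions, not recommendations.

def keep_row(row, min_reverse_score=0.5, max_len_ratio=2.0):
    """Keep a row only if its reverse-entailment score is high enough and
    the translation length ratio is not suspiciously large or small."""
    ok_score = row["reverse_entailment_score"] >= min_reverse_score
    ok_len = 1.0 / max_len_ratio <= row["len_ratio"] <= max_len_ratio
    return ok_score and ok_len

# A toy row mimicking the card's feature schema (all values are made up).
row = {
    "premise": "A man is playing a guitar.",
    "hypothesis": "A person is making music.",
    "label": "entailment",
    "source": "snli",
    "split": "train",
    "premise_ru": "Мужчина играет на гитаре.",
    "hypothesis_ru": "Человек играет музыку.",
    "reverse_entailment_score": 0.93,
    "len_ratio": 1.1,
    "idx": 0,
}

print(keep_row(row))  # True for this toy row
```

The same predicate could be passed to `datasets.Dataset.filter` after loading the dataset with the 🤗 `datasets` library.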

Here are the sizes of the source datasets included in the different splits:

| source      |  train |   dev |  test |
|-------------|-------:|------:|------:|
| add_one_rte |   4991 |   387 |     0 |
| anli_r1     |  16946 |  1000 |  1000 |
| anli_r2     |  45460 |  1000 |  1000 |
| anli_r3     | 100459 |  1200 |  1200 |
| copa        |    800 |   200 |     0 |
| fever       | 162330 | 20478 | 20343 |
| help        |  29347 |  3355 |  3189 |
| iie         | 281643 | 31232 |     0 |
| imppres     |  10179 |  7661 |  7660 |
| joci        |   8412 |   939 |     0 |
| mnli        | 392662 | 19647 |     0 |
| monli       |   2186 |   269 |   223 |
| mpe         |   9000 |  1000 |     0 |
| qnli        | 108436 |  5732 |     0 |
| scitail     |  24900 |  2126 |     0 |
| sick        |   9500 |   500 |     0 |
| snli        | 549297 |  9831 |     0 |

Most of the original data were taken from the repository `felipessalvatore/NLI_datasets`.

More information needed.